From Design to Dissemination: Conducting Quantitative Medical Education Research

Erika L. Abramson, MD MSa; Caroline R. Paul, MDb; Jean Petershack, MDc; Janet Serwint, MDd; Janet E. Fischel, PhDe; Mary Rocha, MDf; Meghan Treitz, MDg; Heather McPhillips, MD MPHh; Tai Lockspeiser, MD MHPEi; Patricia Hicks, MD MHPEj; Linda Tewksbury, MDk; Margarita Vasquez, MDl; Daniel J. Tancredi, PhDm; Su-Ting T. Li, MD MPHn
a Erika L. Abramson, MD MS (corresponding author), Weill Cornell Medicine, Departments of Pediatrics and Healthcare Policy & Research, 525 E. 68th Street, Rm M610A, New York, NY 10065. Phone: 212-746-3929. Fax: 212-746-3140. Email: [email protected]

b Caroline R. Paul, MD, University of Wisconsin School of Medicine and Public Health, Department of Pediatrics. Email: [email protected]

c Jean Petershack, MD, University of Texas Health Science Center at San Antonio, Department of Pediatrics. Email: [email protected]

d Janet Serwint, MD, Johns Hopkins University School of Medicine, Department of Pediatrics. Email: [email protected]

e Janet E. Fischel, PhD, Stony Brook University School of Medicine, Department of Pediatrics. Email: [email protected]

f Mary Rocha, MD, Baylor College of Medicine, Department of Pediatrics. Email: [email protected]

g Meghan Treitz, MD, University of Colorado School of Medicine, Department of Pediatrics. Email: [email protected]

h Heather McPhillips, MD MPH, University of Washington, Seattle Children's Hospital. Email: [email protected]

i Tai Lockspeiser, MD MHPE, University of Colorado School of Medicine, Department of Pediatrics. Email: [email protected]

j Patricia Hicks, MD MHPE, Perelman School of Medicine at the University of Pennsylvania, Department of Pediatrics. Email: [email protected]

k Linda Tewksbury, MD, New York University School of Medicine, Department of Pediatrics. Email: [email protected]

l Margarita Vasquez, MD, University of Texas Health Science Center at San Antonio, Department of Pediatrics. Email: [email protected]

m Daniel J. Tancredi, PhD, University of California, Davis, Department of Pediatrics and the Center for Healthcare Policy and Research. Email: [email protected]

n Su-Ting T. Li, MD MPH, University of California, Davis, Department of Pediatrics. Email: [email protected]

Key Words: medical education, research
Running title: Conducting quantitative medical education research
Abstract Word Count: 221
Main Text Word Count: 4969
Funding: This work was not supported by any funding source.

Conflicts of Interest: The authors have no conflicts of interest to disclose.
Abstract

Rigorous medical education research is critical to effectively develop and evaluate the training we provide our learners. Yet many clinical medical educators lack the training and skills needed to conduct high-quality medical education research. This paper offers guidance on conducting sound quantitative medical education research. Our aim is to equip readers with the key skills and strategies necessary to conduct successful research projects, highlighting new concepts and controversies in the field. We utilize Glassick's criteria for scholarship as a framework to discuss strategies for ensuring that the research question of interest is worthy of further study and for using existing literature and conceptual frameworks to strengthen a research study. Through discussions of the strengths and limitations of commonly used study designs, we expose the reader to particular nuances of these decisions in medical education research and discuss the outcomes typically measured, as well as strategies for determining the significance of the resulting findings. We conclude with information on critiquing research findings and preparing results for dissemination to a broad audience. Practical planning worksheets and comprehensive tables illustrating key concepts are provided to guide researchers through each step of the process. Medical education research provides wonderful opportunities to improve how we teach our learners, to satisfy our own intellectual curiosity, and ultimately, to enhance the care provided to patients.
Introduction

Rigorous medical education research provides evidence for approaches to improve the education of our learners and shares with clinical, basic, and translational research the ultimate goal of improved patient outcomes. Medical education research can provide data to affirm that we are training competent physicians. Evidence-based decisions, grounded in rigorous medical education research, can guide changes in the delivery of medical education in order to assure that the educational "product," the trainee, is best prepared for the practice of medicine.1

At the same time, conducting a well-designed quantitative medical education research study requires attention to factors quite different from those in traditional clinical research. For example, the effectiveness of educational interventions often results from a complex interplay between the learner, the educator, and the educational and clinical learning environment. As these factors may vary significantly across programs, conducting large-scale studies and demonstrating generalizability of findings can be challenging. Within individual programs, bleed or cross-contamination between learners who have and have not received an intervention can threaten one's ability to demonstrate an intervention's effectiveness, as can the natural maturation that tends to occur as learners gain experience and knowledge over time. Also important are institutional review board (IRB) considerations when administering an educational intervention that may impact only a subset of learners, or when trainees are the study subjects and may fear negative consequences for declining consent.

Unfortunately, many clinical medical educators lack the training needed to conduct high-quality medical education research, and the quality of reporting in experimental medical education research has been modest at best.2-4 It is imperative to fill this skill gap because quality medical education research has the potential to improve several facets of the complex educational process: teaching and learning strategies, curriculum development and evaluation methods, health care delivery, and, ultimately, patient outcomes.

To address these challenges, we have developed this paper on how to conduct meaningful and rigorous quantitative medical education research. We use Glassick's criteria for scholarship (clear goals, adequate preparation, appropriate methods, significant results, effective presentation, and reflective critique) as essential components of a framework that can be used to answer any research question.5 This paper fills important gaps through its discussion of conceptual frameworks, its focus on the nuances and challenges that govern methodologic and ethical considerations in medical education research, and its provision of practical tools and suggestions for those embarking on a research project.6,7 Qualitative research and studies with mixed methodologies are increasingly recognized for their considerable value in medical education scholarship, including for identifying worthwhile hypotheses or the most relevant measures for further studies. Coverage of both quantitative and qualitative research methods in a single paper would be prohibitively expansive; this paper therefore focuses on frequently utilized quantitative methods. The reader is referred to an excellent primer on qualitative medical education research for guidance with such designs.8
Before you Begin: Acquiring Skills to Conduct Medical Education Research

Prior to beginning any research endeavor, every researcher must reflect on his or her own skill set and that of potential collaborators. Consider working with content and methodologic mentors within and outside your institution to facilitate planning and conducting your work. Medical education scholarship workshops at national meetings such as the Pediatric Academic Societies (PAS), the Association of Pediatric Program Directors (APPD), or the Council on Medical Student Education in Pediatrics (COMSEP) offer practical approaches to designing and conducting medical education research. Programs such as the Academic Pediatric Association (APA) Educational Scholars Program, the Association of American Medical Colleges (AAMC) Medical Education Research Certificate Program, and the APPD Leadership in Educational Academic Development program offer more formal training.9-11 People interested in more rigorous training may benefit from pursuing a master's degree or a PhD in medical education, which can often be completed remotely while working full-time.
Clear Goals – What do you want to Accomplish?

Sound scholarship starts with a sound idea. We outline below an approach for developing an idea into a compelling research argument, transforming it into a research question guided by conceptual frameworks, and developing a study design and analysis guided by the question. While we present this as a linear process for the sake of clarity, the process is highly iterative.
Crafting a Compelling Research Argument

To be worthy of dissemination, research must address a compelling and widely shared or recognized problem. For example, suppose you want to improve the rates of successful neonatal intubation by trainees. Articulation of a compelling problem might include: (1) residents are mostly unsuccessful when attempting neonatal intubation, potentially jeopardizing patient outcomes; (2) this is true despite conventional neonatal resuscitation program participation; and (3) few residents have opportunities to intubate patients.12 Therefore, an intervention that successfully addresses these issues will likely be of interest to many residency programs.
Conceptual Frameworks

With a compelling problem and idea in hand, a key step is to consider the educational philosophies that may illuminate the approach to addressing the problem. Georges Bordage describes these philosophies, known as conceptual frameworks, as "ways of thinking about a problem or a study, or ways of representing how complex things work the way they do."13 Appropriate use of conceptual frameworks in educational research is increasingly recognized as key to advancing the rigor of quantitative studies.14 One or multiple conceptual frameworks may be used, and they should be made explicit in order to afford readers greater understanding of what guided particular decisions and how findings might translate to other educational contexts. See Table 1 for examples of conceptual frameworks often used to frame teaching interventions.

Consider the neonatal resuscitation example. The theory of deliberate practice may inform an intervention to improve rates of successful neonatal intubation.15 Deliberate practice consists of individualized training activities designed to improve specific aspects of an individual's performance through repetition, feedback, and successive refinement.15 Applying this conceptual model may help you refine and frame your question: What is the optimal frequency and duration of practice? What is the optimal frequency of feedback?

As an example from the literature, Hunt et al. were interested in using a novel simulation approach to improve pediatric resident performance of cardiopulmonary resuscitation (CPR).44 In traditional simulation interventions, learners are given a scenario, progress through the entire scenario, and then debrief afterwards about what was done effectively or ineffectively. In this study, however, the authors explicitly drew upon the theory of deliberate practice to test the idea that a simulation-based intervention would be more effective if mistakes were corrected in real time, in order to maximize the time spent deliberately practicing a skill performed in the correct way. Thus, they designed their educational simulation intervention as "rapid cycle deliberate practice training," in which residents were given a scenario, performed a hands-on skill, received immediate coaching to correct mistakes, and continued performing the hands-on skill correctly to create muscle memory. This novel approach, guided by the use of a common educational conceptual framework, was associated with improvement in performance compared to traditional simulation methods.44
Formulating the Research Question

Adapting the PICOTS format that has been used successfully in evidence-based medicine can help you develop a clear and focused research question. PICOTS includes population, intervention, relevant comparison group(s), outcome(s) assessed, timing, and setting (of the outcome assessment).16 For our example, the research question might be: Among pediatric residents from a medium-sized academic residency program, does participation in weekly 30-minute neonatal intubation simulation during neonatology rotations, compared with participation in an annual standard neonatal resuscitation training program, result in higher rates of successful intubation of patients at the end of the academic year? Our hypothesis may be: implementing weekly 30-minute neonatal intubation simulation sessions during neonatology rotations will lead to mastery of neonatal intubation more effectively than annual standard neonatal resuscitation programs. Hypothesis testing, a hallmark of clinical and bench research, is equally important for rigorous quantitative educational scholarship.

The strength of a research question can be measured with the I-SMART criteria (an acronym for important, specific, measurable, achievable, relevant, and timely), which connect well to the first Glassick criterion, clear goals (Table 2).17 Given rigorous publication standards, it is prudent to consider the I-SMART question as the "outcome-driven I-SMART question." A classic paradigm often utilized for educational outcomes is Kirkpatrick's Learning Evaluation Model, which consists of 4 levels of educational outcomes: Level 1, reaction (learner satisfaction); Level 2, learning (attitude, knowledge, and skill acquisition); Level 3, behavior change; and Level 4, patient impact (Table 3).18 Aim as high on the pyramid as possible. Research with lower outcome levels may be publishable if highly innovative, while less innovative studies generally require higher outcome levels. Importantly, one cannot assume that changes in knowledge, skills, and attitudes in a training or research context will have a direct impact on behavior in the clinical setting or on patient outcomes. Perspectives from quality improvement, medical education, and behavioral economics all support this assertion.19-22 Measuring behavior change in situ and measuring relevant patient outcomes are challenging but critical for advancing the field of medical education research.
Adequate Preparation – Are you ready to do this project?

Glassick's second criterion, adequate preparation, ensures that the research question satisfies the relevant element of I-SMART. Adequate preparation includes: (1) understanding what has already been studied and where the gaps lie through literature review; (2) acquiring necessary skills, resources, and collaborators; and (3) ensuring institutional review board (IRB) approval or exemption.

Literature Review
Before embarking on your study, you must understand what is known and unknown about a particular problem and how well your question addresses these gaps. A medical librarian is a valuable resource to assist with this process. Typical databases include MEDLINE, PubMed, Google Scholar, Scopus, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and PsycINFO. Education-specific resources include the Education Resources Information Center (ERIC) [articles, books, theses, and guidelines in education], the EdIT Library [articles about education and information technology], MedEdPORTAL [a repository of peer-reviewed educational tools and curricular materials], MERLOT [a repository of peer-reviewed educational materials for all of higher education], the Association of Pediatric Program Directors (APPD) Share Warehouse [available to APPD members], the British Education Index, and the Best Evidence Medical Education Collaboration. Critical appraisal of the literature follows, to determine whether existing studies are current and comprehensive, methodologically sound, and broadly generalizable. If your idea has already been studied, consider whether your project offers a different perspective, such as a different level of learner, an expanded sample size, newer and more accurate measurement tools, or an innovative teaching technique.
Acquire Necessary Skills and Resources

Building a research team with the necessary skills and expertise is key to any project's success. Statisticians, content and methodological mentors, physician-scientists, and non-clinical research faculty can all serve as outstanding partners. Trainees or junior faculty who can contribute in a substantial manner and, in turn, benefit from collaborative efforts can also be valuable. It is also important to identify the costs to complete your project and potential funding sources if necessary. These sources may include local funding opportunities; educational grants available through national organizations such as the American Academy of Pediatrics (AAP), APA, APPD, and COMSEP; or larger foundation and federal grants (particularly for projects assessing patient outcomes).
Institutional Review Board (IRB) Considerations

It is essential to receive formal approval through the IRB prior to beginning any study. However, there are several unique considerations for educational research.23 First, educational research is often considered exempt if it is conducted in established or commonly accepted educational settings, involves normal educational strategies, and uses de-identified participant information.24 Thus, comparing different instructional techniques and curricula through test scores, performance evaluations, or de-identified surveys is often IRB exempt. Some institutions may even have a blanket IRB exemption for all activities considered education research.

On the other hand, learners are a vulnerable population at risk for coercion for many reasons, including that they often interact with or are evaluated by those conducting the research study. Therefore, protocols must clearly describe the process of informed consent and detail how learners can refuse to participate or feel free to give honest opinions without fear of retribution, while still being able to participate in all other aspects of any educational intervention that will be implemented broadly. Consideration must also be given to who will be collecting data, so that learners do not feel unfairly scrutinized by those in a position of authority. In addition, concerns about fairness and equity in education may make some IRBs reluctant to approve studies in which only some trainees receive an educational intervention; such IRBs may be more likely to approve studies utilizing historical controls, cluster-randomized controls by program, or crossover study designs.25 An additional challenge may occur when multiple institutions participate in the same educational research study. IRB approval at one institution may suffice; often, however, each institution requires its own individual approval.
Appropriate Methods – How will you design your project to achieve your specific aims?

The saying "begin with the end in mind" applies to the selection of methods for a research study. Some projects aim to generate hypotheses; others, to test them. In this section, we introduce some of the more common terminology and quantitative designs used in educational research, using our neonatal resuscitation example (Table 4). We focus primarily on methodologies used for hypothesis testing, where the goal is to obtain measurable outcomes to answer a question (often described as a positivist paradigm). While this paradigm tends to be predominant in medical education, other paradigms, such as constructivist paradigms, in which subjective experiences are emphasized, are also well suited to medical education research.

Let us begin with this scenario: A program director, Dr. Smith, is concerned about the effectiveness of the training her residents receive in neonatal resuscitation. While eager to design an actual intervention to address this concern, she recognizes that she does not have a good understanding of the true nature of the underlying problem and wants to explore it through a descriptive study. Her research question might be: What are the facilitators and barriers affecting pediatric residents' comfort with and skill in performing neonatal resuscitation? To answer this question, she may perform a cross-sectional study in which all data are collected at one time point, such as through a survey with close-ended questions. Her a priori hypotheses can be explored by having residents self-assess comfort and indicate agreement or disagreement with pre-identified barriers and facilitators listed in the survey instrument.
The results of a descriptive study often provide useful background to generate hypotheses for further explanatory studies. Perhaps the descriptive research revealed that residents are predominantly struggling with the inter-professional teamwork aspect of conducting a neonatal resuscitation. To address this challenge, Dr. Smith decides to introduce a new curriculum that focuses on facilitating the teamwork of residents and inter-professional staff during high-fidelity simulation training with mannequins. Her research question might be: Does introduction of a curriculum to facilitate inter-professional teamwork during high-fidelity neonatal resuscitation training improve resident resuscitation skills and ability to work in inter-professional teams in the delivery room?

Several different explanatory study designs could be utilized to answer this question, each with distinct strengths and limitations. Cohort studies follow one or more populations of learners longitudinally with respect to an outcome of interest. They are observational rather than experimental; in other words, no intervention is implemented to assess the outcome of interest. Cohort studies cannot show causality, and thus, in this case, a cohort study would be unlikely to successfully answer Dr. Smith's research question.

There are, however, quasi-experimental and experimental designs that Dr. Smith can consider. Experimental designs are characterized by the investigator being able to assign participants to specific interventions or conditions, ideally using rigorous randomization methods to ensure a statistically equivalent comparison group. Quasi-experimental designs share attributes of experimental designs but typically lack a randomized control group, which makes them vulnerable to internal validity threats. One such quasi-experimental design is the pre-post design, in which a baseline pre-intervention assessment and a post-intervention assessment of the learner are conducted. The main limitation of this design is the lack of a concurrent comparison group. While you may demonstrate changes within each learner with a pre-post design, it is difficult to attribute these to the intervention, because other confounding factors, such as natural professional maturation, may impact learner performance at the post-intervention assessment.

Another quasi-experimental approach is the pre/post design with nonequivalent parallel groups. In this approach, pre-intervention and post-intervention data are collected on "intervention" and "control" groups, respectively. However, these groups are naturally existing, such as residents in different academic years or at different institutions; they are not randomly allocated. Returning again to our example, if Dr. Smith opts for this quasi-experimental approach, she might have all interns complete the new curriculum and compare their performance at baseline and at the end of intern year to the performance of external NICU rotators who experience the same resuscitation training but are not exposed to the new curriculum. To address one of the main limitations of this study design (that the two groups are fundamentally different from each other), she might collect information on potential confounding factors, such as baseline comfort with and exposure to intubations and resuscitations, which could be used in restricting the sample or in statistical analysis procedures to minimize bias from measured confounders. Another significant limitation of this quasi-experimental study design is "bleeding" between the control and intervention groups, which often occurs when residents work together; this can be limited by choosing historical controls or controls at separate sites that have limited interaction with the intervention group.
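As one hedged illustration of how such a pre/post design with nonequivalent groups might be analyzed, the minimal Python sketch below compares pre-to-post gain scores between the two groups. The data file, column names, and choice of Welch's t-test on gain scores are assumptions for the example, not a prescribed analysis; your statistician may prefer a regression-based approach that adjusts for baseline covariates.

```python
# Minimal sketch: comparing gain scores in a pre/post design with
# nonequivalent parallel groups. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("teamwork_scores.csv")  # one row per learner: group, pre_score, post_score
df["gain"] = df["post_score"] - df["pre_score"]  # within-learner change

intervention_gain = df.loc[df["group"] == "intervention", "gain"]
control_gain = df.loc[df["group"] == "control", "gain"]

# Welch's t-test on gain scores: did the intervention group improve more?
t_stat, p_value = stats.ttest_ind(intervention_gain, control_gain, equal_var=False)
print(f"Mean gain: intervention {intervention_gain.mean():.2f}, "
      f"control {control_gain.mean():.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```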
A third option is the randomized controlled trial, in which subjects are randomly allocated to an "intervention" or a "control" arm. While this design is often considered the gold standard in clinical research, it can be particularly challenging in medical education, as sample sizes are typically small and random assignment may be difficult or impossible due to deliberate placement of trainees on specific rotations. Randomization in a small cohort of students working closely together may also result in exchange of information, which can be difficult to prevent but important to note in any limitations section. In addition, with all experimental study designs, educators may need to consider how to offer an educational intervention to all learners if it proves beneficial; one can design a study that includes a plan to train or teach the "control" participants the newly studied skill at the conclusion of the trial, or, with IRB approval, train the "control" participants at the conclusion of the trial and once again measure their performance on the outcomes of interest at some end point.
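For the random allocation step itself, a minimal sketch is shown below. The intern roster, seed, and simple two-arm split are hypothetical; in small cohorts, stratified or blocked randomization may be preferable to balance group sizes and key characteristics.

```python
# Minimal sketch: simple randomization of a hypothetical intern roster to two arms.
import random

rng = random.Random(20170101)  # fixed seed makes the allocation reproducible and auditable
interns = [f"Intern{i:02d}" for i in range(1, 25)]  # hypothetical roster of 24 interns

shuffled = interns[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2
allocation = {name: ("intervention" if i < half else "control")
              for i, name in enumerate(shuffled)}

for name in interns:
    print(name, allocation[name])
```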
Sample Size Planning. Sample size justifications typically consider the precision necessary to satisfactorily estimate the parameters of interest, or the sample size necessary to provide good statistical power (i.e., a high probability of detecting meaningful effects). Ensuring that you will have an adequate sample size is critical to obtaining meaningful results. One of the major challenges in educational research is that settings such as residency programs are often limited in size and representativeness. As a result, research projects should build in, from the beginning, a long-term plan for scaling up beyond a simple pilot, extending either to multiple sites or across multiple time points.
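As one illustration of a power-based sample size justification, the sketch below estimates the per-group sample size needed to detect a difference between two assumed intubation success rates. The rates, power, and alpha are placeholders for the example, not values from this paper; if the available cohort falls short of the estimate, that gap itself argues for the multi-site or multi-year scale-up described above.

```python
# Minimal sketch: per-group sample size for comparing two proportions at 80% power.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control, p_intervention = 0.40, 0.65  # assumed success rates (hypothetical)
effect_size = proportion_effectsize(p_intervention, p_control)  # Cohen's h

n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, ratio=1.0)
print(f"About {n_per_group:.0f} residents needed per group")
```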
Study Registration. Registration of a study prior to data collection in a well-recognized study registry (e.g., clinicaltrials.gov) can be an essential element in promoting the rigor and transparency of the study, and for some study designs (e.g., clinical trials, including educational randomized controlled trials, and systematic reviews) it is required by publishers.

Significant Results – Are your Results Meaningful?
Meaningful results are both internally valid (derived from an appropriate research design and analysis to answer the important research question) and externally valid (replicable and generalizable to other populations and settings). Design your analysis to answer your research question. For quantitative studies, choose your study design, outcome variables (often called dependent variables), measurement approaches, and analysis methods in collaboration with your statistician (a partnership that cannot be emphasized enough!). Remember that lack of statistical significance may have multiple explanations: (1) there truly is no difference between the groups; (2) there is a difference between the groups, but it cannot be detected given your sample size and the variability of the outcome measure of interest (a Type 2 error); and (3) there are confounding factors that are not controlled or not equivalent across groups. To properly interpret your results, your study must be appropriately powered to detect a difference between groups.

Collect outcome variables and potential confounding variables appropriate to your research question. Confounding variables are variables that could correlate with both the outcomes you are trying to measure (dependent variables) and your intervention or exposure (independent variable). For example, if you are studying whether a just-in-time neonatal intubation simulation is associated with increased success in intubations, you may want to include potential confounders such as the size and gestational age of the infant.
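To make the confounding example concrete, the sketch below adjusts for the two confounders named above in a logistic regression. The data file and variable names are hypothetical, and the model is only a sketch; a statistician should guide the actual specification.

```python
# Minimal sketch: logistic regression of intubation success on the exposure
# plus measured confounders. File and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("intubation_attempts.csv")  # one row per intubation attempt

# success (0/1) ~ exposure + infant size (birth weight) + gestational age
model = smf.logit(
    "success ~ just_in_time_sim + birth_weight_kg + gestational_age_wk",
    data=df).fit()
print(model.summary())  # coefficients with 95% confidence intervals
```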
Measure data using instruments with validity evidence to address your research question.26 Validity evidence includes content validity, response process, internal structure, relationship to other variables, and consequences.26 If you conduct a research project that does not yield the outcomes you are seeking, it could be that the outcome you sought did not actually occur in your study. Alternatively, it could be that your measurement instrument was insensitive to outcomes or changes that did indeed take place. You will not be able to untangle those two explanations, so it is important to use instruments with validity evidence.
When a measurement instrument with validity evidence does not already exist, you might consider developing and validating a measurement instrument to disseminate for further use. For example, if designing a survey, consider whether the survey questions are based on the literature and agreed upon by content experts in the field (content validity); examined via cognitive interviews to ensure the survey items are clear, relevant, and correctly interpreted (response process); and piloted to ensure that survey scores measuring the same construct are correlated (internal structure) and relate to other measures in ways that make sense (relationship to other variables).27 These concepts should be assessed for pre-existing as well as de novo surveys. Returning to the example detailed in the methods section, Dr. Smith could only perform her research effectively if she had a valid instrument with which to assess resident performance in resuscitation skills and resident ability to work effectively as part of an inter-professional team.
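As one small example of an internal-structure check during piloting, the sketch below computes Cronbach's alpha for a set of survey items intended to measure a single construct. The file and item names are hypothetical, and alpha is only one of several internal-structure analyses one might run.

```python
# Minimal sketch: Cronbach's alpha for pilot survey items, computed from
# the standard item-variance formula. File and column names are hypothetical.
import pandas as pd

items = pd.read_csv("pilot_survey.csv")[["q1", "q2", "q3", "q4", "q5"]]

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)          # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # ~0.7 or higher is often considered acceptable
```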
Utilize data analysis appropriate for your research (Table 5).28 In most instances, instead of or in addition to reporting p-values, one should report parameter estimates (including treatment effect estimates) along with 95% confidence intervals or other measures of uncertainty.29,30 When making multiple comparisons, consider that as the number of comparisons increases, so too does the risk of incorrectly concluding that a comparison is statistically significant (a Type 1 error). Hence, it may be desirable to use methods for simultaneous statistical inference that can limit the overall probability of such an error for a family of comparisons.31
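The sketch below illustrates both recommendations: it reports a difference in means with its 95% confidence interval rather than a bare p-value, and then applies a Holm adjustment to a hypothetical family of p-values. All numbers are illustrative.

```python
# Minimal sketch: effect estimate with a 95% CI, then a multiple-comparison
# adjustment for a family of outcomes. All data are illustrative.
import numpy as np
from statsmodels.stats.weightstats import CompareMeans, DescrStatsW
from statsmodels.stats.multitest import multipletests

intervention = np.array([78, 85, 90, 72, 88, 95, 81, 79])  # hypothetical skill scores
control = np.array([70, 75, 80, 68, 77, 83, 72, 74])

cm = CompareMeans(DescrStatsW(intervention), DescrStatsW(control))
t_stat, p_value, _ = cm.ttest_ind(usevar="unequal")   # Welch's t-test
low, high = cm.tconfint_diff(usevar="unequal")        # 95% CI for the difference
print(f"Difference in means = {intervention.mean() - control.mean():.1f} "
      f"(95% CI {low:.1f} to {high:.1f}), p = {p_value:.3f}")

# If this is one of several outcomes, control the family-wise error rate (Holm).
family_p = [p_value, 0.021, 0.300, 0.048]  # hypothetical p-values for 4 outcomes
rejected, p_adjusted, _, _ = multipletests(family_p, alpha=0.05, method="holm")
print(p_adjusted)
```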
The more the study population represents the population at large, the more generalizable the results of the study. Single-site studies can be published if they add new information to the literature and address important, innovative questions. They may lead to multi-institutional studies, which can enhance the generalizability of the results. The APPD Longitudinal Educational Assessment Research Network (LEARN) and the APA Continuity Research Network (CORNET) are examples of collaborative research networks that enable generalizable pediatric educational research by supporting multi-institutional studies.32

Describe your sample in sufficient detail so that others can determine whether your study sample is similar to the learners in their setting. If you have information about the population at large, compare your study sample to that population. For example, if you are surveying pediatric program directors, compare your study sample (respondents) to non-respondents nationally in terms of program size and location.33 When making inferences about a population, sample the population in a manner that promotes generalizability and minimizes selection bias. For example, if you are studying whether graduating pediatric residents perceive themselves to be prepared to counsel patients on obesity prevention, sample from a population of all graduating pediatric residents, such as through the AAP Annual Survey of Graduating Residents.34 However, if your study question is to determine whether resident characteristics and experiences are related to practicing in underserved areas, and your hypothesis is that residents' backgrounds may affect their desire to practice in underserved areas, consider oversampling underrepresented minority residents.35
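As a small illustration of comparing a study sample with the population at large, the sketch below runs a chi-square goodness-of-fit test on one categorical characteristic, program size. All counts and proportions are hypothetical.

```python
# Minimal sketch: do survey respondents resemble the national distribution
# of program sizes? Counts and proportions are hypothetical.
from scipy import stats

respondents = [45, 60, 25]                 # small, medium, large programs in the sample
national_proportions = [0.35, 0.45, 0.20]  # assumed known distribution of all programs

expected = [p * sum(respondents) for p in national_proportions]
chi2, p_value = stats.chisquare(respondents, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # a small p suggests the sample differs
```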
Effective Presentation – How will you disseminate your findings?

Effective presentation and dissemination are critical to advancing the science of medical education. Furthermore, academic success and promotion are typically measured, in part, by the quality and quantity of peer-reviewed presentations and publications. Dissemination of your work builds a track record and creates a niche reflective of your research interests and expertise.

There are a number of ways to share your research findings. Initially, you may present your work at the departmental level as research-in-progress. Once you have results to share, consider presenting at local, regional, or national meetings. Through these venues, the feedback and potential mentoring you receive can help frame your writing plan and forge potential multisite collaborations that propel your work to a higher level of importance. Also consider whether your study lends itself to workshop development and delivery. Evaluations from such national workshops that demonstrate the effectiveness of the presentation are important to include as promotional capital.

Identifying the optimal journal for publication requires a critical look at the educational literature. Which journals are interested in this type of work? Which journals have published articles using similar methods? Which audience do you wish to reach? What is the impact factor of the journal? The impact factor is a proxy measure of a journal's relative importance in a field, reflecting the yearly average number of citations to recent articles published in the journal. In addition, if you have developed an innovative curriculum or an assessment tool, consider submitting your work to a resource portal such as MedEdPORTAL or the APPD Share Warehouse. Recognize that it may take several attempts to publish a single paper. Incorporating reviewer comments can strengthen a paper prior to resubmission (see Appendix 2, a checklist for manuscript submission, to facilitate the writing process).36,37 Also of note, for a variety of study designs, evidence-based reporting guidelines are available from the EQUATOR Network (www.equator-network.org). We strongly recommend that these guidelines be used in both the planning and the reporting of studies.
Reflective Critique – How will You Improve upon Your Work?

Glassick's final criterion requires scholars to critically evaluate their own work. This involves reflecting on the literature review, critiquing the findings, and considering next steps to advance your research. Your results should be viewed in the context of the gap identified for the study and should be carefully linked to previous studies and/or the conceptual framework(s) that you utilized. You must display a clear understanding not only of the results of the study but also of its strengths and limitations. Simply listing limitations does not suffice; it is important to consider how the limitations affect your results and their implications. Finally, a primary goal of educational scholarship is to create the foundation for future scholarship. Since no single study solves or resolves all issues around a topic, reflective critique must discuss the implications for educational practice as well as for future research to move the field forward.
Conclusions: How do we Continue to Advance the Field?

Quality educational research benefits our trainees, the faculty who teach them, our institutions, the medical education community at large, and ultimately, the patients we serve. When educators identify and disseminate best educational practices, they have the opportunity to greatly influence the next generation of physicians. Because of the continually changing healthcare environment, and the challenges that educational institutions face as they prepare, promote, and assess the knowledge, skills, and attitudes of their learners, educators need to continually evaluate educational programs and share best practices among institutions and across the training continuum.

Building the skills of clinical medical educators to conduct high-quality quantitative research is therefore critical to advancing education at the pace we need. Educational research has made great advances in the past 10 years, as demonstrated in part by an increase in the number of published articles as well as in the number of journals devoted specifically to medical education.14 However, important gaps remain. Indeed, weak study questions and designs were the primary reasons that manuscripts were not sent out for review by the editors of Academic Medicine.38

How do we continue to elevate the rigor of medical education research? First, educational studies must be carefully designed prior to implementing educational interventions, so that the rigor of the study can be optimized at the outset. Second, conceptual frameworks should guide the development of educational interventions and their assessments. Quality checklists have been proposed that can be used by authors, editors, and consumers of medical education research to help ensure the rigor of research studies.14,39 In the past these did not explicitly include conceptual frameworks, while more current checklists do. Third, in addition to assessing the effects of interventions on learner knowledge, skills, attitudes, and behaviors, more research must focus on the processes that facilitate the application of learning and behavior, as well as on the impact of interventions on patients and on organizational goals and outcomes. These principles have recently been endorsed as part of the New World Kirkpatrick Model.40 Fourth, within pediatrics, there is growing infrastructure to support multi-site studies, such as the Association of Pediatric Program Directors Longitudinal Educational Assessment and Research Network (APPD LEARN) and CORNET.32 Researchers should take advantage of such resources in order to ensure that research findings are generalizable across settings. Lastly, journal readers, authors, and editors alike must continue to demand these high standards of medical education research in order to ensure that these standards become routine and that we continue to innovate within our profession.
References

1. Klein MD, Li ST. Building on the shoulders of giants: a model for developing medical education scholarship using I-PASS. Acad Pediatr. 2016;16:499-500.
2. Levinson W, Rubenstein A. Integrating clinician-educators into academic medical centers: challenges and potential solutions. Acad Med. 2000;75:906-912.
3. Cook DA, Beckman TJ, Bordage G. Quality of reporting of experimental studies in medical education: a systematic review. Med Educ. 2007;41:737-745.
4. Castiglioni A, Aagaard E, Spencer A, et al. Succeeding as a clinician educator: useful tips and resources. J Gen Intern Med. 2013;28:136-140.
5. Glassick CE, Huber MT, Maeroff GI. Scholarship Assessed: Evaluation of the Professoriate. San Francisco, CA: Jossey-Bass; 1997.
6. Yarris LM, Deiorio NM. Education research: a primer for educators in emergency medicine. Acad Emerg Med. 2011;18(suppl 2):S27-S35.
7. Beckman TJ, Cook DA. Developing scholarly projects in education: a primer for medical teachers. Med Teach. 2007;29:210-218.
8. Hanson JL, Balmer DF, Giardino AP. Qualitative research methods for medical educators. Acad Pediatr. 2011;11:375-386.
9. Jerardi KE, Mogilner L, Turner T, Chandran L, Baldwin CD, Klein M. Investment in faculty as educational scholars: outcomes from the National Educational Scholars Program. J Pediatr. 2016;171:4-5.e1.
10. Medical Education Research Certificate Program. Association of American Medical Colleges.
11. Trimm F, Caputo G, Bostwick S, et al. Developing leaders in pediatric graduate medical education: the APPD LEAD Program. Acad Pediatr. 2015;15:143-146.
12. Campbell DM, Barozzino T, Farrugia M, Sgro M. High-fidelity simulation in neonatal resuscitation. Paediatr Child Health. 2009;14:19-23.
13. Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43:312-319.
14. Sullivan GM, Simpson D, Cook DA, et al. Redefining quality in medical education research: a consumer's view. J Grad Med Educ. 2014;6:424-429.
15. Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100:363-406.
16. Centre for Evidence-Based Medicine. Asking focused questions. Nuffield Department of Primary Care Health Sciences, University of Oxford.
17. Doran GT. There's a S.M.A.R.T. way to write management's goals and objectives. Manage Rev. 1981;70:35-36.
18. Kirkpatrick D. Great ideas revisited. Techniques for evaluating training programs. Revisiting Kirkpatrick's four-level model. Training and Development. 1996;50:54-59.
19. Hostetter M, Klein S. In focus: using behavioral economics to advance population health and improve the quality of health care services. Quality Matters Archive. The Commonwealth Fund; 2013.
20. Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282:867-874.
21. Innovations Exchange Team. Sustaining and spreading quality improvement. AHRQ Health Care Innovations Exchange; 2014.
22. Mayer-Mihalski N, D M. Effective education leading to behavior change. ParagonRx; 2009.
23. Keune JD, Brunsvold ME, Hohmann E, Korndorffer JR Jr, Weinstein DF, Smink DS. The ethics of conducting graduate medical education research on residents. Acad Med. 2013;88:449-453.
24. Miser WF. Educational research: to IRB, or not to IRB? Fam Med. 2005;37:168-173.
25. Kraus CK, Guth T, Richardson D, Kane B, Marco CA. Ethical considerations in education research in emergency medicine. Acad Emerg Med. 2012;19:1328-1332.
26. Downing SM. Validity: on the meaningful interpretation of assessment data. Med Educ. 2003;37:830-837.
27. Rickards G, Magee C, Artino AR Jr. You can't fix by analysis what you've spoiled by design: developing survey instruments and collecting validity evidence. J Grad Med Educ. 2012;4:407-410.
28. Windish DM, Diener-West M. A clinician-educator's roadmap to choosing and interpreting statistical tests. J Gen Intern Med. 2006;21:656-660.
29. du Prel JB, Hommel G, Röhrig B, Blettner M. Confidence interval or p-value? Part 4 of a series on evaluation of scientific publications. Dtsch Arztebl Int. 2009;106:335-339.
30. Cummings P, Rivara FP. Reporting statistical information in medical journal articles. Arch Pediatr Adolesc Med. 2003;157:321-324.
31. Gelman A, Hill J, Yajima M. Why we (usually) don't have to worry about multiple comparisons. J Res Educ Eff. 2012;5:189-211.
32. Schwartz A, Young R, Hicks PJ. Medical education practice-based research networks: facilitating collaborative research. Med Teach. 2016;38:64-74.
33. Abramson EL, Naifeh MM, Stevenson MD, et al. Research training among pediatric residency programs: a national assessment. Acad Med. 2014;89:1674-1680.
34. Frintner MP, Liebhart JL, Lindros J, Baker A, Hassink SG. Are graduating pediatric residents prepared to engage in obesity prevention and treatment? Acad Pediatr. 2016;16:394-400.
35. Laraque-Arena D, Frintner MP, Cull WL. Underserved areas and pediatric resident characteristics: is there reason for optimism? Acad Pediatr. 2016;16:401-410.
36. Durning SJ, Carline JD, eds. Review Criteria for Research Manuscripts. Association of American Medical Colleges; 2015.
37. Li ST, Klein M, Gusic M, Vinci R, Szilagyi P. Crossing the finish line: getting your medical education work published. Workshop presented at: Pediatric Academic Societies Meeting; 2016.
38. Meyer HS, Durning SJ, Sklar D, Maggio LA. Making the first cut: an analysis of Academic Medicine editors' reasons for not sending manuscripts out for external peer review. Acad Med. 2017.
39. Reed DA, Beckman TJ, Wright SM, Levine RB, Kern DE, Cook DA. Predictive validity evidence for medical education research study quality instrument scores: quality of submissions to JGIM's Medical Education Special Issue. J Gen Intern Med. 2008;23:903-907.
40. Moreau KA. Has the new Kirkpatrick generation built a better hammer for our evaluation toolbox? Med Teach. 2017;39:999-1001.
41. Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice-Hall; 1986.
42. Young HN, Schumacher JB, Moreno MA, et al. Medical student self-efficacy with family-centered care during bedside rounds. Acad Med. 2012;87:767-775.
43. Ericsson KA. Acquisition and maintenance of medical expertise: a perspective from the expert-performance approach with deliberate practice. Acad Med. 2015;90:1471-1486.
44. Hunt EA, Duval-Arnould JM, Nelson-McMillan KL, et al. Pediatric resident resuscitation skills improve after "rapid cycle deliberate practice" training. Resuscitation. 2014;85:945-951.
45. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Upper Saddle River, NJ: Prentice Hall; 1984.
46. Klein M, Vaughn LM. Teaching social determinants of child health in a pediatric advocacy rotation: small intervention, big impact. Med Teach. 2010;32:754-759.
Table 1: Examples of Conceptual Frameworks Used in Medical Education

Conceptual framework: Bandura's Social Cognitive Theory41
Description: People learn from one another by observing and imitating others' behavior. Self-efficacy is an important prerequisite guiding behavior. Self-efficacy can be supported by observing role models, having opportunities to practice a behavior, and receiving feedback on performance.
Example of use: Used cross-sectional surveys to explore factors that supported self-efficacy with family-centered care among third-year medical students during their pediatric clerkship.42

Conceptual framework: Ericsson's Theory of Deliberate Practice43
Description: Individualized training activities designed to improve specific aspects of an individual's performance through repetition, immediate feedback, and successive refinement.
Example of use: Prospective pre-post intervention study of pediatric resident rapid cycle deliberate practice of resuscitation skills, with immediate feedback and the opportunity to "pause, rewind 10 s and try it again" in simulated cardiopulmonary arrest scenarios.44

Conceptual framework: Kolb's Experiential Learning Cycle45
Description: Learning happens through transforming experience via a 4-stage learning cycle. Concrete Experience: do something. Reflective Observation: think about what you did. Abstract Conceptualization: make sense of what you experienced through developing theories. Active Experimentation: put what you learned into practice.
Example of use: Development of a curriculum to teach social determinants of health using experiential learning (e.g., a "field trip" to a food bank), followed by reflection on the experience and development of theory through abstract conceptualization (residents asked to reflect on the learning experience and how it will influence their clinical practice through a "Memo-to-Myself" exercise), and testing of hypotheses (applying what they learned to their care of underserved patients in clinic).46
Table 2: Elements of an I-SMART Research Question

Important: Is the question important to you and others in your field?
Specific: Is the question specific? Can it be distilled down further? Will it stand on its own?
Measurable: Is there a measurable outcome (or outcomes) for the study? Can you collect the data variables necessary to study the outcome you wish to measure?
Achievable: Do you have the resources (research team, mentorship, funding, time, etc.) to successfully complete your project?
Relevant (not rehashing): Will the results add new information to the literature? Will the results add to the depth and breadth of the current literature, or will they simply restate what is already known?
Timely: Can the study be completed in a time frame that is reasonable for you? For the audience? For granting agencies (if applicable)?
Table 3. Examples of Kirkpatrick's Pyramid of Educational Outcomes

Level 1: Reaction
Question: "Did they like it?" "What do they plan to do differently based on what they learned?"
Intubation simulation example: Survey participants on the usefulness of the intubation simulation session using a 5-point Likert scale. Survey participants on self-assessed comfort with intubation using a 5-point Likert scale.

Level 2: Learning (attitudes, knowledge, skills)
Question: "What did they learn?" "How much did they learn?"
Intubation simulation example: Attitudes: survey participants on whether they feel all pediatric residents should be competent at intubating neonates prior to graduation. Knowledge: test participants on medical knowledge of the indications, contraindications, risks, benefits, and mechanics of intubation (choosing an appropriately sized tube and blade, landmarks, etc.) before and after intubation simulation sessions. Skills: compare rates of successful first-attempt intubation on mannequins for participants in intubation simulation sessions versus nonparticipants.

Level 3: Behavior
Question: "Did it change behavior?"
Intubation simulation example: Determine rates of successful first-attempt intubation on neonates for participants in intubation simulation sessions versus nonparticipants.

Level 4: Patient outcomes
Question: "Did the behavior change affect patients?"
Intubation simulation example: Determine differences in morbidity/mortality of neonates intubated by participants versus nonparticipants.
Table 4: Common Quantitative Study Designs Used in Medical Education Research

Study goal: Describe/explore a group or phenomenon
Study type: Descriptive, quantitative
Example research question: What are the most common facilitators and barriers affecting pediatric resident comfort with performing neonatal resuscitation, from the point of view of the residents?
Example methods: Cross-sectional survey administered to residents at one point in time. Presents preset choices of barriers and facilitators that residents select or rank. Answers will include percentages, rank-order lists, and counts.
Study advantages: Provides descriptive data from one point in time. Less demand on resources, as it does not require follow-up.
Study disadvantages: Because data are collected at one point in time, often relies on recall of information, which is subject to bias. Cannot demonstrate causality.

Study goal: Test a hypothesis (explanatory)
Study type: Cohort study
Example research question: How likely is it that residents exposed to a neonatal simulation-based curriculum will choose a career in neonatology?
Example methods: Follow one group of residents longitudinally after exposure to the simulation curriculum to see how many go on to choose a career in neonatology. Can compare rates to a cohort of residents not exposed to the curriculum.
Study advantages: Takes advantage of naturalistic setting. Avoids learning from pretest.
Study disadvantages: Can only demonstrate associations, not causality.

Study goal: Test a hypothesis (explanatory)
Study type: Intervention-only, pre/posttest design (a quasi-experimental design because of the absence of a randomized comparison group)
Example research question: Does introduction of a novel, simulation-based neonatal resuscitation program involving residents and interprofessional staff improve resident resuscitation skills and ability to work in interprofessional teams in the delivery room?
Example methods: Conduct a pretest of residents prior to exposure to the new curriculum, assessing resuscitation skills (such as number of attempts per successful intubation and time to successful intubation) and teamwork skills (using a validated teamwork assessment scale). Deliver the curriculum. Administer the same assessments post-curriculum to assess change in performance.
Study advantages: No need for control. Can demonstrate change/gains.
Study disadvantages: Cannot attribute any change/gains to the intervention alone (time, practice, and other educational experiences may also factor in).

Study goal: Test a hypothesis (explanatory)
Study type: Pre/post design with nonequivalent parallel groups (a quasi-experimental design because of the absence of random assignment of individuals to study groups)
Example research question: Does introduction of a novel, simulation-based neonatal resuscitation program involving residents and interprofessional staff improve resident resuscitation skills and ability to work in interprofessional teams in the delivery room better than a traditional neonatal resuscitation program?
Example methods: Collect pre-intervention and post-intervention outcome data on two non-randomized study groups, control and intervention. Conduct a pretest of all interns assessing resuscitation skills (such as number of attempts per successful intubation and time to successful intubation) and teamwork skills (using a validated teamwork assessment scale). The first three blocks of interns receive the novel curriculum; the second three blocks of interns receive the traditional neonatal resuscitation program. Conduct similar post-curriculum assessments and compare results for the two groups.
Study advantages: Able to compare to a control group, yet still more feasible than randomized designs.
Study disadvantages: Need to control for baseline differences. Impact of different sites/times. Subject to immersion and ecological effects.
Study goal: Test a hypothesis (explanatory)
Study type: Equivalent parallel groups pre/post design (an experimental design characterized by random assignment of individuals to study groups)
Example research question: Does introduction of a novel, simulation-based neonatal resuscitation program involving residents and interprofessional staff improve resident resuscitation skills and ability to work in interprofessional teams in the delivery room better than a traditional neonatal resuscitation program?
Example methods: Randomly allocate interns to control and intervention groups. Conduct a pretest of all interns assessing resuscitation skills (such as number of attempts per successful intubation and time to successful intubation) and teamwork skills (using a validated teamwork assessment scale). Use a random number generator to allocate interns to two groups: one group receives the traditional neonatal resuscitation program at orientation, the other group receives the novel simulation-based curriculum. Conduct similar post-curriculum assessments and compare results for the two groups.
Study advantages: Reduces allocation bias (minimizes baseline differences).
Study disadvantages: Difficult to do. Resource intensive. Does not address nonuniform intervention. Intervention "bleed." Subject to immersion and ecological effects. Educational ethics concerns.
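To make the random-allocation step in the final row of Table 4 concrete, here is a minimal Python sketch; the roster, seed, and group labels are hypothetical, as the article does not prescribe any particular implementation.

```python
# Illustrative sketch only: randomized allocation of interns to two study
# arms using a seeded random number generator, as described in the final
# row of Table 4. The roster and group labels are hypothetical.
import random

interns = [f"intern_{i:02d}" for i in range(1, 25)]  # hypothetical roster of 24

random.seed(2017)        # fixed seed so the allocation can be audited
random.shuffle(interns)  # random ordering removes allocation bias

half = len(interns) // 2
traditional_group = interns[:half]  # traditional resuscitation program
simulation_group = interns[half:]   # novel simulation-based curriculum
```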
Table 5. Common Statistical Tests Used in Medical Education Research

Determine: Summary values for a random variable with a bell-shaped distribution
Example question: What is the average number of patients that third-year medical students take care of on the general inpatient wards?
Statistical method: Estimated mean and standard deviation from sample
Report results as: Mean (95% CI) and standard deviation estimates

Determine: Midpoint value of a rank-ordered list – minimizes influence of extreme values; useful for ordinal data
Example question: What is the median medical school educational debt of pediatric residents?
Statistical method: Median
Report results as: Median (often accompanied by the minimum and maximum values and/or the 25th and 75th percentiles)

Determine: Most common value – useful for nominal or ordinal data
Example question: What is the most common subspecialty pediatric residents enter after residency?
Statistical method: Mode
Report results as: Mode

Determine: Compare observed vs. expected values – categorical variables
Example question: Is the study population of pediatric residents similar to all pediatric residents in the United States in regard to gender?
Statistical method: Chi-square test
Report results as: p-values

Determine: Compare means of 2 independent groups – data normally distributed
Example question: Does an interactive web-based module on EKG interpretation improve residents' ability to accurately interpret EKGs compared to a lecture on EKG interpretation?
Statistical method: Unpaired t-test
Report results as: Differences in means (95% CI) and p-value, possibly adjusted for multiple comparisons

Determine: Compare means of 2 paired groups (e.g., pre- and post-test) – data normally distributed
Example question: Does an EKG module improve residents' ability to accurately interpret EKGs?
Statistical method: Paired t-test
Report results as: Differences in means (95% CI) and p-value, possibly adjusted for multiple comparisons

Determine: Compare means of 3 or more groups
Example question: Is there a difference in medical school debt for residents who choose to practice in rural, urban, or suburban areas?
Statistical method: Analysis of variance (ANOVA) or multiple regression
Report results as: (Adjusted) mean differences (95% CI) and p-value, possibly adjusted for multiple comparisons

Determine: Correlation – data normally distributed (parametric)
Example question: How well do resident self-assessments of intubation skills correlate with faculty assessments?
Statistical method: Pearson product-moment correlation coefficient
Report results as: Correlation coefficient and 95% CI

Determine: Correlation – data not normally distributed (nonparametric)
Example question: How well does resident performance on the In-Training Examination correlate with performance on the American Board of Pediatrics certifying exam?
Statistical method: Spearman's rank correlation coefficient
Report results as: Correlation coefficient with 95% CI

Determine: Association – interval and ordinal data
Example question: What factors are associated with USMLE Step 1 scores?
Statistical method: Linear regression
Report results as: Regression coefficient with 95% CI

Determine: Association – binary data
Example question: What factors are associated with passing the American Board of Pediatrics certifying exam on the first attempt?
Statistical method: Logistic regression
Report results as: Odds ratio with 95% CI

CI: Confidence interval
Interval – data where the difference between two values is meaningful (e.g., age)
Ordinal – data with a sense of order, where consecutive values may not be equally spaced (e.g., Likert scales: Strongly disagree – 1; Disagree – 2; Neither agree nor disagree – 3; Agree – 4; Strongly agree – 5)
Nominal (categorical) – data in which there is no inherent order (e.g., cardiology, pulmonary, general pediatrics)
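As an illustration of how several of the tests in Table 5 might be run in practice, here is a minimal Python sketch assuming the SciPy package; all data values are hypothetical and for demonstration only.

```python
# Illustrative sketch only (assumes the SciPy package): running three of
# the tests from Table 5 on made-up data. All values are hypothetical.
from scipy import stats

# Paired t-test: pre- vs. post-module EKG interpretation scores
pre_scores = [62, 70, 55, 68, 74, 60]
post_scores = [71, 78, 60, 75, 80, 66]
t_stat, p_paired = stats.ttest_rel(pre_scores, post_scores)

# Chi-square goodness of fit: observed vs. expected gender counts
observed = [48, 52]
expected = [50, 50]
chi2, p_chi2 = stats.chisquare(f_obs=observed, f_exp=expected)

# Spearman's rank correlation: in-training vs. certifying exam scores
ite_scores = [180, 195, 210, 225, 240, 255]
abp_scores = [410, 400, 455, 430, 500, 480]
rho, p_rho = stats.spearmanr(ite_scores, abp_scores)

print(f"paired t: p={p_paired:.3f}; chi-square: p={p_chi2:.3f}; rho={rho:.2f}")
```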
Appendix 1: Educational Research Scholarship Guide/Timeline
(For each step below, record a target deadline.)

1. Clear Goals (I-SMART) – Specific aim: What are you trying to do?
   a. Important, interesting
   b. Specific, simple to understand
   c. Measurable outcome
   d. Achievable
   e. Relevant and not rehashing
   f. Timely
   What conceptual framework(s) are you utilizing?

2. Adequate preparation – Are you ready to do this project?
   a. Literature review – sources, keywords
   b. Acquire necessary skills
   c. Acquire necessary resources (e.g., collaborators, statistical support)
   d. IRB considerations and submission
   e. Registration in a study registry (e.g., clinicaltrials.gov)
   f. Selection of relevant reporting guidelines (from the EQUATOR Network)

3. Appropriate methods – How are you going to do it?
   a. Study design, including sample size justification and selection of measurement approaches and observation schedule (a sample size sketch follows this appendix)
   b. Analysis/evaluation

4. Significant results – So what?

5. Effective presentation – How will you disseminate your work?
   a. Public dissemination – publication, workshop, presentation? Where?
   b. Peer review
   c. Platform on which others can build

6. Reflective critique – How will you improve upon your work? Plan-Do-Study-Act (PDSA)
   a. Critically evaluate your work
   b. Compare your findings with prior scholarship
   c. Discuss limitations of your work
   d. Discuss next steps
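To illustrate the sample size justification called for in item 3a, here is a minimal sketch assuming the statsmodels package; the effect size, alpha, and power targets are hypothetical planning conventions, not recommendations from this article.

```python
# Illustrative sketch only (assumes the statsmodels package): a sample
# size justification for a two-group comparison, per item 3a above.
# Effect size, alpha, and power are hypothetical planning values.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed moderate standardized effect (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.80,       # conventional 80% power target
)
print(f"Approximately {n_per_group:.0f} participants needed per group")
```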
Appendix 2: Checklist for Authors Prior to Educational Manuscript Submission

Title/Abstract
1. Title is clear and representative of content
2. Abstract concisely describes study and key findings
3. Conclusions in abstract are justified given information provided in abstract
4. All information provided in the abstract is presented in the text
5. All information in abstract, text, figures, and tables is consistent

Introduction
1. Builds a convincing case why this problem is important with literature review
2. Identifies gaps in literature and addresses how this study will fill the gaps
3. Conceptual framework is explicit and justified (and/or in Discussion)
4. Specific aim of the study (and hypothesis where applicable) is clearly stated

Methods
For ALL Studies:
1. Research design appropriate to address research question
2. Research design clearly stated (e.g., cross-sectional cohort study)
3. Methods clearly described in sufficient detail to permit study to be replicated
   3a. Study population (sampling, selection bias)
   3b. Study intervention (objectives, activities, time allocation, training)
   3c. Study instrument validity evidence (instrument development, content, preparation of observers/interviewers/raters, scoring method, psychometric properties)
   3d. Study outcomes clearly defined (and high on Kirkpatrick's pyramid – may be inversely related to level of innovation, with less innovative ideas requiring higher outcome levels)
4. Data analysis appropriate for research design and research question
5. Data analysis procedures clearly described in sufficient detail to be replicated
6. IRB approval/exemption and consent clearly stated

For Quantitative Studies:
1. Study is generalizable given the selection of participants, setting, and educational intervention/materials (external validity – less innovative studies require higher generalizability, with more sites, etc.)
2. Potential confounding variables addressed and adjusted for in analysis (internal validity)
3. Statistical tests appropriate. Effect size and functional significance discussed when appropriate. When making multiple comparisons, adjustment of the significance level for multiple tests/comparisons is considered (see the sketch following this checklist)
4. Power issues are considered in studies that make statistical inferences (particularly if results are not significant)

For Qualitative Studies:
1. Study offers concepts or theories that are transferable to other settings, and methods are described in sufficient detail (setting, sample)
2. Philosophical framework clearly stated (e.g., grounded theory)
3. Study design incorporates techniques to ensure trustworthiness (e.g., triangulation, prolonged observation)
4. Characteristics of the researchers that may influence the research are described and accounted for during data collection/analysis
5. Contributions of research team members to coding, identifying themes, and/or drawing inferences are described (dependability, confirmability)

For Mixed-Methods (Quantitative and Qualitative) Studies:
1. Use of mixed methods is justified (study must do justice to both methodologies)
2. Order of the quantitative and qualitative components is justified

Results
1. All results are presented and align with study question and methods. All results appear in the Results section (and not in other sections)
2. Sufficient data are presented to support inferences/themes
3. Tables, graphs, and figures are used judiciously to illustrate main points in text

Discussion
1. Key findings clearly stated. Conclusions follow from design, methods, and results
2. Findings placed in context of relevant literature, including conceptual framework. Alternative interpretations of findings are considered as needed
3. Study limitations and strengths discussed
4. Practical significance or implications for medical education are discussed. Guidance for future studies is offered

References
1. Literature review is comprehensive, relevant, and up-to-date
2. Ideas and materials of others are appropriately attributed (no plagiarism)

Final Journal Check
1. Study is relevant to mission of journal and journal audience
2. Author guidelines are followed (including word count)
3. Prior publication(s) by author(s) of substantial portions of the data are appropriately acknowledged
4. Conflicts of interest are disclosed
5. Text is well written and easy to follow
6. Manuscript is well organized

Note: This table was adapted from AAMC Review Criteria for Research Manuscripts, 2nd Edition. Eds: Durning SJ, Carline JD. 2015.
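To illustrate the multiple-comparisons adjustment mentioned in item 3 of the quantitative checklist, here is a minimal sketch assuming the statsmodels package; the raw p-values are hypothetical, and Holm's method is shown as one common choice among several.

```python
# Illustrative sketch only (assumes the statsmodels package): adjusting a
# set of p-values for multiple comparisons, per quantitative checklist
# item 3. The raw p-values are hypothetical.
from statsmodels.stats.multitest import multipletests

raw_p = [0.012, 0.049, 0.031, 0.200]  # e.g., four outcome comparisons
reject, adjusted_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

for raw, adj, sig in zip(raw_p, adjusted_p, reject):
    print(f"raw p={raw:.3f} -> adjusted p={adj:.3f}, significant: {sig}")
```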