Examining mixing methods in an evaluation of a smoking cessation program




Accepted Manuscript
Title: Examining Mixing Methods in an Evaluation of a Smoking Cessation Program
Authors: Anne Betzner, Frances P. Lawrenz, Mao Thao
PII: S0149-7189(15)00067-1
DOI: http://dx.doi.org/10.1016/j.evalprogplan.2015.06.004
Reference: EPP 1226
To appear in: Evaluation and Program Planning
Received date: 14-8-2014
Revised date: 6-2-2015
Accepted date: 13-6-2015

Please cite this article as: Betzner, A., Lawrenz, F. P., and Thao, M., Examining Mixing Methods in an Evaluation of a Smoking Cessation Program, Evaluation and Program Planning (2015), http://dx.doi.org/10.1016/j.evalprogplan.2015.06.004

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

RUNNING HEAD: Mixing Methods


TITLE: Examining Mixing Methods in an Evaluation of a Smoking Cessation Program

SUBMITTED BY:
Anne Betzner, Ph.D.*
Vice President
Professional Data Analysts, Inc.
219 Main Street SE, Suite 302
Minneapolis, MN 55418
[email protected]
Phone: 612-623-9110
Fax: 612-623-8807

Frances P. Lawrenz, Ph.D.
Professor and Associate Vice President for Research
University of Minnesota
[email protected]
612-625-2046

Mao Thao, BS, BA
Ph.D. Student
University of Minnesota
[email protected]

*Author for correspondence

Highlights

- Surveys, focus groups, and phenomenological interviews were used in an evaluation.
- Methods were analyzed individually and mixed at the point of interpretation, pragmatically and dialectically.
- Strongly interpretive methods produced different findings than the weakly interpretive ones.
- Mixed methods appear optimized when method paradigms differ substantially.
- The cost of mixed methods may be justified for this study based on knowledge gains from divergent findings.

Introduction

Mixed method research and evaluation is a tool commonly used by researchers and evaluators to investigate program or policy merit and worth (Creswell, Trout, & Barbuto, 2002; Teddlie & Tashakkori, 2003). The intentional use of mixed methods and research on mixing methods as a methodology has coalesced into a field of study in its own right. Use of mixed methods has been most prominent in applied fields such as evaluation (Greene & Caracelli, 1997; Greene, 2007), health sciences (O'Cathain, 2009), and education (Day, Sammons, & Gu, 2008).


Evaluators are attracted to mixed methods research for many reasons. In many situations, mixed methods are used to meet the needs of multiple stakeholders (Benkofske, 1996; Chelimsky, 2007; Patton, 1997; Smith, 1997). Evaluators also turn to mixed method methodology to address the practical challenges and resultant uncertainty of using any single method (Datta, 1997; O'Cathain, Murphy, & Nicholl, 2007), because both post-positivist and interpretive methods of gathering information have limitations. Furthermore, using a mixed methods approach with different types of qualitative and quantitative data provides a more complete understanding of evaluation questions and problems than a solely qualitative or quantitative approach (Creswell, 2014). Mixed methods can also be a tool to increase the credibility of evidence in an era of evidence-based practice (Greene, 2013; Hesse-Biber, 2013).

Greene (2007) described how mixed method studies may be generative, as paradox and contradiction are engaged and "fresh insights, new perspectives, and original understandings" (p. 103) emerge. Other mixed method authors share this belief in the promise of mixed methods. For example, Tashakkori and Teddlie (2003) used the term gestalt to indicate how inferences from mixed methods may be greater than their single method components. Barbour (1999) described mixed methods as a whole greater than the sum of its parts. Creswell (2014) maintains that researchers and evaluators are able to draw upon the strengths of both qualitative and quantitative methods in mixed methods research to gain more insight and understanding, as different types of data provide different types of information.

Greene and Caracelli (1997) provided the first comprehensive theory of mixed methods in evaluation via their dialectic approach. Recently there has been more work on the dialectic approach (Creswell, Trout, & Barbuto, 2002; Greene & Hall, 2010; Tashakkori & Teddlie, 2010).
There has also been emphasis on a pragmatic approach (Johnson & Onwuegbuzie, 2004; Morgan, 2007; Tashakkori & Teddlie, 2010). Datta (1997) and Maxcy (2003) articulated a pragmatic stance toward mixing methods that has its roots in the philosophic writings of John Dewey and William James (among others), but is distinct from them. Pragmatism in the Deweyan sense is seen not as a philosophical approach, but rather as a "set of philosophical tools" (p. 97) for researchers and evaluators to address problems (Biesta, 2010). Other researchers, in contrast, view pragmatism as a philosophical approach to choosing what works best for a given research project or evaluation (Creswell, 2014; Greene & Hall, 2010; Johnson & Onwuegbuzie, 2004; Morgan, 2007; Rescher, 2001). Despite the different views on pragmatism, it appears to be the dominant stance employed by mixed method researchers (Creswell, 2014; Greene & Hall, 2010; Tashakkori & Teddlie, 2010). Riggin (1997) found a pragmatic stance to be almost exclusively employed when she reviewed all examples of mixed method evaluations presented in a volume of New Directions for Evaluation dedicated to the subject. More recently, Johnson and Onwuegbuzie (2004) suggested that "the time has come" for mixed method research, and that investigators should do whatever is practical.

The purpose of this study is to present the results of a comprehensive evaluation of a smoking cessation program that used three distinct evaluation methods and to provide a comparison of these methods. The study probes the ideas that mixing methods yields findings over and above those found using single methods, that divergence of methods is a critical factor, that mixed method studies can better meet the demands of multiple stakeholders, and that mixed method studies are more expensive.

Background

Two stances toward mixing methods in an evaluation are pragmatic and dialectic. Pragmatism is a uniquely American philosophical tradition, most fully developed by Charles Sanders Peirce (see


Peirce, 1992; Peirce, 1998), William James (see James, 1975), and John Dewey (see Dewey, 1998a; Dewey, 1998b). In their scholarship, pragmatism is primarily concerned with meaning, or epistemology, as measured by its consequences. The modern pragmatist Rescher (2001) described that in pragmatism, what works in practice becomes the standard for the truth of assertions, the rightness of actions, and the value of appraisals. Creswell (2014) describes pragmatism as a stance that is not committed to any one set of philosophical ideas, but allows evaluators to choose the methods that best meet the needs and purpose of their evaluation. Greene and Hall (2010) see pragmatism as providing "actionable knowledge" and "practical solutions" (p. 138) for addressing problems within research and evaluation, with the rationale being that multiple perspectives are useful for inquiry.

Johnson and Onwuegbuzie (2007) support the adoption of a pragmatic approach in mixed methods research. Johnson and Onwuegbuzie (2004) outline 22 characteristics of pragmatism and view pragmatism as a way to connect conflicting paradigms, allowing researchers and evaluators a middle ground to consider what methods and philosophies are useful for their work. While proponents of pragmatism, they acknowledge some weaknesses, such as pragmatism promoting small incremental changes rather than larger societal changes, and the potential difficulty of dealing with "useful but non-true" or "non-useful but true" beliefs and propositions (Johnson & Onwuegbuzie, 2004, p. 19).

Morgan (2007) is also a proponent of the pragmatic stance, proposing a mixed methods pragmatic approach to social science research methodology as an alternative to a solely qualitative or quantitative approach. In doing so, his framework offers processes that balance the dichotomies present in the qualitative versus quantitative debate.
First, in terms of connecting theory with data, Morgan (2007) proposes abductive reasoning that moves back and forth between the inductive reasoning of the qualitative approach and the deductive reasoning of the quantitative approach. This process maximizes the strengths of qualitative and quantitative methods by allowing the results of one approach to inform the other. Second, Morgan (2007) proposes an intersubjectivity dimension focusing on communication and shared meaning to describe the relationship between the researcher and the research process, as opposed to that relationship being subjective in the qualitative approach and objective in the quantitative approach. He argues that researchers and evaluators need to have a mutual understanding with both their audience and colleagues. Lastly, Morgan (2007) proposes the idea of transferability in making inferences, which supersedes the dichotomy of context and generalizability in the qualitative and quantitative approaches respectively. The transferability dimension is borrowed from Lincoln and Guba (1985) (as cited in Morgan, 2007) and refers to whether knowledge gained from research and evaluation can be transferred to other contexts and settings.

Dialectic is a term derived from Greek meaning to converse or discuss. Hegel provides a comprehensive treatment of the dialectic, in which it is concerned with contradictions (Singer, 2001). A position is challenged by an argument, and the two points are united by a third that transcends and subsumes both. This transformation is termed Aufhebung in The Science of Logic and is translated as "sublation" or "overcoming." The transcendent concept then becomes subject to challenge, until the final transformation is perfected (Singer, 2001). This approach allows for considering conflicting findings side by side and creating a synthesis that encompasses but transcends them, seeking to generate new truths that transcend the old.
Hegel's approach also allows methods to be combined in a spiraling manner. The spiraling arises because the synthesis created can itself become a thesis, which may then be challenged by another antithesis, until the final synthesis is perfected. This may be especially important in mixed


methods because, as new syntheses are generated, they may conflict with one another and require resolution. The generative and spiraling nature of Hegel's dialectic makes it suitable for mixed method evaluations.

More recently, Greene and Hall (2010) have advocated the dialectic stance. They state, "A dialectic stance actively welcomes more than one paradigmatic tradition and mental model, along with more than one methodology, into the same inquiry space and engages them in respectful dialogue with the other throughout the inquiry" (p. 124). In a dialectical stance, multiple perspectives are valuable. The aim is not so much to seek convergence in mixing methods, but rather to juxtapose differences in order to gain greater insight and understanding. Thus, the rationale for taking a dialectic stance is recognizing multiple philosophical perspectives and engaging with the differences in those perspectives to reach a greater understanding of a problem or issue (Greene & Hall, 2010).

When mixing methods, it is important to assess the methodological quality of the integration of methods, as well as the inferences made. Heyvaert, Hannes, Maes, and Onghena (2013) argue that the methodological quality of mixed methods research needs to be assessed for three key reasons: (1) the analysis of mixed methods informs readers about the ways in which qualitative and quantitative data converge or diverge, (2) the qualitative and quantitative approaches cannot be assessed independently when one informs the other, and (3) the quality of the mixing matters when qualitative and quantitative methods come together to create a larger understanding of overarching research questions. In their study, they reviewed 13 unique critical appraisal frameworks published between 2004 and 2009 and found 13 categories of criteria for evaluating the quality of mixed methods studies.
Of the 13 categories of criteria, two were specific to assessing the quality of mixing: 9 of the 13 frameworks included criteria for assessing the mixing and integration of methods, and 4 of 13 included criteria for providing a rationale for mixing methods.

Equally important is the assessment of a researcher's decision-making process in taking a mixed methods approach. Datta (1997) provides three essential criteria for making mixed method decisions: (1) practicality, which implies one's experience and knowledge of what does and does not work; (2) contextual responsiveness to the demands, opportunities, and constraints of an evaluation situation; and (3) consequentiality, or making decisions based on practical consequences. These three criteria are used to evaluate the decision making in the present study.

Methods

This study was an analytical reflection on a mixed method evaluation conducted for a non-profit organization dedicated to enhancing life for all citizens by reducing tobacco use and exposure to secondhand smoke through research, action, and collaboration. The organization funds a variety of programs to assist people in stopping tobacco use. The purpose of the evaluation was to determine the extent to which local smoke-free regulations affected program participants in their efforts to stop smoking. The evaluation took place over a three-year period and consisted of three single method sub-studies, the results of which were then combined following different stances.

The first sub-study was a panel-design follow-up telephone survey with a comparison group. A total of 1,169 program participants, who lived in communities with and without smoke-free regulations, were surveyed at enrollment and at 6 and 18 months after enrollment. Demographic and participant program use data were provided by the service vendor. The survey assessed participants' smoking behaviors, their exposure to bans, and other ban-related behaviors and


attitudes. Frequencies, bivariate statistics, and logistic regression analyses were conducted using SPSS. This sub-study is considered post-positivistic because its goal was to understand the relationship between ban status and quit rates using the most objective, standardized, and controlled design possible.

The second sub-study was a series of 13 two-hour focus groups of program participants living in a large Midwestern metropolitan area. In a completely crossed design, participants were sampled on quit status and the ban status of their home address. The groups were conducted according to the pragmatically oriented methods of Richard Krueger (Krueger & Casey, 2009). A total of 70 people participated in the focus groups. Group size ranged from 3 to 10 people, and the median number of participants per group was five. Participants were asked about their quit attempts around the time the bans were implemented, their motivations for quitting, what helped and hurt their quits, their experience with the bans, and their opinions on how the bans affected them. Following the methods outlined by Saldaña (2012), the lead author developed descriptive first cycle codes such as "ban impact on quitting" and "travel to different ban communities". Second cycle codes were developed to identify emerging issues such as the smell of smoke and social norms. Data were transcribed and coded in NVivo.

The focus groups embodied a mix of characteristics from both interpretive and post-positivist traditions. For example, participants responded in their own words to open-ended questions and provided important context regarding their quit attempts. However, the fully crossed design reflected a more post-positivist approach, and the two-hour group discussion lacked the depth and richness necessary for an intensely interpretive approach. Therefore, this method was considered to be weakly interpretive in nature.
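The survey sub-study's bivariate comparison of quit rates by ban status can be illustrated with a minimal, stdlib-only sketch. The counts below are invented for illustration (the actual analysis was run in SPSS); with a single binary predictor, the odds ratio computed here equals the exponentiated coefficient a logistic regression would report.

```python
# Hypothetical 2x2 table: quit status at follow-up by ban exposure.
# Counts are invented; they do not come from the evaluation data.
table = {
    ("ban", "quit"): 120, ("ban", "smoking"): 480,
    ("no_ban", "quit"): 95, ("no_ban", "smoking"): 474,
}

def quit_rate(group):
    """Proportion of participants in a group who quit."""
    quit = table[(group, "quit")]
    return quit / (quit + table[(group, "smoking")])

# With one binary predictor, this odds ratio equals exp(beta) from a
# logistic regression of quit status on ban exposure.
odds_ban = table[("ban", "quit")] / table[("ban", "smoking")]
odds_no_ban = table[("no_ban", "quit")] / table[("no_ban", "smoking")]
odds_ratio = odds_ban / odds_no_ban

print(f"quit rate (ban): {quit_rate('ban'):.3f}")
print(f"quit rate (no ban): {quit_rate('no_ban'):.3f}")
print(f"odds ratio: {odds_ratio:.2f}")
```

In practice the published analysis would also adjust for demographic covariates, which requires a full logistic regression rather than this single-table shortcut.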
The third sub-study consisted of descriptive phenomenological interviews with twelve participants, each lasting one hour. Participants were asked about their experiences with the bans and quitting so as to best understand their lived experience unmediated by reflection, intellectualization, or the desire to please the researcher. The lead author of this manuscript conducted the interviews using several techniques specific to phenomenology, such as placing herself in a state of reduction and bracketing her preconceptions, considering the phenomenon of bans "precisely as it is given" (Giorgi, 1997, p. 237), and conducting the interviews in a posture of wonder and openness while seeking concrete details. Interviews were transcribed and coded using NVivo software with a free imaginative variation analysis technique in order to uncover the essential nature of the phenomenon of interest and its invariant meanings (Giorgi, 1997). Phenomenology is considered to be its own interpretive paradigm. Because the conduct of the interviews sought to adhere as closely as possible to the phenomenological approach, this method is considered to be strongly interpretive in nature.

Findings from each of the three sub-studies were organized by topic area and compared to one another in order to determine any differences in the findings and any new insights that could be gained through the mixing. Findings were compared using a fundamental qualitative descriptive approach with the goal of conducting a low-inference analysis (Saldaña, 2012). The results of the survey were used in both mixes. The survey results were compared to the results from the focus groups to operationalize a pragmatic stance.
This pair of studies was considered pragmatic because the arrangement has several practical advantages: focus groups collect both quantitative and qualitative data, they provide the opportunity for a larger sample of participants, and the methodology is widely known among researchers with a variety of methodological backgrounds. To create a dialectic mix, the survey and phenomenological interviews were combined, because Greene and Caracelli (1997) stipulate that paradigms should


be prioritized in the dialectic approach. One way this may be accomplished is by selecting two methods that differ greatly from each other paradigmatically, which was best achieved by mixing the post-positivistic telephone survey with the strongly interpretive phenomenological interviews. As with the analysis of the single method studies, the findings from the pragmatic and dialectic mixes were compared using a fundamental qualitative descriptive approach with the goal of conducting a low-inference analysis (Saldaña, 2012). Specifically, the findings in each method mix were examined to determine convergence, divergence, and uniqueness.

Additionally, the primary stakeholders of the evaluation were surveyed regarding their experience with and perceptions of the credibility, meaningfulness, and trustworthiness of each of the individual methods. They were also asked to rate each method mix on eight items assessing perceptions of validity and trustworthiness (Lincoln & Guba, 1985). Finally, the survey asked stakeholders to identify, in open-ended comments, any ways in which each mix did or might influence the organization's work. The cost to conduct each single method study, in billable research dollars and participant response hours, was computed and analyzed using QuickBooks accounting software. Each of these methods is described in greater detail below.

Within each single method study, topic areas were identified to reflect the major content domains for which data were collected, e.g., the impact of ordinances on relapse, travel to communities with different ordinance conditions, social norms, etc. Next, within each content area, findings were extracted. Findings were defined as information units bounded by substantive content about a specified topic area. More than one finding may populate a topic area, and a finding may include more than one related idea. Findings may also include conclusions from the single study findings.
To compare the phenomenological interviews and the survey, the single method tables of findings were compared. Findings were categorized as convergent (C), divergent (D), divergent by degree (Dd), or unique (U). A finding that is divergent by degree contains both convergent and divergent elements across the two studies, or is similar but diverges in strength or magnitude. An example of a finding unique to the interviews was the experience of some participants who were relieved when bans were implemented because the ban served as an external support to smoke less; this sentiment was not found in the focus group or survey studies. A two-column table was constructed that described each finding and the relationship between methods for that finding. Hegel's conception of the dialectic was used as the stance for this mixing and was operationalized as depicted in Table 1.

Similarly, in mixing the survey and focus group data, findings from the telephone surveys and focus groups were categorized as convergent, divergent, divergent by degree, or unique, as described above. For example, surveys and focus groups diverged by degree regarding travel to a community with a different ban status than one's home. Survey findings indicated precisely that smokers were more likely than non-smokers to travel to communities with a different ban status; focus group findings were more muted on this point, finding that while smokers traveled to smoke, non-smokers also traveled to avoid smoke. The framework developed by Datta (1997) using a pragmatic stance was used to facilitate this mixing, as depicted in Table 2.

The credibility and utility of single method and mixed method findings were investigated by surveying and interviewing the Senior Research Program Manager responsible for managing the evaluation grant and the Senior Marketing Manager, who requested analyses to inform marketing efforts and attended a presentation of findings. Data on stakeholder


views of the credibility and utility of findings were gathered via a five-page paper-and-pencil survey. The survey covered three broad content areas: respondents' experience with the three single methods used in the evaluation (telephone surveys, focus groups, and phenomenological interviews); respondents' opinions on the credibility and utility of the single method findings; and respondents' reports on the credibility and utility of the two method mixes (survey and phenomenological interviews, and survey and focus groups). Finally, the cost to conduct each single study, in billable research dollars and participant response hours, was computed and analyzed using QuickBooks accounting software. Combining the costs for the mixed studies contrasts the expense of mixing methods with that of conducting single studies.
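As a sketch of the comparison procedure described in this section, the finding tables can be represented with a small data structure. The topic, findings, and helper names below are invented placeholders; only the category codes (C, D, Dd, U) follow the scheme used in this evaluation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    topic: str     # content domain, e.g. "travel" or "relapse"
    method: str    # "survey", "focus_group", or "interview"
    summary: str   # the bounded information unit extracted from a sub-study

def table_row(topic, findings_a, findings_b, category):
    """One row of the two-column comparison table for a given topic.

    category: "C" (convergent), "D" (divergent),
              "Dd" (divergent by degree), or "U" (unique).
    """
    assert category in {"C", "D", "Dd", "U"}
    return {
        "topic": topic,
        "method_a_findings": [f.summary for f in findings_a],
        "method_b_findings": [f.summary for f in findings_b],
        "category": category,
    }

# Hypothetical example: the travel topic, which diverged by degree.
survey_findings = [Finding("travel", "survey",
    "Smokers more likely than non-smokers to travel to differently regulated communities")]
focus_findings = [Finding("travel", "focus_group",
    "Smokers traveled to smoke, but non-smokers also traveled to avoid smoke")]

row = table_row("travel", survey_findings, focus_findings, "Dd")
print(row["topic"], row["category"])
```

The categorization itself remained a qualitative judgment in the evaluation; a structure like this only records the judgments so that convergent, divergent, and unique findings can be tallied across mixes.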


Results

The majority of findings for the pragmatic mix of the survey and the focus group data were either convergent across surveys and focus groups or unique to one of the methods. The relatively few divergent findings made drawing inferences from the mix very straightforward, and the application of Datta's (1997) criteria did not add significant value. However, findings on one topic, travel, were sufficiently complex that the pragmatic criteria needed to be employed. The telephone survey findings about travel suggested that those in ordinance communities were more likely to travel to a community with a different ordinance status. However, the focus groups suggested that ordinances were about as likely to influence travel as not, but that any effect depended on smoking status. To resolve the issue of travel, Datta's (1997) three criteria were employed as illustrated in Table 2. Examining the findings on travel according to these criteria brought forward important considerations that led to unique conclusions about travel to ordinance and non-ordinance communities.

The mixing of the survey and the phenomenological interviews was dominated by the unique findings generated by the phenomenological interviews, which described the intersection of participants' experience of the ordinance with addiction, shame, the smell of cigarette smoke, and quitting. These elements led to a rich convergence with the survey findings, where the power of addiction as revealed through the interviews was illustrative of low levels of self-reported support for and impact of the ordinance. This convergence was richer and more resonant because the interviews illuminated the underlying mechanisms that influenced behaviors, as opposed to providing examples that related directly to survey results in a one-to-one manner. The relationship of addiction to support for the ordinance and self-reported impact was more complex and web-like.
However, it is important to note that the phenomenological interviews produced simple convergence as well, such as when the survey finding of a relationship between relapse and exposure to ordinances was supported by the interview finding that ordinances reduced triggers to smoking, which supported quit attempts. Similarly, the two methods converged in the area of travel.

One substantial divergence in findings was identified when the telephone survey and interview findings were compared: the impact of exposure to ordinances on the outcomes of 7-day abstinence and new quits. While the telephone survey found no impact, the interviews suggested the opposite and even provided detailed information about the underlying mechanisms by which the ordinances affected outcomes. Table 1 below outlines how the Hegelian dialectic approach was used to resolve this divergence.

Comparison of the two different mixes


One major convergence and one major divergence in findings emerged from a review of the major issues in the two different comparisons. However, on six issues, more subtle differences emerged that could not be clearly classified as either convergence or divergence. Finally, each mixed method combination brought to the table one important and unique issue that was not addressed in the other combination.

A strong and clear convergence between the two mixes was found on the issue of relapse. In the survey-focus group mix, the explanation centered on the relationship between smoking, drinking, bars, and relapse, while in the survey-phenomenological interview mix, participants' experiences in bars with temptation, social smoking, and feeling ostracized explained relapse. Despite these differences, in both studies the conclusion was the same: ordinances reduced relapse.

Both mixes highlighted the impact of ordinances on 7-day abstinence and new quits, but came to different conclusions. The pragmatic study resulted in a greater convergence of evidence that ordinances had no impact on the outcomes of 7-day abstinence and new quits. The telephone survey found no relationship, while the focus groups found a weak relationship, as well as evidence that the impact may be under-reported. In contrast, the survey-phenomenological interview mix generated divergence on the impact of ordinances on these outcomes. The telephone survey found no relationship, but the interviews revealed that ordinances did help people quit. The interviews also outlined in depth several mechanisms by which the impact occurred, such as cognitive dissonance and the guilt and shame of smoking. It was concluded that a relationship between exposure to ordinances and quitting does exist, but that the relationship is indirect and heavily mediated by the experiences outlined in the interviews.
For six topics, the differences between the mixes were nuanced, and categorizing them as convergent or divergent was not applicable or helpful. For example, several topics were primarily similar to one another but diverged only in degree. This reflected the differences between the focus group and phenomenological interview methods. Both solicited participant experiences, but the interviews sought more in-depth information that most authentically reflected participants' lived experience. The one-on-one interview format and phenomenological framework facilitated this discovery.

Another way the mixes subtly differed from each other was in how they framed issues. Both mixes described the role of guilt and shame and of the smell of cigarette smoke. In the case of guilt and shame, for example, the focus group study showed that the ordinances helped because going outside to smoke sucked the joy out of smoking, while the phenomenological study described how shame and guilt were an integral component of addiction, and thus both a precursor and an antecedent of the ordinances.

The focus group study explored one issue unique to itself: the role of social norms. It discussed how smokers felt social pressure not to smoke and how this pressure motivated some to stop smoking. The conclusion was that social norms may have the most potential to create lasting change in current and potential smokers, and in those who want to quit. The fact that the focus groups provided the most detailed information about social norms is interesting, because each focus group was its own laboratory of social interactions. The phenomenological study was unique in reporting on smokers' experiences of smoking and quitting. It discussed in great detail the intensity of addiction and smokers' coping mechanisms for it, and the way that ordinances forced smokers to face their addiction more clearly. Mixing the methods contributed new information beyond the findings of the single method studies.
The combination of two methods in mixed method studies generated unique conclusions and interpretations based on the joint examination of findings on a similar topic. Table 3 below


highlights the new conclusions and inferences that the two mixes contributed as compared to those of the single methods. Overall, all three sub-studies resulted in unique findings, and in both types of mixing, convergence strengthened conclusions. The mixing of the survey and the focus groups appears to have resulted in more convergence than the mixing of the survey and the phenomenological interviews.

Stakeholder opinion

The two main stakeholders were asked to read summaries of findings for the survey and focus group methods (method mix 1) and the survey and interview methods (method mix 2). Respondents were asked to rate the mixes on validity and trustworthiness on a 10-point scale, and to comment on the utility of each method mix. Overall, the respondents thought all three studies, as well as the mixes, were useful, and they valued the extent of the information provided by the diverse approaches.

Costs

Billable research dollars were computed based on a rate of $100 per hour for lead staff and $50 per hour for support staff. Table 4 illustrates that the total cost for all three single methods was $178,756. The average cost per respondent across single methods was $244.54. Surveys yielded the lowest cost per respondent ($154.51), followed by focus groups ($678.21). Phenomenological interviews yielded the greatest cost per respondent ($2,097.81). Table 5 itemizes each single method budget by primary task.

Table 6 presents the cost of the three individual studies in terms of participant response hours. The comparison of focus groups to interviews is somewhat misleading, however. Each interview participant conversed one-on-one for 70 minutes, while each participant in the focus groups did not converse directly for the full 120 minutes, because the groups consisted of 5.4 people on average. Assuming that focus group participants spoke one at a time and participated equally in a 120-minute focus group, each participant would have spoken for about 22.2 minutes.
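The speaking-time estimate above can be reproduced in a few lines. The figures (70 participants, 120-minute groups, an average of 5.4 people per group) are taken from the text; the calculation assumes participants spoke one at a time and equally.

```python
# Reproduces the focus-group response-time arithmetic from the cost comparison.
focus_group_minutes = 120   # length of each focus group
avg_group_size = 5.4        # mean participants per group (70 people, 13 groups)
participants = 70           # total focus group participants

# Assuming one-at-a-time, equal participation:
speaking_minutes_each = focus_group_minutes / avg_group_size
total_direct_hours = participants * speaking_minutes_each / 60

print(f"{speaking_minutes_each:.1f} minutes of direct speaking per participant")
print(f"{total_direct_hours:.1f} direct response hours across all participants")
```

Note that 22.2 minutes per participant, summed over all 70 participants, yields about 25.9 hours of direct response time in total, not per participant.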
This translates to 25.9 hours of direct response time across all focus group participants.

Conclusions and lessons learned

This study affirms that the choice of methods matters. In the mixed method literature, interpretive methods are sometimes lumped together as "qualitative" methods (O'Cathain, Murphy, & Nicholl, 2007; Johnson & Onwuegbuzie, 2004), with insufficient attention paid to the paradigms that undergird them. However, an analysis of the single methods revealed that the focus group method produced 17 findings that were substantively unique to the focus groups and not found in any other method (including the phenomenological interviews). Likewise, the phenomenological interview method produced 15 unique findings. The strongly interpretive phenomenological interviews produced different findings than the weakly interpretive focus groups. Interpretive methods not only provide different kinds of information, such as group consensus versus rich experience; they also produced different findings and types of findings. Focus groups provided greater insight into social norms and processes, perhaps because the focus group is a social experience in and of itself (Bloor et al., 2001; Kitzinger, 1994; Krueger & Casey, 2009), while phenomenological interviews illuminated smell and respondents' hidden hatred of smoking. Additionally, because the strongly interpretive phenomenological interviews illuminated direct lived experience, which tends to be less mediated by respondent preconceptions and rationalizations, the findings from the interviews appear more powerful and trustworthy than those from the focus groups. For example, when stakeholders rated the trustworthiness/validity of the method mix with phenomenological interviews, they found it to better reflect participant
perspectives and the underlying truth as compared to the method mix with focus groups. More strongly interpretive methods may therefore be more powerful and generative when combined with postpositivistic methods in a mixed method design than weakly interpretive methods are. A major difficulty in conducting mixed method studies is the lack of guidance on how mixing approaches should be operationalized in practice. This study represents an explicit operationalization of mixing at the point of interpretation, based on different paradigmatic stances. Further study of the Hegelian-inspired dialectic approach is especially warranted because this approach appears to be the most generative. One concern is that Hegelian synthesis requires that two conflicting findings be united by an overarching truth. It is critical that this overarching truth not be invoked reflexively and that the possibility of legitimately conflicting findings due to external factors, such as different samples, not be ignored. All synthesis judgments should broadly consider reasons for conflicting findings and incorporate them into synthesis statements. This study also lends some support to the usefulness of mixed methods for multiple stakeholders. Stakeholders viewed the telephone surveys and the phenomenological interviews as most credible, although these judgments likely reflect the single methods themselves rather than any belief in the actual mixing; stakeholders hold inherent preconceptions of methods that support their decision making (Greene, 2007; Patton, 1997). Moreover, the integration of two or more methods within a single mixed method study remains much less common (O'Cathain, Murphy, & Nicholl, 2007), and scholars commonly agree that the validity of mixed method approaches requires further study (Greene, 2007; Dellinger & Leech, 2007; Creswell & Plano Clark, 2010; Teddlie & Tashakkori, 2008). Several limitations circumscribe the conclusions that may be drawn from this study.
First, this study represents one simple investigation using one real-world evaluation. Further, it is situated in a relatively young field with little empirical literature to inform it. Second, this study focused on mixing methods at the point of interpretation, although mixing did occur at both sampling and analysis; advances are being made in mixing at the point of analysis (Day, Sammons, & Gu, 2008). Finally, three practical considerations require attention. Mixing methods that are philosophically distant from one another appears to be most productive; however, the cognitive demands of using very diverse approaches are considerable. Much more research is necessary to refine how to optimize the use of diverse mixed methods, and training in how to do so should be included in evaluation education programs. It is also important to gather more information on the costs and benefits of using mixed methods. This study presents some indication that the costs are justifiable, but under which conditions and to what extent remains to be investigated. More study is necessary to understand the evaluation questions and contexts in which the additional costs of mixed methods are outweighed by gains in knowledge and guidance for decision making. In sum, using mixed methods is recommended when convergence of findings is desired for triangulation, when diverse perspectives are desired to expand one's understanding of a phenomenon, when a study seeks to initiate fresh insight through an examination of divergent findings, and/or when multiple stakeholders have differing information needs. The effectiveness of mixing methods appears to be optimized by using single studies that differ substantially from one another on the paradigm continuum.

Acknowledgements

The authors wish to acknowledge ClearWay Minnesota for partial funding of this research.

References

Barbour, R. S. (1999). The case for combining qualitative and quantitative approaches in health services research. Journal of Health Services Research and Policy, 4(1), 39-43.

Biesta, G. (2010). Pragmatism and the philosophical and theoretical issues for mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), SAGE handbook of mixed methods in social and behavioral research (2nd ed.) (pp. 95-118). Thousand Oaks, CA: Sage Publications, Inc.

Benkofske, M. T. (1996). An examination of how evaluators and evaluation clients choose data collection methods: The factors and decision-making process. ETD collection for University of Nebraska - Lincoln. Paper AAI9623619.

Bloor, M., Frankland, J., Thomas, M., & Robson, K. (2001). Focus groups in social research: Introducing qualitative methods. Thousand Oaks, CA: Sage Publications, Inc.

Chelimsky, E. (2007). Factors influencing the choice of methods in federal evaluation practice. New Directions for Evaluation, 2007(13), 13-33.

Creswell, J. W. (2013). Qualitative inquiry and research design: Choosing among five approaches (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.

Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks, CA: Sage Publications.

Creswell, J. W., & Plano Clark, V. L. (2010). Designing and conducting mixed methods research (2nd ed.). Thousand Oaks, CA: Sage Publications.

Creswell, J. W., Trout, S., & Barbuto, J. E. (2002). A decade of mixed methods writings: A retrospective. Research Methods Forum, pp. 1-30. Academy of Management. http://aom.pace.edu/rmd/2002forum.retrospect.html.

Datta, L. (1997). A pragmatic basis for mixed method designs. New Directions for Evaluation, 74, 33-46.

Day, C., Sammons, P., & Gu, Q. (2008). Combining qualitative and quantitative methodologies in research on teachers' lives, work, and effectiveness: From integration to synergy. Educational Researcher, 37(6), 330-342.

Dellinger, A. B., & Leech, N. L. (2007). Toward a unified validation framework in mixed methods research. Journal of Mixed Methods Research, 1(4), 309-332.

Dewey, J. (1998a). The essential Dewey: Pragmatism, education, democracy (Volume 1) (L. A. Hickman & T. M. Alexander, Eds.). Bloomington, IN: Indiana University Press.

Dewey, J. (1998b). The essential Dewey: Ethics, logic, psychology (Volume 2) (L. A. Hickman & T. M. Alexander, Eds.). Bloomington, IN: Indiana University Press.

Giorgi, A. (1997). The theory, practice and evaluation of the phenomenological method as a qualitative research procedure. Journal of Phenomenological Psychology, 28(2), 235-260.

Greene, J. C. (2007). Mixed methods in social inquiry. San Francisco: Jossey-Bass.

Greene, J. C., & Caracelli, V. (1997). Defining and describing the paradigm issue in mixed method evaluation. New Directions for Evaluation, 74, 5-17.

Greene, J. C., & Hall, J. N. (2010). Dialectics and pragmatism: Being of consequence. In A. Tashakkori & C. Teddlie (Eds.), SAGE handbook of mixed methods in social and behavioral research (2nd ed.) (pp. 119-144). Thousand Oaks, CA: Sage Publications, Inc.

Greene, J. C. (2013). Reflections and ruminations. In D. M. Mertens & S. Hesse-Biber (Eds.), Mixed methods and credibility of evidence in evaluation. New Directions for Evaluation, 138, 109-119.

Hesse-Biber, S. (2013). Thinking outside the randomized controlled trials experimental box: Strategies for enhancing credibility and social justice. In D. M. Mertens & S. Hesse-Biber (Eds.), Mixed methods and credibility of evidence in evaluation. New Directions for Evaluation, 138, 49-60.

Heyvaert, M., Hannes, K., Maes, B., & Onghena, P. (2013). Critical appraisal of mixed methods studies. Journal of Mixed Methods Research, 7(4), 302-327.

James, W. (1975). The meaning of truth. Cambridge, MA: Harvard University Press.

Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14-26.

Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112-133.

Kitzinger, J. (1994). The methodology of focus groups: The importance of interaction between research participants. Sociology of Health & Illness, 16(1), 103-121.

Krueger, R. A., & Casey, M. A. (2009). Focus groups: A practical guide for applied research (4th ed.). Thousand Oaks, CA: Sage Publications, Inc.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications.

Maxcy, S. J. (2003). Pragmatic threads in mixed method research in the social sciences: The search for multiple modes of inquiry and the end of the philosophy of formalism. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in the social and behavioral sciences. Thousand Oaks, CA: Sage Publications.

Morgan, D. L. (2007). Paradigms lost and pragmatism regained: Methodological implications of combining qualitative and quantitative methods. Journal of Mixed Methods Research, 1(1), 48-76.

O'Cathain, A. (2009). Editorial: Mixed methods research in the health sciences: A quiet revolution. Journal of Mixed Methods Research, 3(1), 3-6.

O'Cathain, A., Murphy, E., & Nicholl, J. (2007). Integration and publications as indicators of "yield" from mixed methods studies. Journal of Mixed Methods Research, 1(2), 147-163.

Patton, M. Q. (1997). Utilization-focused evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.

Peirce, C. S. (1992). The essential Peirce: Selected philosophical writings (Volume 1: 1867-1893) (Peirce Edition Project, Ed.). Bloomington, IN: Indiana University Press.

Peirce, C. S. (1998). The essential Peirce: Selected philosophical writings (Volume 2: 1893-1913) (Peirce Edition Project, Ed.). Bloomington, IN: Indiana University Press.

Rescher, N. (2001). Cognitive pragmatism: The theory of knowledge in pragmatic perspective. Pittsburgh, PA: University of Pittsburgh Press.

Riggin, L. J. C. (1997). Advances in mixed-method evaluation: A synthesis and comment. New Directions for Evaluation, 74, 87-95.

Saldana, J. M. (2012). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage Publications.

Singer, P. (2001). Hegel: A very short introduction. Oxford, England: Oxford University Press.

Smith, M. L. (1997). Mixing and matching: Methods and models. New Directions for Evaluation, 74, 73-86.

Teddlie, C., & Tashakkori, A. (2003). Major issues and controversies in the use of mixed methods in the social and behavioral sciences. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 3-50). Thousand Oaks, CA: Sage Publications.

Teddlie, C. B., & Tashakkori, A. (2008). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Thousand Oaks, CA: Sage Publications.

Anne Betzner, Ph.D., is Vice President at Professional Data Analysts, Inc., an evaluation firm located in Minneapolis, MN. She has been evaluating statewide public health interventions nationally for 20 years. She obtained her doctorate in Quantitative Methods in Education with an emphasis in evaluation within the Department of Educational Psychology at the University of Minnesota, Twin Cities.

Frances Lawrenz, Ph.D., is a professor of evaluation in the Quantitative Methods in Education Program in the Educational Psychology Department at the University of Minnesota, Twin Cities. Her specialty is science, technology, engineering, and mathematics (STEM) program evaluation. She is also the Associate Vice President for Research at the University.

Mao Thao is a student in the Quantitative Methods in Education Ph.D. program with an emphasis in evaluation within the Department of Educational Psychology at the University of Minnesota, Twin Cities. Her interests include STEM education evaluation, mixed methods, and instrument design. Mao received a Bachelor of Science in Sociology and a Bachelor of Arts in Communication Studies from the University of Minnesota, Twin Cities.

Table 1. Dialectic Mixed Method Decisions

Topic: Impact of exposure to ordinances on 7-day abstinence and new quits.

Survey conflicting finding: There was no significant association between exposure to the ban and 7-day abstinence and new quits.

Interviews conflicting finding: Some interview subjects reported that the ban had no impact, although in most cases this was because participants had little or no exposure to the ban. This suggests the impact of the ban was associated with exposure. The interviews provided detailed information about the mechanisms by which the ordinances impact outcomes.

Synthesis: Being exposed to ordinances does not directly increase quit rates. However, being exposed to bans can trigger a complex and influential set of experiences that motivate quitting. The impact of bans on quitting is indirect, and difficult to detect without moderating variables that capture motivating experiences.

Rationale: The lived experiences of program participants' exposure to bans provided convincing evidence of a possible impact that was not accounted for in the statistical model. However, the model does suggest that a clear and direct impact of the bans on quitting across all populations is unlikely.

Evidence needed to support synthesis: Information about the frequency with which the ban triggers experiences that motivate quitting and about the populations who are most sensitive to being influenced by the ban. Greater depth in understanding experiences that drive quitting.

Table 2. Pragmatic Mixed Method Decisions

Topic: Travel to ordinance and non-ordinance communities.

Telephone survey findings: Those in ordinance communities were more likely to travel to a community with a different ordinance status. Those who traveled to communities in order to go to smoking establishments were more likely to be smokers.

Focus group findings: No strong relationship between ordinance status and travel was found. Instead, ordinances were about as likely to influence travel as not, but any effect depended on smoking status. Smokers were more likely to travel to non-ordinance communities, and non-smokers were more likely to travel to ordinance communities.

Knowledge-based considerations: The association between ordinance status and travel in the telephone survey may be confounded by geographic location. Hennepin and Ramsey counties were the primary ordinance communities and are adjacent to metropolitan non-ordinance communities. Travel between these communities is easy and frequent, therefore influencing reported frequency of travel. The telephone survey and focus group findings converged on the conclusion that the magnitude of travel is small.

Contextually responsive considerations: The telephone surveys and interviews were conducted in the first month after the ordinances were implemented, while residents were still adjusting to the new regulations. Local as well as outside tobacco-supported lobbying groups were active in fighting the regulations. Opinions about the ordinances, especially negative ones, ran hot and high. This milieu likely influenced travel and how it was reported in the focus groups.

Experience-based considerations: Some focus group participants were very transparently politically motivated and appeared to report travel to other communities in a reactionary and ideological manner rather than a personal one, suggesting that travel was over-reported.

Conclusions: Travel to communities with different smoking regulations appears to be associated with ordinances. However, the impact of the association does not appear to be meaningful because travel is infrequent, may have been over-reported, and is possibly confounded with geographic location.

Table 3. Key Conclusions by Study

Impact of ordinances on relapse:
- Telephone survey (single method): Exposure to the ban was marginally associated with relapse.
- Focus group (single method): Bans created conditions w... maintaining... attempt was...
- Pragmatic mix: Telephone surveys and focus groups converged on the conclusion that ordinances impact relapse.
- Dialectic mix: Telephone surveys and interviews converged that ordinances impact relapse.

Impact of ordinance on 7-day abstinence:
- Telephone survey: Exposure to the ban was not significantly associated with 7-day abstinence or new quits.
- Focus group: For some, the ban was not helpful and had no impact. However, reports of no impact appear to be influenced by several factors. Others experienced an impact.
- Pragmatic mix: The impact of bans on 7-day abstinence and new quits may be underreported. However, even in this case, the relationship is weak.
- Dialectic mix: Being exposed to bans triggers a complex set of experiences that mediate an indirect relationship between bans and 7-day abstinence and new quits.

Travel:
- Telephone survey: Travel because of bans is infrequent. Most respondents do not travel regularly, but being in a ban community is positively associated with regular travel and traveling to smoke.
- Focus group: Travel because of bans is infrequent. Bans appear equally likely to impact travel as not, but any effect is determined by smoking status.
- Pragmatic mix: Travel because of bans is infrequent. Travel appears to be associated with bans, but the impact is not meaningful.
- Dialectic mix: Travel because of bans is infrequent. Travel appears to be associated with bans, but not meaningfully. Surveys and interviews were conducted during the first, most controversial months of the ordinance, which may have led to more reactionary and ideological responses.

Table 4. Total Billable Dollars Expended by Method

Method       | Billable $  | %     | # Respondents | Billable $ per Respondent
Focus groups | $47,474.88  | 26.6  | 70            | $678.21
Interviews   | $31,467.28  | 17.6  | 15            | $2,097.81
Surveys      | $99,814.15  | 55.8  | 646           | $154.51
Total        | $178,756.31 | 100.0 | 731           | $244.54
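Table 4's per-respondent figures follow directly from the dollar totals and respondent counts. The sketch below recomputes them using the table's own numbers:

```python
# Recompute Table 4's billable-dollars-per-respondent figures.
# Dollar totals and respondent counts are taken from the table above.
costs = {
    "Focus groups": (47474.88, 70),
    "Interviews": (31467.28, 15),
    "Surveys": (99814.15, 646),
}

for method, (dollars, n) in costs.items():
    print(f"{method}: ${dollars / n:,.2f} per respondent")

total_dollars = sum(d for d, _ in costs.values())  # $178,756.31
total_n = sum(n for _, n in costs.values())        # 731 respondents
print(f"Average: ${total_dollars / total_n:,.2f} per respondent")
```

Note that $31,467.28 / 15 rounds to $2,097.82, a cent above the table's reported $2,097.81, which appears to have been truncated rather than rounded.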

Table 5. Dollars Worked and Percent of Budget for Study Tasks by Method

Task           | Focus Groups        | Interviews          | Surveys             | Subtotal
Planning       | $6,825.00 (14.4%)   | $2,450.00 (7.8%)    | $14,912.50 (14.9%)  | $24,187.50 (13.5%)
Implementation | $23,450.00 (49.4%)  | $8,425.00 (26.8%)   | $23,013.00 (23.1%)  | $54,888.00 (30.7%)
Database       | $700.00 (1.5%)      | $700.00 (2.2%)      | $22,587.50 (22.6%)  | $23,987.50 (13.4%)
Analyses       | $1,800.00 (3.8%)    | $8,750.00 (27.8%)   | $19,350.00 (19.4%)  | $29,900.00 (16.7%)
Reporting      | $10,037.50 (21.1%)  | $9,175.00 (29.2%)   | $8,612.50 (8.6%)    | $27,825.00 (15.6%)
Expenses       | $4,662.38 (9.8%)    | $1,967.28 (6.3%)    | $11,338.63 (11.4%)  | $17,968.31 (10.0%)
Total          | $47,474.88 (100.0%) | $31,467.28 (100.0%) | $99,814.15 (100.0%) | $178,756.31 (100.0%)

Table 6. Cost in Participant Hours for Recruitment and Completion

Method                              | N of Participants | Minutes per Participant | Total Minutes | Total Hours
Surveys                             | 646               | 6.5                     | 4,199         | 70.0
Focus groups (session time)         | 70                | 120.0                   | 8,400         | 140.0
Focus groups (direct response time) | 70                | 22.2                    | 1,554         | 25.9
Interviews                          | 15                | 70.0                    | 1,050         | 17.5
