Social Science Research 39 (2010) 341–355
Mobilizing on the margin: How does interpersonal recruitment affect citizen participation in politics?
Chaeyoon Lim
Department of Sociology, University of Wisconsin-Madison, 2446 William H. Sewell Social Sciences Building, 1180 Observatory Drive, Madison, WI 53706, USA
Article info
Article history: Available online 21 May 2009
Keywords: Political participation; Mobilization; Social networks; Selection bias; Counterfactual; Effect heterogeneity
Abstract
This study examines how interpersonal recruitment affects participation in demanding forms of political activity. Previous studies have argued that interpersonal recruitment causally affects participation. However, recruitment attempts are selectively targeted toward people with high participation potential. This implies that interpersonal recruitment may work simply because recruiters ask the people who would participate anyway. Using propensity score matching, I show that interpersonal recruitment has larger effects on non-electoral forms of participation than electoral ones such as campaign volunteer activity. I find little evidence that personal contact increases campaign donation. I also demonstrate that the effect of recruitment is not constant across individuals but varies by the propensity of being a recruitment target. Ironically, people who have less chance of receiving requests are more likely to be affected. I discuss the implications of these findings for declining and increasingly unequal political participation.
© 2009 Elsevier Inc. All rights reserved.
1. Introduction

Many political theorists agree that the active engagement of citizens in political life, especially in activities other than going to the polls, is an essential condition for the liveliness and integrity of a democratic society (e.g., Walzer, 1984). Not surprisingly, why some people participate in more demanding forms of political activity, such as protests and electoral campaigns, has long been a central concern for scholars (e.g., Snow et al., 1980; McAdam, 1986; McAdam and Paulsen, 1993; Verba et al., 1995). One answer well established in the social movement literature is that people participate in these activities because somebody, often someone whom they know personally, asks them to do so (e.g., Klandermans and Oegema, 1987; Schussman and Soule, 2005).1 The importance of mobilization through personal contacts also is well documented in other forms of political and civic activity, such as voting, electoral campaigns, and volunteering (Rosenstone and Hansen, 1993; Verba et al., 1995; Freeman, 1997; Brady et al., 1999; Gerber and Green, 2000). Despite these studies, it still is unclear how and why such mobilization efforts influence citizen participation. An important issue that complicates the question is that recruitment attempts are not made randomly. As some studies have found,
E-mail address: [email protected]
1 A similar but more general finding is that people with prior ties to activists are more likely to participate in social movements (McAdam, 1986; della Porta, 1988; McAdam and Paulsen, 1993; Opp and Gern, 1993; Kim and Bearman, 1997; Nepstad and Smith, 1999; Passy and Giugni, 2001; Gould, 2004). In these studies, however, it is usually not clear whether there was an explicit attempt to recruit or encourage participation through activist ties. In part, this is because those studies have little information about the specific interactions between prospective participants and their activist ties. Without such information, they usually assume that activist ties somehow influence prospective participants' decisions, either explicitly through recruitment attempts or through tacit social influence, although the distinction between the two could be critical in understanding how social networks work in mobilizing participants. This study focuses on interpersonal recruitment in which prospective participants receive explicit personal requests.
0049-089X/$ - see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.ssresearch.2009.05.005
recruiters carefully target people who are likely to participate when asked (Zipp and Smith, 1979; Smith and Zipp, 1983; Rosenstone and Hansen, 1993; Brady et al., 1999; Schussman and Soule, 2005).2 Given this selective nature of interpersonal recruitment, one possibility is that it works simply because recruiters ask the right people—i.e., those who might participate even without such requests—not because personal requests influence people's decisions. If recruitment works mainly due to selective targeting, to say that people participate "because" they are asked would be incorrect. We may observe a higher participation rate among the people who are asked merely because they have a higher baseline level of participation than those who are not. People who are frequently asked may be different from those who are not, not only in their baseline propensity to participate, but also in their responsiveness to recruitment attempts. Because recruiters may make requests to the people upon whom they have—or they think they have—a larger impact, interpersonal recruitment could work more effectively on people who are frequently targeted by recruiters. Alternatively, the effect of interpersonal recruitment may be smaller on those who are frequently targeted because recruiters may avoid the people who would require a larger 'recruitment effect' for participation. A larger 'recruitment effect' could mean more effort for recruiters, and given their limited resources, recruiters may prioritize the people who would not require such a large influence for participation. Either way, any such heterogeneity of the recruitment effect would make the estimation of causal effect challenging because the standard regression approach is likely to fail to generate an unbiased and consistent estimate of the causal effect (Angrist et al., 1996; Heckman et al., 1997; Morgan and Winship, 2007). This paper takes both types of potential biases resulting from selective targeting—i.e., different baselines of participation and effect heterogeneity—seriously and asks whether interpersonal recruitment causally affects participation in politics. The selection effect in interpersonal recruitment is an important issue because it poses serious challenges in establishing the causal relationship between recruitment and participation. More importantly, however, confronting selection bias could shed light on how and why interpersonal recruitment works. If the effect of interpersonal recruitment is primarily due to selection bias, this would suggest that mobilization through personal contact is largely a matter of learning about and then mobilizing the people who already are active in politics. Conversely, if interpersonal recruitment causally affects individual participation in politics, this would imply that interpersonal recruitment works because the recruitment attempt actually influences people's decisions and converts non-participants into participants. Examining effect heterogeneity could be particularly fruitful for understanding why interpersonal recruitment works. If the effect of personal contacts on the people who are frequently targeted is different from the effect on those who are not, the source of the difference should be examined. People frequently targeted, who tend to be interested in and knowledgeable about politics, may be motivated by different reasons from those who are rarely targeted.
By exploring the incentives that motivate different segments of the population, we could better understand the complexity of political participation.

2. Selection bias in political mobilization

Despite these implications, selection bias in interpersonal recruitment has not received enough attention in the literature on political participation, especially participation in activities other than voting. The studies that examined the mobilization effect on political participation controlled for other variables that could affect participation, using a regression framework (e.g., Zipp et al., 1982; Rosenstone and Hansen, 1993; Brady et al., 1999; Schussman and Soule, 2005). Controlling for other variables that affect the outcome variable in the conventional regression framework could be an effective way to deal with selection bias, but only under a strict set of conditions. Among other things, there should be no omitted variable that affects both the causal and outcome variables—recruitment attempt and participation, respectively, in this case. One form of political participation for which selection bias has received rigorous treatment is voting. Scholars of voter turnout have long studied the causal effects of various types of mobilization efforts (e.g., Gosnell, 1927; Gerber and Green, 2000; Green et al., 2003). Some of these studies have used field experiments to demonstrate that personal contacts significantly increase voter turnout. This is an important development since randomized experiments can isolate the causal effect of interpersonal recruitment effectively. Unfortunately, no such experimental study has been conducted for political activities other than voting. Still, one might infer, based on these findings from the field experiments, that interpersonal recruitment would causally affect participation in more demanding forms of political activity as well. The findings in the voter mobilization studies are illuminating, but there are good reasons to suspect that interpersonal recruitment may work differently in other forms of political activism. First of all, voting is a more common form of participation than others, and it involves a much lower cost. Moreover, voting is considered to be a civic duty that all eligible citizens are expected to carry out. As a result, interpersonal requests may carry more social pressure for voting than for other forms of political activism, and people may be more responsive to social pressure, given the lower cost of the action. In addition, contacts in field experiments usually are made by strangers, following a standardized protocol. This standardized contact has the advantage of removing potential heterogeneity that could make some contacts more effective than others. However, interpersonal recruitment attempts for other types of political activity usually are made by someone to whom people have personal ties. Therefore, field experiments
2 The term ‘recruiter’ in this study refers not only to professional organizers, but also to more informal political activists who occasionally ask other people to participate.
based on contacts by strangers may not capture the effects of interpersonal recruitment on non-voting forms of political participation, although they could be relevant for the study of voting, in which canvassing by strangers is not uncommon.

3. Counterfactual causal inference using propensity score matching

Lacking experimental data, this study turns to the counterfactual approach of causal inference using propensity score matching to address selection bias in the interpersonal recruitment of political activists (Rubin, 1974, 1977, 1991; Rosenbaum, 1984a, 1984b, 2002; Rosenbaum and Rubin, 1983; Heckman and Robb, 1985, 1986; Morgan and Winship, 2007). Compared to traditional regression techniques, propensity score matching estimators are known to have several advantages (Rosenbaum, 2002; Morgan and Harding, 2006; Ho et al., 2007). The matching method usually is more efficient than conventional regression because fewer parameters are estimated;3 it also is non-parametric, thereby making no assumptions about the functional form of the relationship between the treatment and outcome variables. The matching method also ensures that the treatment and control groups are comparable, so that inferences are based strictly upon comparable treatment and control cases. Some of the most important advantages of the matching method, however, are not inherent to matching itself, but rather are the results of the counterfactual causal framework in which most matching analyses are conducted (Morgan and Harding, 2006). The counterfactual causal framework forces researchers to spell out clearly the causal argument that they are trying to make, and to confront potential problems. In particular, the counterfactual causal framework draws researchers' attention to the process that assigns people into one treatment condition versus another, and to how this process could confound the estimation of the causal effect. As a result, matching methods help to address selection bias more explicitly, rather than simply admitting the possibility of a problem. Most importantly for this study, the counterfactual framework forces investigators to pay attention to causal effect heterogeneity between the treatment and the control group. Despite these advantages, however, propensity score matching does not remove the selection bias introduced by unobserved covariates that may affect both treatment and outcome variables (Rosenbaum, 2002). Arceneaux et al. (2006) compare the results from their field experiment on voter mobilization with the estimates from matching and regression and find that matching does not perform any better than regression in the presence of unobserved covariates. Although their finding does not diminish the other advantages of propensity score matching and the counterfactual framework, it shows that unobserved covariates are a serious challenge for causal inference with matching analysis. To examine how unobserved covariates may affect the matching estimate of the causal effect, this study employs sensitivity analysis (Rosenbaum and Rubin, 1983; Rosenbaum, 2002; Harding, 2003).
Sensitivity analysis assesses the magnitude of a hidden bias that would be necessary to change the inference about the treatment effect; it does this by calculating hypothetical point estimates of the treatment effect and their confidence intervals at various levels of hidden bias.4 If a small amount of hidden bias could yield a very different point estimate and lead to a qualitatively different conclusion, the inference based upon matching analysis should be considered sensitive to hidden bias and, therefore, less defensible. If an extreme hidden bias is required to alter the conclusion, however, we can consider the inference robust. Using a counterfactual framework, this study asks whether a person who participates in response to interpersonal recruitment would do so even if he or she did not receive a request from a recruiter. In this case, the "treatment" is being asked by somebody to participate in a political activity, and the "outcome" is participation in the activity.5 The treatment group will be compared to a control group that does not receive any requests, but otherwise is identical to the treatment group in all observed characteristics that are expected to affect both the treatment and the outcome. The causal effect of interpersonal recruitment will be defined as the difference in the participation rate between the treatment and the control groups. The estimate in this framework is the "average treatment effect on the treated" (ATT) rather than the average treatment effect for the whole sample (ATE). In other words, the estimates are the causal effect of the recruitment attempt on the people who actually receive requests.6 Under many circumstances, the ATT is not the same as the ATE (Angrist et al., 1996; Heckman et al., 1997; Winship and Morgan, 1999; Morgan, 2001). One such condition is causal effect heterogeneity. When the ATT is different from the average treatment effect on the untreated (ATU), the estimate of the ATE (a weighted average of the ATT and ATU) is likely to be biased and inconsistent (Morgan and Winship, 2007). More importantly, it would mask causal effect heterogeneity that could be substantively important. In this situation, it is critical to carefully define the causal effect of interest and to focus on the effect that could be estimated without bias. Using survey data on citizen participation in politics, this study examines the causal effect of interpersonal recruitment in various non-voting forms of political activity. By examining the effects of interpersonal recruitment in different forms of political activity, this study will demonstrate how the effect of interpersonal recruitment varies in different types of political activism.
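In standard potential-outcomes notation (a formulation implied by the references cited above, though not written out in the original text), with $Y^{1}$ and $Y^{0}$ denoting participation with and without a recruitment request and $D$ indicating whether a request was received, the quantities discussed here can be written as

$\mathrm{ATT} = E[Y^{1} - Y^{0} \mid D = 1], \qquad \mathrm{ATU} = E[Y^{1} - Y^{0} \mid D = 0],$

$\mathrm{ATE} = E[Y^{1} - Y^{0}] = \Pr(D = 1)\,\mathrm{ATT} + \Pr(D = 0)\,\mathrm{ATU}.$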
3 If the model is correctly specified and matching discards a large number of observations, however, regression will be more efficient than matching.
4 For the details of how hypothetical point estimates and their confidence intervals are calculated, see Rosenbaum and Rubin (1983) and Harding (2003).
5 Interpersonal recruitment includes different treatment conditions that may vary in their effectiveness. For example, a request by a friend may be more effective than one by a stranger. This issue is addressed in another paper by the author (Lim, 2008). The current paper focuses on the effect of interpersonal recruitment in general.
6 The effect on the untreated would be the hypothetical effect a recruitment attempt would have on the people who did not receive a request.
Table 1
Coding of outcome variable.

Have you received any request directed at you          Participated in (activity name) in past 12 months?
personally to take part in (activity name)?            Participated         Not participated
  Received request, participated in response           Participant          Missing
  Received request, did not participate in response    Non-participant      Non-participant
  Did not receive a request                            Participant          Non-participant
4. Data and measures

This study uses data from the Citizen Participation Study collected by Verba, Schlozman, and Brady (see Appendices A and B in Verba et al., 1995 for more information about the study). The data were collected by means of in-person interviews with a stratified random sample of 2517 respondents. This sample was selected from 15,053 respondents who participated during the first wave of the survey, conducted in 1989. Because Verba and his colleagues over-sampled blacks, Latinos, and political activists, all the analyses in this study are weighted using the sampling weight provided by the authors, so as to make the sample representative of the population.7 While it was collected almost two decades ago, the Citizen Participation Study has some important advantages that are not available in recent datasets. For example, because political activists identified in the first wave were oversampled in the second wave, the data contain a relatively large number of participants in important but infrequent political activities such as protest. More importantly for this study, the Citizen Participation Study includes an excellent survey module on the process of interpersonal recruitment of political activists. The module asks whether a respondent received any personal requests to participate in each of five political activities—volunteering for an electoral campaign; making a financial contribution to a campaign; contacting government or elected officials; participating in a protest; and participating in community activities—within the past 12 months (see Appendix A for the original questions). These questions provide the information for the key causal variables.8 The study asked those who received requests how they responded to the requests—i.e., whether they participated in the activity or not.9 In a different part of the survey, the researchers also asked all respondents whether they had participated in the same activity during the past 12 months, regardless of whether they were asked. I combined these two questions to construct the outcome variables. If the answers to the two questions were consistent—that is, they said either "yes" or "no" to both questions—the outcome variable was coded accordingly. If a respondent did not receive any request, the outcome variable was coded on the basis of the second question—whether they had participated in the activity over the preceding 12 months, regardless of request. A few respondents said "yes" to requests but also reported that they did not participate in any activity during the past 12 months. These cases were not included in the analysis. A complication arises when respondents said they rejected the request but reported that they participated in the activity anyway. The problem is that some of these respondents received more than one request for the same activity, and although they rejected the latest request they received, it is still possible that their participation was in response to one of the earlier requests. Unfortunately, the Citizen Participation Study does not provide any information about who among them participated voluntarily and who did so in response to previous requests. Without such information, I decided to code those respondents as non-participants because the key concern in this study is the overestimation of the causal effect of recruitment due to selective targeting.
If the respondents in question—i.e., those who rejected the latest request but participated in the activity anyway—are coded as recruited participants, we may incorrectly credit recruiters for voluntary participation and, as a result, overestimate the effect of interpersonal recruitment. Table 1 summarizes how the outcome variables were coded. In addition, the Citizen Participation Study has rich data on respondents' socioeconomic status, their social and political dispositions, and their involvement in civic and political activities, so I was able to match on various characteristics that could influence the assignment of the treatment condition. The matching covariates were selected carefully based upon prior studies of how recruiters target their mobilization efforts (Zipp and Smith, 1979; Smith and Zipp, 1983; Rosenstone and Hansen, 1993; Brady et al., 1999). More specifically, the matching covariates include a few different groups of factors: (1) socioeconomic variables, including education and income; (2) various measures of political engagement, including political interest, political efficacy, and intensity of partisan identification; and (3) indicators of embeddedness in political and civic networks (e.g., union and other civic association membership, and involvement in political organizations and other civic associations). Appendix C shows the full list of matching covariates.
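To make the coding rules summarized in Table 1 concrete, a minimal sketch of the logic is shown below. The variable names are hypothetical, not the survey's own item names, and the function simply mirrors the decision rules described in the text.

```python
def code_outcome(received_request, participated_12mo, participated_in_response=None):
    """Code the participation outcome following the rules in Table 1.

    Returns 1 (participant), 0 (non-participant), or None (case dropped).
    Variable names are hypothetical illustrations of the survey items.
    """
    if not received_request:
        # No request received: outcome is simply whether the person
        # participated in the activity over the past 12 months.
        return 1 if participated_12mo else 0
    if participated_in_response:
        # Said yes to the request but reported no participation in the past
        # 12 months: inconsistent answers, excluded from the analysis.
        if not participated_12mo:
            return None
        return 1
    # Rejected the (most recent) request: coded as non-participant even if
    # the person participated anyway, to avoid crediting recruiters for
    # voluntary participation.
    return 0
```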
7 It is important to weight the analyses, particularly because of the over-sampling of political activists—a sampling on the dependent variable. See Appendix B for details about how the weight was handled in the propensity score matching analysis.
8 A couple of other surveys, including the National Election Study and the Giving and Volunteering Survey, also ask about interpersonal recruitment, but they focus on either voting or non-political activities such as charity and volunteering.
9 When respondents received more than one request for the same activity, the survey asked the follow-up question only with respect to the most recent request they received.
Table 2
Effect of interpersonal recruitment on electoral political participation.

                                                        Campaign work         Campaign donation
Baseline participation rate
  Control                                               5.91                  12.10
  Treatment                                             23.17                 29.54
  Difference (T - C)                                    17.26                 17.44
  Odds ratio (T/C)                                      4.80                  3.05
  χ² (1 df)                                             97.78, p < .001       103.24, p < .001
Matched sample participation rate
  Control                                               19.09                 43.17
  Treatment                                             23.04                 28.60
  Difference (T - C)                                    3.95                  -14.57
  Odds ratio (T/C)                                      1.27                  0.53
  χ² (1 df)                                             1.815, p = .178       33.55, p < .001
Matching statistics
  Number of treatment cases (% matched)                 373 (100.00)          728 (97.72)
  Unique control cases                                  294                   360
  Covariate balance (caliper)                           (0.01)                (0.03)
  Covariates with standardized bias < .05               16                    11
  Covariates with standardized bias .05-.10             6                     12
  Covariates with standardized bias .10-.15             2                     2
  Covariates with standardized bias .15-.30             2                     2
  Standardized bias for propensity score                0.00052               0.000074
Sensitivity to hidden bias: point estimate (confidence interval)
  Γ = 1.0, Δ = 1.0                                      1.27 (.88-1.78)       .53 (.42-.66)
  Γ = 1.2, Δ = 1.2                                      1.26 (.89-1.80)       .52 (.42-.65)
  Γ = 1.4, Δ = 1.4                                      1.24 (.87-1.77)       .51 (.41-.64)
  Γ = 1.6, Δ = 1.6                                      1.21 (.84-1.72)       .50 (.40-.62)
  Γ = 1.8, Δ = 1.8                                      1.16 (.81-1.67)       .48 (.39-.60)
Estimates from traditional logit regression
  Logit coefficient (standard error)                    0.99 (.198)           -0.24 (.107)
  Odds ratio                                            2.70                  0.787
5. Results

In this section, the causal effects of interpersonal recruitment are examined with respect to five different types of political activities. Initially, participation in two electoral campaign activities is examined; thereafter, participation in non-electoral activities is assessed. The paper then addresses the issue of causal effect heterogeneity and examines how the effect of interpersonal recruitment is dependent upon one's propensity to be a recruitment target.

5.1. The causal effects of recruitment attempts

Table 2 displays the results of matching analyses for two types of electoral political participation—volunteering for campaign activities and making a campaign donation.10 Both the baseline participation rate in the unmatched sample and the participation rate in the matched sample are presented. The baseline participation rate simply is the percentage of respondents who participated in each activity in the unmatched sample. Beginning with campaign activity in the first column, 23.17% of the respondents who received requests did volunteer work for electoral campaigns in response to those requests; this is considerably higher than the baseline rate of those who did not receive requests, which is merely 5.91%. The odds ratio comparing participation rates in the treatment versus non-treatment groups is 4.8, and this is statistically different from 1.0.11

10 I used a simple algorithm called nearest neighbor matching, combined with the replacement and caliper options (Rosenbaum, 2002). The replacement option allows control cases to serve as a match for more than one treatment case, and the caliper option constrains matches to within a specific percentage difference in propensity score. Different caliper sizes, from 1% to 3%, were used here for different dependent variables to achieve optimal covariate balance. Only one nearest neighbor is selected for each treatment case. I also tried kernel matching to see whether the findings were sensitive to a specific matching procedure. The results were very similar to those from nearest neighbor matching.
11 The χ² statistic for a 2 × 2 table was used to test the statistical significance of the difference.
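The matching procedure described in footnote 10 can be sketched roughly as follows. This is an illustrative simplification, not the paper's actual code: variable names are hypothetical, the sampling weights discussed in Appendix B are ignored, and common-support trimming is omitted.

```python
import numpy as np
import statsmodels.api as sm

def att_nearest_neighbor(X, treat, y, caliper=0.01):
    """One-to-one nearest-neighbor matching on the propensity score, with
    replacement and a caliper, returning participation rates for the matched
    treatment and control groups (whose difference is the ATT estimate)."""
    X, treat, y = np.asarray(X), np.asarray(treat), np.asarray(y)

    # 1. Estimate propensity scores with a logit of treatment on covariates.
    logit = sm.Logit(treat, sm.add_constant(X)).fit(disp=0)
    pscore = np.asarray(logit.predict(sm.add_constant(X)))

    treated_idx = np.flatnonzero(treat == 1)
    control_idx = np.flatnonzero(treat == 0)

    matched_t, matched_c = [], []
    for i in treated_idx:
        # 2. Find the control case with the closest propensity score
        #    (controls may be reused: matching is with replacement).
        dist = np.abs(pscore[control_idx] - pscore[i])
        j = control_idx[np.argmin(dist)]
        # 3. Keep the pair only if it falls within the caliper.
        if abs(pscore[j] - pscore[i]) <= caliper:
            matched_t.append(i)
            matched_c.append(j)

    # 4. Participation rates in the matched treatment and control groups.
    return y[matched_t].mean(), y[matched_c].mean()
```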
The same comparison in the matched sample is found immediately below. As the matching statistics show, the matching algorithm found matches for all 373 treatment cases, and the good covariate balance indicates that the two groups in the matched sample are comparable with respect to most observed variables.12 In the matched sample, the participation rates in the two groups differ only by about 4 percentage points (19.09% in the control group and 23.04% in the treatment group), producing an odds ratio of 1.27, which is not statistically different from 1.0. In other words, the participation gap in campaign activity between the two groups appears to be driven largely by differences in other characteristics, rather than by the causal effect of interpersonal recruitment. Table 2 also shows the results from the sensitivity analysis. As mentioned earlier, the main goal of the sensitivity analysis is to assess the magnitude of the bias by unobserved covariates that would be required to alter the conclusion. The sensitivity analysis does this by calculating point estimates and confidence intervals for the treatment effect for various combinations of two sensitivity parameters: Γ—the association between a hypothetical unobserved covariate and the treatment variable, measured as an odds ratio—and Δ—the association between the unobserved covariate and the outcome variable. For example, when both sensitivity parameters are 1.0 (i.e., when the unobserved covariate has no effect on either the treatment variable or the outcome variable), the estimated treatment effect is the same as the estimate in the matched sample—1.27. As the associations of the unobserved covariate with both the treatment and the outcome variables increase, however, the point estimate of the treatment effect becomes smaller.13 In the case of campaign activity, however, the sensitivity analysis is less meaningful, because the estimated treatment effect in the matched sample is not statistically significant. As a comparison for the matching estimates, the bottom section of Table 2 shows the estimates from traditional logit regression models with the unmatched data.14 The logit models control for all the covariates included in the matching analyses. This comparison shows whether there is any advantage to using the matching analysis instead of the conventional regression approach. The logit estimate of the odds ratio is 2.7, substantially larger than the matching estimate (1.27). Moreover, the effect is statistically significant at the p < 0.001 level. In other words, the traditional logit estimate leads to a very different conclusion regarding the interpersonal recruitment effect on participation in campaign work. One of the main sources of this difference, as I will demonstrate later, is causal effect heterogeneity between the treatment and control groups. The matching estimates in this study are the average treatment effect for the treated—the effect upon the respondents who actually received requests. The treatment effect for the treatment group may not be the same as the average treatment effect for both the treated and the untreated, which the logit regression model estimates, especially when the treatment effect for the untreated is not the same as the effect for the treated. This issue will be discussed in greater detail later. The second column in Table 2 shows the results for campaign donations.
In the unmatched sample, again there is a significant difference in the participation rate between the treatment and control groups—29.54% and 12.1%, respectively. In the matched sample, however, the treatment group actually is less likely to donate money to campaigns in response to requests than the control group is to donate voluntarily. Only 28.6% in the treatment group donated money to campaigns in response to the requests they received, while 43.17% in the control group did so. This is counterintuitive, because it suggests that interpersonal recruitment attempts have a negative effect on campaign donations—i.e., making people less likely to give money to campaigns than they would otherwise. One possible explanation for this negative effect is that the participation of the treatment group is underestimated because of the coding decision on the respondents who rejected the latest request they received but participated in the activity anyway. As explained earlier, these respondents were coded as 'non-participants' under the assumption that they participated voluntarily, although some of them may have participated in response to previous requests and therefore should have been counted as 'recruited participants.' About 83% of the treated respondents received more than one request, and 30% of them made a donation even though they rejected the most recent request. The question is how many of these respondents donated in response to previous requests and how many did so voluntarily. While it is impossible to know the exact answer to this question, we can get a range within which the real answer should fall. The participation rate reported in Table 2 (28.6%) can be considered a lower bound, as all those respondents were treated as voluntary participants—and therefore as non-participants in this analysis. An upper bound would be 52.8%, when all those respondents are counted as recruited participants. This is higher than the participation rate in the control group (43.17%), and the chi-squared test suggests that the difference is statistically significant, which would indicate that interpersonal recruitment does increase campaign donation significantly. However, this is certainly an overestimation of the participation rate of the treatment group. The real answer should be somewhere between the two. One way to acquire a more realistic estimate is to use the information from the control group. The respondents in the control group are similar to those in the treatment group on all observed characteristics, and therefore the voluntary participation rate in the treatment group should be similar to the participation rate in the control group. The donation rate in the control group is 43.17%, all of which should be voluntary donation, as they did not receive any request; we could therefore assume that 43.17% of the respondents who rejected the most recent request but made a donation did so voluntarily.
12 To assess the quality of matching, standardized bias, which commonly is used as an indicator of covariate balance, was used (Rosenbaum and Rubin, 1985). Simply speaking, standardized bias is the difference in means of a covariate between the treated and control groups, divided by the square root of the mean of the variances.
13 Given that our major concern in the presence of selection bias is overestimation rather than underestimation of the treatment effect, only an unobserved covariate that would decrease the effect of interpersonal recruitment was considered.
14 Both matched and unmatched observations were included in this logit regression analysis.
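For reference, the standardized bias described in footnote 12 is conventionally written as follows (one common form of the Rosenbaum and Rubin, 1985 measure):

$\mathrm{SB} = \dfrac{\bar{x}_{T} - \bar{x}_{C}}{\sqrt{(s_{T}^{2} + s_{C}^{2})/2}},$

where $\bar{x}_{T}$ and $\bar{x}_{C}$ are the means and $s_{T}^{2}$ and $s_{C}^{2}$ the variances of a covariate in the treated and control groups, respectively.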
Table 3
Effect of interpersonal recruitment on non-electoral political participation.

                                                        Contacting officials   Community politics    Protest
Baseline participation rate
  Control                                               22.67                  17.94                 2.21
  Treatment                                             58.47                  53.34                 28.51
  Difference (T - C)                                    35.8                   35.4                  26.3
  Odds ratio (T/C)                                      4.799                  5.016                 18.29
  χ² (1 df)                                             288.57, p < .001       182.43, p < .001      335.48, p < .001
Matched sample participation rate
  Control                                               42.96                  35.58                 7.07
  Treatment                                             58.33                  53.13                 28.51
  Difference (T - C)                                    15.37                  17.55                 21.44
  Odds ratio (T/C)                                      1.86                   2.05                  5.29
  χ² (1 df)                                             35.51, p < .001        15.74, p < .001       61.1, p < .001
Matching statistics
  Number of treatment cases                             758                    606                   387
  % Matched                                             98.95%                 98.54%                100.00%
  Unique control cases                                  407                    359                   292
  Covariate balance (caliper)                           (0.01)                 (0.03)                (0.03)
  Covariates with standardized bias < .05               16                     17                    15
  Covariates with standardized bias .05-.10             6                      7                     8
  Covariates with standardized bias .10-.15             3                      1                     2
  Covariates with standardized bias .15-.30             2                      2                     1
  Standardized bias for propensity score                0.00002                0.000068              0.00001
Sensitivity to hidden bias: point estimate (confidence interval)
  Γ = 1.0, Δ = 1.0                                      1.86 (1.51-2.27)       2.05 (1.63-2.58)      5.29 (3.27-7.93)
  Γ = 1.5, Δ = 1.5                                      1.78 (1.45-2.19)       1.97 (1.56-2.48)      5.11 (3.25-8.02)
  Γ = 2.0, Δ = 2.0                                      1.65 (1.34-2.03)       1.83 (1.45-2.32)      4.71 (2.99-7.42)
  Γ = 2.5, Δ = 2.5                                      1.51 (1.22-1.87)       1.68 (1.32-2.14)      4.33 (2.74-6.86)
  Γ = 3.0, Δ = 3.0                                      1.39 (1.12-1.73)       1.54 (1.20-1.96)      4.07 (2.56-6.46)
Logit regression estimates
  Logit coefficient (standard error)                    0.99 (.111)            1.05 (.128)           2.46 (.238)
  Odds ratio                                            2.69                   2.85                  11.70
Under this assumption, the recruited donation rate in the treatment group would be 43.49%, which is substantially higher than the 28.6% reported in Table 2 but almost identical to the donation rate in the control group (43.17%). In short, although the negative effect of interpersonal recruitment in Table 2 could be due to the coding decision on the respondents who received more than one request, there still is no strong evidence that interpersonal recruitment increases campaign donation. The estimate of the interpersonal recruitment effect on campaign volunteer activity is also vulnerable to the same problem. The lack of a recruitment effect reported in Table 2 could be due to the same coding decision on the people who received multiple requests.15 Based on the same approach, the upper bound of the recruited participation rate for campaign volunteering would be 28.8%, which is larger than the 19.1% of the control group. When we exclude possible voluntary participants based on the participation rate of the control group, the participation rate of the treatment group would be somewhat lower, at 27.9%. The odds ratio would be 1.64, which is significantly different from 1. In short, if we assume that a portion of the campaign volunteers who rejected the most recent request participated in response to earlier requests, the participation rate of the treatment group would be significantly higher than that of the control group. The conclusion that personal recruitment does not increase participation in electoral campaign activity, thus, is more sensitive to the coding decision made in Table 2 than the conclusion about campaign donation. Despite this issue of multiple requests, it is worth contemplating what the non-effect or the negative effect of interpersonal recruitment on electoral campaign participation in Table 2 implies. The estimates reported in Table 2 can be considered the effect of a specific recruitment attempt on participation, as opposed to the cumulative effect of all recruitment attempts the respondents receive. In other words, 28.6% is the donation rate that a specific recruiter can expect from her recruitment effort when she targets a respondent in the treatment group. This is lower than the participation rate in the control group, partly because the latter is the rate of voluntary donation for all campaigns whereas the former is the rate for a specific campaign. Since recruiters have to compete with other recruiters who target the same pool of people, they not only
15 About 70% of the respondents who were asked to volunteer for a campaign received multiple requests.
have to motivate the people to donate but also have to persuade them to make a donation to their campaign, not to their competitors. Although their recruitment efforts may not increase the overall participation rate in the population, because they target the people who would make a donation anyway, they still need to make the effort to channel the money into their campaign. Table 3 presents the results of matching analyses for the three non-electoral forms of political participation. In the unmatched sample, the odds of contacting officials or participating in community activity are about five times greater when people receive personal requests. Controlling for the covariates by matching decreases the gap substantially in both types of activity, which indicates that a large proportion of the participation gap can be attributed to differences in the matching covariates. Even in the matched sample, however, personal requests almost doubled the odds of participation for both forms of activism, and the sensitivity analysis shows that the effects are quite robust to potential omitted variable bias. To eliminate the effect of interpersonal recruitment on contacting officials, for example, an unobserved covariate would have to increase the odds of both the treatment assignment and the outcome by more than four times, which is unlikely, given that even the variable that has the largest effect on the treatment assignment has an odds ratio smaller than 4.0. Finally, the logit regression estimates of the recruitment effect support the conclusion, although the logit estimates are considerably larger than the matching estimates. The last column shows the effect of interpersonal recruitment on protest participation. The pattern is very similar to that for community politics, except that the recruitment effect is much larger. Before matching, the odds of participation in the treatment group are more than 18 times greater than for those in the control group. In fact, the baseline participation rate for those who did not receive requests is merely 2.21%, which indicates that people rarely go to protests without receiving personal requests. Even in the matched sample, the participation rate for the control group is only 7.07%, and the odds ratio measuring the difference between the treatment and control groups is 5.29, a larger effect than that observed for any other form of participation. The sensitivity analysis indicates that the effect is very robust to even an extremely large hidden bias. In summary, the analyses in Tables 2 and 3 suggest that interpersonal recruitment does have a significant effect on participation, at least in the three non-electoral forms of political participation. The evidence for a recruitment effect in electoral campaign activity, especially in campaign donation, is much weaker. In the most conservative estimates reported in Table 2, interpersonal recruitment does not increase participation in either of the electoral campaign activities. Only under the unlikely assumption that all donors who received multiple requests but rejected the latest one donated in response to the requests they had received previously does interpersonal recruitment significantly increase participation in campaign donation. And even in that case, the size of the causal effect is smaller than that in non-campaign activities. Later, I will discuss in greater detail why the effect of interpersonal recruitment is more salient in non-electoral activities than in electoral campaign activities.
Another finding that warrants attention is the substantial difference between the matching and logit estimators of interpersonal recruitment effects. In all five forms of participation, the logit estimators with the full sample are significantly larger than the matching estimators. Where does this difference come from? One of the main sources of the difference is that they estimate different kinds of causal effect. As mentioned earlier, the matching analyses in Tables 2 and 3 estimate the average treatment effects for the treated, while the logit regressions estimate the average treatment effects for both the treated and the untreated.16 When a causal effect is constant across individuals in both the treatment and control groups, the two estimates must be the same, or at least similar. When this assumption does not hold, however, the average overall treatment effect will be different from the average treatment effect on the treated. In the next section, this issue will be addressed to examine the patterns of causal heterogeneity more closely.

5.2. Causal effect heterogeneity

Why would the effect of interpersonal recruitment be different between the treatment and control groups? As suggested earlier, the answer may lie in the way recruiters target their recruitment attempts. One of the reasons recruiters selectively target their recruitment attempts is that they have limited resources and want to use these resources more efficiently, by concentrating on the people who are most likely to be recruited with the least amount of effort. This reasoning suggests two different possibilities for causal effect heterogeneity. First, recruiters may choose their targets because their recruitment attempts would have a greater effect on them, while avoiding others because their efforts would have little effect upon those others' decisions. If this is the case, we would expect the causal effect of recruitment to be greater on those who are more prone to receiving requests. Another possibility, however, is that the recruitment effect is smaller on the people with a higher propensity for becoming recruitment targets, because recruiters would avoid the people who require a larger 'recruitment effect' for participation. To maximize the utility of their efforts, they may prioritize the people who are closer to the threshold of participation and therefore require less influence from recruiters. To see which of the two possibilities is more consistent with the data, I examined how propensity scores interact with the treatment variable. If interpersonal recruitment has a larger effect on the people who are more likely to receive requests, we would expect a positive interaction between the treatment variable and the propensity score. On the contrary, a negative
16 Note that the logit regressions in Tables 2 and 3 were estimated with the full dataset without matching. In this context, the logit coefficients estimate the average treatment effects for both the treated and the untreated (Heckman and Robb, 1985; Morgan and Winship, 2007).
Table 4
Propensity score logit regression of participation with interaction.

                                Campaign work       Campaign donation   Contacting officials   Community politics   Protest
Treatment                       1.871 (0.347)***    1.516 (0.291)***    1.538 (0.221)***       1.643 (0.212)***     3.703 (0.369)***
Propensity score                8.106 (0.785)***    5.673 (0.369)***    3.104 (0.285)***       3.473 (0.339)***     8.062 (0.963)***
Treatment × Propensity score    -4.153 (1.419)**    -3.594 (0.553)***   -1.422 (0.486)**       -1.913 (0.568)***    -6.119 (1.317)***
Constant                        -3.913 (0.166)***   -3.551 (0.151)***   -1.988 (0.097)***      -2.098 (0.087)***    -5.080 (0.262)***
LL                              -571.919            -877.138            -1344.516              -1217.267            -336.916
N                               2457                2411                2460                   2494                 2403

** p < .01. *** p < .001.
Fig. 1. Propensity score and probability of participation: campaign volunteering and donation.
interaction between the treatment variable and the propensity score would indicate that the recruitment effect is larger on those people who are less prone to becoming targets of recruitment. Table 4 shows the results from the propensity score logit regressions for all five forms of participation.17 The interaction term is negative and statistically significant in all five models, which suggests that the effect of interpersonal recruitment is larger for those with low propensity scores and smaller for those with high propensity scores. To make the interaction more interpretable, the predicted probability of participation and its confidence interval at different levels of the propensity score were calculated and are presented in Figs. 1 and 2.18 The horizontal axis of each graph represents the propensity score, and the vertical axis shows the probability of participation predicted from the logit regression model in Table 4. The circles represent the probability for the treatment group, and the diamonds show the probability for the control group. In each graph, the gap between the two groups shows the effect of interpersonal recruitment at each level of the propensity score.
17 All observations on the common support region are included in the analyses.
18 I used the 'prvalue' command in the SPOST package in Stata (Long and Freese, 2006) to calculate the predicted probabilities and their confidence intervals.
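The paper computed the predicted probabilities with Stata's prvalue (footnote 18); an analogous sketch in Python is shown below. It fits the logit with the treatment-by-propensity-score interaction reported in Table 4 and returns predicted participation probabilities for treated and control cases across the propensity-score range. Variable names are hypothetical and sampling weights and confidence intervals are omitted.

```python
import numpy as np
import statsmodels.api as sm

def heterogeneity_curve(pscore, treat, y, grid=None):
    """Predicted participation probabilities for treated and control cases
    at a grid of propensity-score values (cf. Figs. 1 and 2)."""
    pscore, treat, y = map(np.asarray, (pscore, treat, y))
    if grid is None:
        grid = np.linspace(0.05, 0.95, 10)

    # Logit of participation on treatment, propensity score, and interaction.
    X = np.column_stack([treat, pscore, treat * pscore])
    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

    def predict(d):
        # Design matrix for treatment status d at each grid value;
        # force the constant even when the treatment column is all zeros.
        Xg = np.column_stack([np.full_like(grid, d, dtype=float), grid, d * grid])
        return model.predict(sm.add_constant(Xg, has_constant="add"))

    # The gap between the two returned curves at each grid point is the
    # estimated recruitment effect at that propensity-score level.
    return grid, predict(1), predict(0)
```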
Fig. 2. Propensity score and probability of participation: community politics and protest.
Table 5
Reasons to volunteer for campaign activity.

                Civic duty   Policy/political goals+   Material incentives*   Collective social incentive   Individual social incentive**
1st Quartile    2.44         2.15                      1.70                   2.21                          1.49
2nd Quartile    2.46         2.25                      1.67                   2.25                          1.37
3rd Quartile    2.48         2.15                      1.54                   2.23                          1.21
4th Quartile    2.53         2.34                      1.50                   2.15                          1.21
Total           2.48         2.22                      1.60                   2.21                          1.32

+ p < .10. * p < .05. ** p < .01.
Both Figs. 1 and 2 indicate that the size of the treatment effect declines as the propensity score increases. In the case of campaign volunteering, for example, the two groups are almost indistinguishable above a propensity score of 0.4. For campaign donation, the treatment group has a higher probability than the control group only up to around .3, and beyond .5, the control group's probability actually is higher than the treatment group's. The patterns for non-electoral activities are less striking than those observed in the first two charts, but similar heterogeneity is found (Fig. 2).19 Unlike the two forms of electoral participation, however, interpersonal recruitment increases the probability of participation even at relatively high propensity scores. In summary, interpersonal recruitment has a larger effect on the people who are less likely to receive personal requests from recruiters in all five forms of participation. This is ironic, because it suggests that the people who are less prone to becoming targets actually are the ones who would respond most positively to interpersonal recruitment.20 This effect heterogeneity sheds light on why the matching estimates of recruitment effects are substantially smaller than the logit estimates that measure the average overall treatment effect. The matching analyses estimate the effects only for the treatment group that, on average, has a higher propensity score, whereas the logit regressions are based upon the whole sample.
19 The pattern of official contacting is similar to that of community politics. The graph for official contacting is not shown.
20 This pattern of effect heterogeneity parallels what recent studies of voter mobilization have found (Hillygus, 2005; Parry et al., 2008). For example, Parry et al. (2008) have shown that campaign contact only affects the turnout of "seldom" voters, who are less likely to be contacted by campaigners. Similarly, Hillygus (2005) has found that campaign efforts such as advertisement and personal persuasion have the strongest influence among people initially not planning to vote. This type of negative selection bias, in which people who are less likely to be affected by a treatment condition are more prone to receive the treatment, is not limited to political mobilization. Morgan (2001), for example, has found that the effect of Catholic school education on academic achievement is larger for the students who have a lower propensity to attend Catholic school (see Brand and Halaby, 2006 for a similar finding on elite college education).
In other words, the matching estimators are weighted toward those cases on the right side of the graphs, while the logit estimators are heavily weighted toward the left side, where most untreated cases are concentrated.

5.3. Self-reported reasons for participation

The finding that the effect of interpersonal recruitment varies by an individual's propensity to receive personal requests raises the question of whether people with varying propensities participate in political activities for different reasons. Given that the effect of interpersonal recruitment is larger for people with a lower propensity, for example, we may expect that social incentives play a greater role in motivating the people who are not frequently targeted by recruiters. On the other hand, people with a higher propensity, who tend to be better educated and politically sophisticated, may be less responsive to social incentives but more responsive to policy issues or political ideologies. While a thorough examination of the motivations for political participation is beyond the scope of this study, I briefly examine whether people with different propensity scores report different reasons for their decision to participate. I focus on campaign volunteering because the Citizen Participation Study asked the most exhaustive list of questions about the reasons for participation in campaign activity. I combined fifteen different reasons for participation into five summary indices: (1) civic duty; (2) policy or political goals; (3) material incentive; (4) collective social incentive; and (5) individual social incentive (Appendix A-3 provides the list of specific questions for each summary index). To examine whether people with different propensity scores rank the importance of each reason differently, I divided the respondents into four quartiles based on their propensity score and compared the mean of each index across the four groups.21 Table 5 suggests that the people with a low propensity score are significantly different from those with a high propensity score in a few respects. First, individual social incentive plays a larger role in motivating the people in the first quartile than those in the fourth quartile. In other words, people who are not frequently targeted by recruiters are more likely to report that they participated in a campaign because they did not want to say no to someone who asked. Similarly, material incentive is more important for the people with a low propensity score, which indicates that they are more likely to be motivated by specific benefits from participation, such as meeting influential people or getting help with a personal problem. On the other hand, policy or political goals are more important in motivating the people with a higher propensity score. In short, different mixes of incentives seem to motivate people with different levels of propensity to receive personal requests. The people with a low propensity, who usually have a lower socioeconomic status and a lower interest in politics, are more likely to be motivated by social incentives provided by recruiters or by specific benefits from participation, whereas those with a high propensity score are more likely to be motivated by the chance to influence policy or to further their partisan interest. These results shed light on why interpersonal recruitment would have a larger effect on the people who are less likely to be targeted by recruiters.
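The comparison behind Table 5 (one-way ANOVA across propensity-score quartiles, footnote 21) can be illustrated with the sketch below. The data frame and column names are hypothetical and this is not the paper's actual code; the survey weights are ignored.

```python
import pandas as pd
from scipy import stats

def compare_reasons_by_quartile(df, index_cols, pscore_col="pscore"):
    """Compare the mean of each summary index of reasons for participation
    across propensity-score quartiles with a one-way ANOVA."""
    out = {}
    # Assign each respondent to a propensity-score quartile (0 through 3).
    quartile = pd.qcut(df[pscore_col], 4, labels=False)
    for col in index_cols:
        groups = [df.loc[quartile == q, col].dropna() for q in range(4)]
        f_stat, p_val = stats.f_oneway(*groups)
        out[col] = {
            "quartile_means": [g.mean() for g in groups],
            "F": f_stat,
            "p": p_val,
        }
    return out
```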
6. Summary and discussion

This study aimed to assess the causal effect of interpersonal recruitment on political participation. In particular, I focused on two types of bias (i.e., differential baselines and effect heterogeneity) introduced by the selective targeting of mobilization efforts, and employed the counterfactual framework to estimate the causal effect of interpersonal recruitment on participation in five different types of political activity. The results show that a large part of the observed recruitment effect is, in fact, due to the selective targeting of recruitment efforts. However, even after taking the effect of selective targeting into account, interpersonal recruitment does have an effect on participation in some types of political activity, especially non-electoral forms of political activity. The effect is largest on protest participation, wherein interpersonal recruitment increases the odds of participation more than fivefold. The effects on participation in community politics and contacting officials also are substantial and robust, as interpersonal recruitment doubles the odds of participation. The evidence for participation in electoral activities is more mixed. Personal contact may increase participation in campaign volunteer activity, but the effect is smaller compared to the recruitment effect in non-electoral activities. Moreover, the effect is significant only when we consider the cumulative effects of all recruitment attempts respondents receive. Why would the effect of interpersonal recruitment vary in different types of political activity? In particular, why would interpersonal recruitment have no effect or a smaller effect on participation in electoral campaign activities whereas it has the largest effect on protest participation? A few different factors appear to be operating here. First, electoral campaigns usually are equipped with more resources and better information than, for example, protest organizers. As a result, campaigns would be able to target their efforts more precisely toward those people who have the highest potential for participation, which would make the causal effect of their efforts smaller. Another factor to consider is the competition between different campaigns for activists and donors. As many campaigns try to mobilize from the same pool of potential activists and donors, recruiters not only must motivate someone to participate, but they must also persuade him or her to participate in their own campaign rather than in their opponent's. This high level of competition would make it harder for recruiters to influence potential participants' decisions. Finally, the differential information cost involved in electoral and non-electoral participation should be considered. Information about an election campaign is more widely and easily available than that for a protest event or community board meeting; thus, as long as a person is interested in the campaign, he or she can find out how to volunteer and donate money for a campaign without too much difficulty.
21 One-way ANOVA was used for the comparison.
On the other hand, a protest event tends to be less well advertised, so that participation in a protest requires a higher search cost on the part of each participant, just to find out where and when the event is going to happen. Interpersonal recruitment would be helpful in reducing this high search cost, thereby having a significant effect even on those who already are prone to participation. Another intriguing finding in this study is that the causal effect of interpersonal recruitment is not constant across individuals, but varies with their propensity to receive requests. People who have a higher chance of becoming targets of recruitment are less likely to be influenced by those recruitment attempts, while those who have less chance of receiving requests are most likely to be affected. The analyses of the self-reported reasons for participation shed some light on why the effect would be larger for the people with a low propensity. Compared to the people who are frequently targeted, those with a low propensity are more likely to be motivated by social and material incentives, and less by policy goals or political ideology. These kinds of incentives would be more effective when they are offered through personal contacts rather than by mail or by other means of communication. The heterogeneity of incentives that motivate people to participate in politics challenges the 'one-size-fits-all' approach of conventional models of political participation that focus on personal interests and individual resources (e.g., Verba et al., 1995). These models explain differential participation across the general population well, but they may not be very helpful for understanding why and how those who are unlikely to be active in politics, according to the conventional models, sometimes become active in politics. While people with low socioeconomic status who lack individual resources or civic skills are generally less likely to be active in politics than those with high socioeconomic status, some of them do participate in politics. How do they overcome the barriers to participation? What motivates them to become political activists despite the high barriers? To answer these questions, future research needs to pay attention to whether different factors influence the political participation of various subgroups and also to how the same factors could affect subgroups differently. The findings about the heterogeneity of the interpersonal recruitment effect raise another question: why would recruiters target people on whom they have less effect? As rational prospectors, recruiters would like to maximize their marginal return by concentrating their efforts on the people on whom they have larger effects. One factor that needs to be considered is the cost of recruitment. Although a recruiter could have a larger effect on people with a low propensity, reaching out to and motivating them could be more costly than persuading those who are easier to reach and more prone to participate. Even if the cost of recruiting the people with a low propensity does not exceed the potential marginal gain from their participation, both the cost and their participation potential could be uncertain to recruiters (Oliver and Marwell, 1992). Facing this high uncertainty, recruiters may turn to the people whose political interests and orientations are more visible.
In particular, when other organizations or campaigns compete for the same pool of potential activists, recruiters would rather spend their limited resources on ensuring the participation of those with high participation potential than spread them thinly across people whose participation potential is uncertain.

This mobilization strategy, although it may be effective for an individual organization or campaign, can have an important social consequence. As recruiters concentrate their mobilization efforts on the fraction of the population that is both equipped with personal resources and interested in politics, rather than on the larger segment that faces higher barriers to participation and lacks the resources or motivation to overcome them, interpersonal recruitment is likely to reinforce the already unequal pattern of political participation. In fact, some studies have demonstrated that this targeted recruitment strategy is responsible for declining and increasingly unequal citizen participation in the U.S. political process (Schier, 2000; Goldstein and Ridout, 2002). The findings of this study, however, suggest that interpersonal recruitment could be a useful tool for reversing this trend if recruiters directed their efforts toward the people who usually do not participate in politics of their own accord.22 These individuals, who often lack both an interest in politics and the personal resources for participation, require additional and more specific incentives to become politically active, and mobilization through personal contacts could be a valuable source of such incentives.

Acknowledgments

This paper has benefited from valuable comments and advice given by Peter V. Marsden, Kenneth T. Andrews, Marshall Ganz, Jason Kaufman, Christopher Winship, Alison Jones, Matthew Baggetta, David Campbell, and anonymous reviewers.

Appendix A. Survey questions

1. Participation in activity

1.1. Campaign work

Since January 1988, the start of the last national election year, have you worked as a volunteer—that is, for no pay at all or for only a token amount—for a candidate running for national, state, or local office?
22 Schussman and Soule (2005) found that the level of political engagement and civic skills have larger effects on protest participation among people who did not receive requests than among those who did. Their finding mirrors the effect heterogeneity found in this study, since it indicates that interpersonal recruitment equalizes the probability of participation between the ‘‘political haves” and the ‘‘have-nots.”
1.2. Campaign donation

Now we would like to talk about contributions to campaigns. Since January 1988, did you contribute money to an individual candidate, a party group, a political action committee, or any other organization that supported candidates?

1.3. Contacting officials

(A participant responded positively to one or more of the following.)

Now I want to ask you a few questions about contacts you may have initiated with government officials or someone on the staff of such officials—either in person or by phone or letter—about problems or issues with which you were concerned. Please don’t count any contacts you have made as a regular part of your job.

In the past 12 months, have you initiated any contacts with a federal elected official or someone on the staff of such an official: I mean someone in the White House or a congressional or Senate office? What about a non-elected official in a federal government agency? What about an elected official on the state or local level—a governor or mayor or a member of the state legislature or a city or town council—or someone on the staff of such an elected official? And what about a non-elected official in a state or local government agency or board?

1.4. Community activity

(A participant responded positively to one or more of the following in the past 12 months.)

Now some questions about your role in your community. In the past two years, have you served in a voluntary capacity—that is, for no pay at all or for only a token amount—on any official local governmental board or council that deals with community problems and issues, such as a town council, a school board, a zoning board, a planning board, or the like? Was this in the past 12 months? Have you attended a meeting of such an official local government board or council in the past 12 months?

Aside from membership on a board or council or attendance at meetings, I’d like to ask also about informal activity in your community or neighborhood. In the past 12 months, have you gotten together informally with or worked with others in your community or neighborhood to try to deal with some community issue or problem?

(An activist in the community is someone who answered positively to at least one of the acts in the preceding questions.)

1.5. Protest

(A participant took part in a protest during the past 12 months.)

In the past two years, have you taken part in a protest, march, or demonstration on some national or local issue (other than a strike against your employer)? Was this in the last 12 months?

2. Recruitment and participation in response to recruitment

In the past 12 months, have you received any request directed at you personally to take part in (activity named)? Did you respond positively to the request—I mean, did you take part in the action requested?

3. Reasons for participation
‘‘Thinking about the time when you decided to become active in the campaign, please tell me if each of these reasons was very important, somewhat important, or not very important in your decision to become active in the campaign. How about:”
Summary index categories: Civic duty; Policy/political goals; Material incentive; Collective social incentive; Individual social incentive.

Items (each rated very important, somewhat important, or not very important):
My duty as a citizen
I am the kind of person who does my share
The chance to make the community or nation a better place to live
The chance to influence government policy
The chance to further the goals of my party
The chance to further my job or career
The chance for recognition from people I respect
I might want to get help from an official on a personal or family problem
I wanted to learn about politics and government
The chance to meet important and influential people
I wanted to work with people who share my ideals
The chance to be with people I enjoy
I did not want to say no to someone who asked
Appendix B. Sampling weight

I used the weight provided by the authors of the survey (the variable named Wt2517). In the matching analyses, I first used the weight in the propensity score estimation to ensure that minorities and political activists are not concentrated in either the treatment or the control group. After constructing the matched sample based on the propensity score, I applied the individual weight of each treated subject to both the treated subject and its matched subject (i.e., its counterfactual subject). This ensures that the treated subjects are representative of the U.S. population that received the request.
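To make the two-step procedure above concrete, the following is a minimal sketch in Python of a weighted propensity score estimation followed by nearest-neighbor matching and a weighted comparison of matched pairs. It is illustrative only: apart from the survey weight Wt2517, the column names (request, campaign_work, and the covariate list) and the 1:1 matching-with-replacement rule are assumptions, not the specification used in this study.

import numpy as np
import pandas as pd  # df is expected to be a pandas.DataFrame
from sklearn.linear_model import LogisticRegression

def matched_att(df, covariates, treatment, outcome, weight="Wt2517"):
    # Step 1: estimate the propensity score with the survey weight so that
    # the estimation reflects the weighted (population) composition.
    X = df[covariates].to_numpy()
    d = df[treatment].to_numpy()
    model = LogisticRegression(max_iter=1000)
    model.fit(X, d, sample_weight=df[weight].to_numpy())
    df = df.assign(pscore=model.predict_proba(X)[:, 1])

    treated = df[df[treatment] == 1]
    controls = df[df[treatment] == 0]

    # Step 2: nearest-neighbor matching on the propensity score
    # (1:1 with replacement here; an assumption, not necessarily the rule used).
    control_ps = controls["pscore"].to_numpy()
    nearest = [controls.index[int(np.argmin(np.abs(control_ps - p)))]
               for p in treated["pscore"]]
    matched = controls.loc[nearest]

    # Step 3: apply each treated subject's weight to both the treated subject
    # and its matched counterfactual, and average the weighted differences to
    # obtain the effect of the treatment on the treated (ATT).
    diff = treated[outcome].to_numpy() - matched[outcome].to_numpy()
    return np.average(diff, weights=treated[weight].to_numpy())

# Hypothetical usage: effect of a personal request on campaign volunteering.
# att = matched_att(survey_df, covariates=["age", "education", "income"],
#                   treatment="request", outcome="campaign_work")

Weighting the matched pairs by the treated subjects' weights, as in Step 3, is what keeps the treated group representative of the weighted population of people who received requests.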
Appendix C. Description of the variables

Outcomes: participation in...
Campaign work: Worked as a volunteer for a candidate running for national, state, or local office in the past year
Campaign donation: Contributed money to an individual candidate, a party group, a political action committee, or any other organization that supported candidates
Contacting officials: Initiated any contacts with an elected or non-elected official on the federal, state, or local level in the past year
Community activity: Served in a voluntary capacity on a local governmental board or council; attended a meeting of such a board or council; worked with others in your community to try to deal with some community issue or problem in the past 12 months
Protest: Taken part in a protest, march, or demonstration on some national or local issue in the past 12 months (other than a strike against your employer)

Treatment variables
Request to participate: Received any request directed at you personally to take part in each of the five activities

Matching variables
Age: Respondent's age
Gender: Respondent's sex
Race: Whites/Blacks/Others
Birth place: Born in the U.S.
Education: Years of education
Income: Family income
Employment: Whether respondent is currently working
Political engagement: Factor score based on three scale variables: political interest, political efficacy, and political information
Civic skills: Engaged in the following activities in the job, voluntary organizations, and church/synagogue: written a letter; gone to a meeting where respondent participated in decision making; planned or chaired a meeting; given a presentation or speech
Registered to vote: Currently registered to vote
Partisan strength: Strong Republican or Democrat
Party identification: Republican/Democrat/Others
Household union members: Anyone in household is a member of a labor union
Length of residence: Years lived in the current neighborhood
Church attendance: Religious service attendance
Membership in civic association: Member of a civic association
Membership in political association: Member of a civic association that takes a political stand
Hours spent for civic association: Hours spent for the civic association(s) of which respondent is a member
Money given to civic association: Money given to the civic association(s) of which respondent is a member
Mail solicitation: Received mail asking to donate to political organizations, a political cause, or a candidate; donated money in response to the mail
References

Angrist, Joshua D., Imbens, Guido W., Rubin, Donald B., 1996. Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91, 444–455.
Arceneaux, Kevin, Gerber, Alan S., Green, Donald P., 2006. Comparing experimental and matching methods using a large-scale voter mobilization experiment. Political Analysis 14, 1–36.
Brady, Henry E., Schlozman, Kay Lehman, Verba, Sidney, 1999. Prospecting for participants: rational expectations and the recruitment of political activists. American Political Science Review 93, 153–168.
Brand, Jennie E., Halaby, Charles N., 2006. Regression and matching estimates of the effects of elite college attendance on educational and career achievement. Social Science Research 35, 749–770.
della Porta, Donatella, 1988. Recruitment process in clandestine political organizations: Italian left-wing terrorism. In: Tarrow, S., Klandermans, B., Kriesi, H. (Eds.), International Social Movement Research, vol. 1. JAI Press, Greenwich, pp. 155–169.
Freeman, Richard B., 1997. Working for nothing: the supply of volunteer labor. Journal of Labor Economics 15, S140–S166.
Gerber, Alan S., Green, Donald P., 2000. The effects of canvassing, telephone calls, and direct mail on voter turnout: a field experiment. American Political Science Review 94, 653–663.
Goldstein, Kenneth M., Ridout, Travis N., 2002. The politics of participation: mobilization and turnout over time. Political Behavior 24, 3–29.
Gosnell, Harold F., 1927. Getting Out the Vote: An Experiment in the Stimulation of Voting. University of Chicago Press, Chicago.
Gould, Roger V., 2004. Why do networks matter? Rationalist and structuralist interpretations. In: Diani, Mario, McAdam, Doug (Eds.), Social Movements and Networks: Relational Approaches to Collective Action. Oxford University Press, New York, pp. 233–257.
Green, Donald P., Gerber, Alan S., Nickerson, David W., 2003. Getting out the vote in local elections: results from six door-to-door canvassing experiments. Journal of Politics 65, 1083–1096.
Harding, David J., 2003. Counterfactual models of neighborhood effects: the effect of neighborhood poverty on dropping out and teenage pregnancy. American Journal of Sociology 109, 676–719.
Heckman, James J., Robb, Richard, 1985. Alternative methods for evaluating the impact of interventions. In: Heckman, J.J., Singer, B. (Eds.), Longitudinal Analysis of Labor Market Data. Cambridge University Press, Cambridge, pp. 156–245.
Heckman, James J., Robb, Richard, 1986. Alternative methods for solving the problem of selection bias in evaluating the impact of treatments on outcomes. In: Wainer, H. (Ed.), Drawing Inferences from Self-Selected Samples. Springer-Verlag, New York, pp. 63–113.
Heckman, James J., Smith, Jeffrey, Clements, Nancy, 1997. Making the most out of programme evaluations and social experiments: accounting for heterogeneity in programme impacts. Review of Economic Studies 64, 487–535.
Hillygus, D. Sunshine, 2005. Campaign effects and the dynamics of turnout intention in election 2000. Journal of Politics 67, 50–68.
Ho, Daniel, Imai, Kosuke, King, Gary, Stuart, Elizabeth, 2007. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis 15, 199–236.
Kim, Hyojoung, Bearman, Peter S., 1997. The structure and dynamics of movement participation. American Sociological Review 62, 70–93.
Klandermans, Bert, Oegema, Dirk, 1987. Potentials, networks, motivations, and barriers: steps towards participation in social movements. American Sociological Review 52, 519–531.
Lim, Chaeyoon, 2008. Social networks and political participation: how do networks matter? Social Forces 87, 961–981.
Long, J. Scott, Freese, Jeremy, 2006. Regression Models for Categorical Dependent Variables Using Stata, second ed. Stata Press, College Station.
McAdam, Doug, 1986. Recruitment to high risk activism: the case of Freedom Summer. American Journal of Sociology 92, 64–90.
McAdam, Doug, Paulsen, Ronnelle, 1993. Specifying the relationship between social ties and activism. American Journal of Sociology 99, 640–667.
Morgan, Stephen L., 2001. Counterfactuals, causal effect heterogeneity, and the Catholic school effect on learning. Sociology of Education 74, 341–374.
Morgan, Stephen L., Harding, David J., 2006. Matching estimators of causal effects: prospects and pitfalls in theory and practice. Sociological Methods and Research 35, 3–60.
Morgan, Stephen L., Winship, Christopher, 2007. Counterfactuals and Causal Inference: Methods and Principles for Social Research. Cambridge University Press, Cambridge.
Nepstad, Sharon Erickson, Smith, Christian, 1999. Rethinking recruitment to high-risk/cost activism: the case of Nicaragua exchange. Mobilization 4, 25–40.
Oliver, Pamela, Marwell, Gerald, 1992. Mobilizing technologies for collective action. In: Morris, Aldon D., Mueller, Carol M. (Eds.), Frontiers of Social Movement Theory. Yale University Press, New Haven, pp. 251–272.
Opp, Karl-Dieter, Gern, Christiane, 1993. Dissident groups, personal networks, and spontaneous cooperation: the East German revolution of 1989. American Sociological Review 58, 659–680.
Parry, Janine, Barth, Jay, Kropf, Martha, Jones, E. Terrence, 2008. Mobilizing the seldom voter: campaign contact and effects in high profile elections. Political Behavior 30, 97–113.
Passy, Florence, Giugni, Marco, 2001. Social networks and individual perceptions: explaining differential participation in social movements. Sociological Forum 16, 123–153.
Rosenbaum, Paul R., 1984a. The consequences of adjustment for a concomitant variable that has been affected by the treatment. Journal of the Royal Statistical Society, Series A 147 (5), 656–666.
Rosenbaum, Paul R., 1984b. From association to causation in observational studies: the role of tests of strongly ignorable treatment assignment. Journal of the American Statistical Association 79, 41–48.
Rosenbaum, Paul R., 2002. Observational Studies, second ed. Springer-Verlag, New York.
Rosenbaum, Paul R., Rubin, Donald B., 1983. Assessing sensitivity to an unobserved covariate in an observational study with binary outcome. Journal of the Royal Statistical Society, Series B 45 (2), 212–218.
Rosenbaum, Paul R., Rubin, Donald B., 1985. Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. American Statistician 39, 33–38.
Rosenstone, Steven J., Hansen, John Mark, 1993. Mobilization, Participation, and Democracy in America. Macmillan, New York.
Rubin, Donald B., 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66 (5), 688–701.
Rubin, Donald B., 1977. Assignment to treatment group on the basis of a covariate. Journal of Educational Statistics 2, 1–26.
Rubin, Donald B., 1991. Practical implications of the modes of statistical inference for causal effects and the critical role of the assignment mechanism. Biometrics 47, 1213–1234.
Schier, Steven E., 2000. By Invitation Only: The Rise of Exclusive Politics in the United States. University of Pittsburgh Press, Pittsburgh.
Schussman, Alan, Soule, Sarah A., 2005. Process and protest: accounting for individual protest participation. Social Forces 84, 1083–1108.
Smith, Joel, Zipp, John F., 1983. The party official next door: some consequences of friendship for political involvement. Journal of Politics 45, 958–977.
Snow, David A., Zurcher Jr., Louis A., Ekland-Olson, Sheldon, 1980. Social networks and social movements: a microstructural approach to differential recruitment. American Sociological Review 45, 787–801.
Verba, Sidney, Schlozman, Kay Lehman, Brady, Henry E., 1995. Voice and Equality: Civic Voluntarism in American Politics. Harvard University Press, Cambridge.
Walzer, Michael, 1984. Spheres of Justice: A Defense of Pluralism and Equality. Basic Books, New York.
Winship, Christopher, Morgan, Stephen L., 1999. The estimation of causal effects from observational data. Annual Review of Sociology 25, 659–707.
Zipp, John F., Smith, Joel, 1979. The structure of electoral participation. American Journal of Sociology 85, 167–177.
Zipp, John F., Landerman, Richard, Luebke, Paul, 1982. Political parties and political participation: a reexamination of the standard socioeconomic model. Social Forces 60, 1140–1153.