Overreporting of voting participation as a function of identity


The Social Science Journal 49 (2012) 421–429


Philip S. Brenner

Department of Sociology, University of Massachusetts Boston, USA
Center for Survey Research, University of Massachusetts Boston, USA

Article history: Received 24 April 2012; Received in revised form 17 October 2012; Accepted 17 October 2012; Available online 9 November 2012

Keywords: Voting; Survey research; Identity; Measurement error; Response bias

Abstract

This paper proposes an explanation of the overreporting of voting participation based in Stryker's identity theory (1980), integrating the social pressure approach of Bernstein, Chadha, and Montjoy (2001). A set of logistic regression models is estimated to predict the propensity to overreport using indicators of extensive and intensive commitments to a political identity. Models are tested using vote verification data from five years of the American National Election Studies (1978, 1980, 1984, 1986, and 1990). Identity commitments are strongly predictive of overreporting, while conventional understandings of social desirability fail to predict it. Findings suggest that conceptualizing self-reported voting as a measure of identity – in terms of extensive and intensive commitments to a political identity – may better explain why respondents overreport their voting.

© 2012 Western Social Science Association. Published by Elsevier Inc. All rights reserved.

That sample surveys yield inflated voting participation rates is well established; however, the mechanisms generating this systematic error are debated (Andersson & Granberg, 1997; Belli, Traugott, & Beckmann, 2001; Bernstein, Chadha, & Montjoy, 2001; Cassel, 2003; Granberg & Holmberg, 1991; Hill & Hurley, 1984; Presser & Traugott, 1992; Sigelman, 1982; Tittle & Hill, 1967). Comparisons between survey estimates and aggregate turnout indicate that either individuals who vote are much more likely to participate in surveys or that survey respondents overreport their voting behavior. While the former is at least partially true, research using vote validation measures from the American National Election Studies (ANES) repeatedly confirms the latter. However, the results of ANES vote validation studies raise an interesting question: if self-reported voting is not a perfect indicator of actual voting, what else do survey measures of voting behavior actually measure?

∗ Correspondence address: 100 Morrissey Blvd., Boston, MA 02125, USA. Tel.: +1 617 287 7200; fax: +1 617 287 7210. E-mail address: [email protected]

1. Literature review

Conventional explanations focus on social desirability as a cause: the respondent's desire to appear virtuous or to make a good impression on the interviewer. To reduce this desire to impress the interviewer, experimental question wordings encourage the respondent to admit having not voted, with the aim of improving the validity of the measure. Duff, Hanmer, Park, and White (2007) compare voting rates from record data to self-reports from an experimental question revised to include excuses like "I didn't vote, but I meant to" and "I didn't vote, but I usually do." While the experimental question decreases self-reported rates of voting by a few percentage points compared to the standard question, it still generates a great deal of overreporting when compared to turnout statistics. This finding is consistent with the existing literature, which finds little or no improvement in criterion validity from experimental questions (Abelson, Loftus, & Greenwald, 1992; Belli, Traugott, & Rosenstone, 1994; Presser, 1990). Similar findings arise in research on other normative behaviors. DeBell and Figueroa (2011) use an experimental church attendance question in the ANES, intended to reduce

0362-3319/$ – see front matter © 2012 Western Social Science Association. Published by Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.soscij.2012.10.003


bias in self-reports. However, the experimental question increased bias compared to the standard ANES question.

A different perspective argues that the key to understanding overreporting is the similarity between the characteristics that predict both voting and overreporting (Silver, Anderson, & Abramson, 1986). The impetus that drives some individuals to the polls is the same factor that drives others to misreport. Bernstein et al. (2001) describe this factor as "social pressure," noting that respondents who are most likely to feel pressure are most likely to vote or overreport their voting. While this explanation challenges the standard social desirability explanation of overreporting as a function of the desire to create a good impression (Bernstein et al., 2001, pp. 25–26), its focus on demographic covariates leads to inconclusive findings that disagree on which demographic categories are significant (Cassel, 2003; Kanazawa, 2005). Brenner (2012) finds little consistency in the coefficients of demographic covariates when predicting overreporting of a related normative behavior. This finding leads him to suggest that "it may be more fruitful for researchers in these areas to focus on more distal causes, like the importance of these normative identities, rather than more proximal associations with a set of demographic categories" (Brenner, 2012, p. 14).

An explanation based in identity does not rely on demographic categorizations or social pressure from important others. Rather, it is a more general approach that includes perceived social pressure, real or imagined, as a form of identity commitment. The current study expands on the work of Bernstein et al. (2001) by proposing an explanation rooted in the respondent's self-concept. Using a foundation in identity theory (Stryker, 1980), normative behavior overreporting is arguably better understood as the enactment of a salient identity.
In the specific example explicated here, voting overreporting is understood as an enactment of a political identity. In situations that provide an opportunity for the performance of an individual's political identity, like the survey interview, the relative location of this identity in his or her salience hierarchy determines enactment.

1.1. Social desirability and self-presentation

Tourangeau and Smith (1996) and Presser (1990) argue that self-presentational concerns affect answers to sensitive questions. Compared to interviewer-administered modes, self-administered questionnaires yield increased reporting of socially undesirable behaviors, such as drug use (Aquilino, 1994). Self-administered questionnaires result in more valid measurement of sensitive behaviors because they allow the respondent to avoid the embarrassment of disclosing sensitive information to an interviewer, affirming that presentational concerns motivate respondents to edit their answers to appear virtuous. However, extending this explanation of the underreporting of contranormative behaviors to the overreporting of normative behaviors is problematic. If overreporting is generated by these conventional mechanisms rooted in the interaction between the interviewer and respondent, contextual factors (Karp & Brockington, 2005) should be expected to bias self-reports of normative behavior. First, increasing the distance

between the interviewer and respondent should relieve the pressure to report in a socially desirable fashion (Aquilino, 1994; Colombotos, 1965; de Leeuw & van der Zouwen, 1988; Moon, 1998; Rogers, 1976). As a result, higher levels of normative behavior overreporting may be expected in personal interviews than in modes where an interviewer is not physically present, like a telephone interview. However, both experimental and observational studies fail to support the link between increasing distance between respondent and interviewer and increasing reporting validity on normative behaviors. For example, manipulations of the distance between interviewer and respondent, as well as of input/output modality (voice vs. text), fail to show a difference in reports of sensitive behaviors (Moon, 1998). de Leeuw and van der Zouwen (1988) find a slight propensity toward overreporting in their meta-analysis of telephone surveys but suggest this counterintuitive finding is likely the result of respondents' lack of confidence in the authenticity of the phone survey (Groves, 1979, 1990; Groves & Kahn, 1979; Holbrook, Green, & Krosnick, 2003). de Leeuw and van der Zouwen argue that if this skepticism is alleviated, sensitive behaviors reported in telephone and personal interviews will be equivalent. Data from the 1984 ANES provide an opportunity to test this argument: at the time of the post-election interview, all respondents had previously completed face-to-face interviews and were thus relieved of their skepticism regarding the authenticity of the study, eliminating this source of bias.

Second, increasing interview privacy should reduce the discomfort of disclosing a failure to behave in a socially desirable fashion (Tourangeau & Smith, 1996). However, experimental privacy manipulations do not produce this hypothesized effect.
Advances in computer technology create options for manipulating the level of privacy experienced by the respondent (Richman, Kiesler, Weisband, & Drasgow, 1999), but neither simply computerizing a self-administered questionnaire (Jobe, Pratt, Tourangeau, Baldwin, & Rasinski, 1997) nor adding an audio component (ACASI) consistently improves validity above the self-administration effect (Couper, Singer, & Tourangeau, 2003). Non-technological manipulations also fail to show an effect, including varying the setting of the interview (Jobe et al., 1997) or testing the effect of the presence of others. In neither the laboratory (Couper et al., 2003) nor the field (Silver, Abramson, & Anderson, 1986) has the presence of a third party interrupting the interview had a consistent, measurable effect on the reporting of sensitive questions.

1.2. Identity theory and reported voting

Stryker (1980) attempts to understand the identity function by positing two important concepts: salience and commitment. Salience is the propensity to interpret a situation in a way that provides an opportunity to perform a given identity (Stryker & Serpe, 1982). Commitment is partitioned into two dimensions: extensivity, the extent to which the identity links the individual to actual or generalized others, and intensivity, the depth or quality of those linkages. An identity with a greater quantity of higher quality commitments will rank higher in an individual's salience hierarchy. In the specific example of political


identity, individuals who engage in political activity that leads them to connect with others vis-à-vis their political identity have higher levels of political identity salience than individuals who do not participate in these activities. As well, individuals who engage in activities that promote an internal conversation with a generalized political other should have higher levels of political identity salience. Notably, social pressure fits into this form of identity commitment: partisanship, which Bernstein et al. (2001) used as an indicator of social pressure, matches intensive identity commitment as a measure of the strength of the individual's connection to others based on similar political values and beliefs, leading to higher levels of salience.

However, there may be more to identity performance than this. Voting validates and affirms a political identity by increasing commitment and salience, yielding positive affect (Green & Shachar, 2000). Reporting voting participation on a survey may work in a similar way. The voter performs an identity-affirming behavior by reporting his or her participation in the election. The nonvoting respondent reports similarly, perhaps to avoid an identity-disaffirming interaction (Burke, 1980, 2003; Burke & Tully, 1977), reflecting an ought or ideal identity (Higgins, 1987) rather than reporting actual behavior. In essence, the question being answered becomes "are you the sort of person who votes?" rather than "did you actually vote?" as the respondent pragmatically interprets a question that is semantically about past behavior, thereby creating an opportunity to enact and affirm his or her political identity (Belli, Traugott, Young, & McGonagle, 1999). In short, the respondent performs a political identity by reporting having voted.
A voting decision model that ignores high costs and low direct benefits posits that if an individual feels the need to verify his or her own ideal or normative version of his or her self-concept, s/he will vote. Yet this approach does not consider that other incentives and deterrents may outweigh the immediate need to validate one's self-concept (Sigelman & Berry, 1982). However, presented with other opportunities to validate his or her ideal self-concept, one may enact a political identity, especially if it is relatively low-cost to perform. The survey interview is just this type of opportunity (Burke, 1980), providing the respondent who holds a salient political identity with a relatively low-cost opportunity to perform it. Self-reported voting provides a common denominator for comparing validated voters and overreporters (Table 1).


These two respondent groups are similar in their identity performance in the survey interview even if costs prevent overreporters from enacting their political identity by actually traveling to the polls. Moreover, the two groups of nonvoters – admitted nonvoters and overreporters – can be contrasted (Anderson & Silver, 1986). If identity is a cause of overreporting, a difference will emerge between the two groups of nonvoters, with overreporters demonstrating higher levels of identity commitment than admitted nonvoters. That is, if identity causes the unwarranted claims of behavior, the survey measure of identity commitment will distinguish between these two groups. Alternatively, if overreporting is unrelated to identity, no difference will emerge between these groups.

H1. Political identity commitment will be strongly and positively predictive of overreporting amongst nonvoters (i.e., overreporters and admitted nonvoters).

If the conventional approach has explanatory power, the presence of another individual and the personal interview mode should encourage overreporting amongst nonvoters.

H1a. Causes linked to conventional understandings of socially desirable responding (presence of others, personal interview mode) are independent of overreporting amongst nonvoters.

If a salient identity causes both voting and unwarranted voting claims, no difference will emerge between the two groups of self-reported voters. Alternatively, if the cause of overreporting is orthogonal to identity, a difference will emerge between these two groups, with verified voters demonstrating higher levels of identity commitment than overreporting respondents.

H2. Political identity commitment is independent of overreporting amongst self-reported voters (overreporters and validated voters).

2. Methods

This analysis uses data from the ANES, an ongoing, nationally representative survey of political attitudes and behavior conducted during election years (Sapiro, Rosenstone, & the National Election Study, 2004). The study conducts personal interviews after presidential and congressional elections, but modification to this design in certain years allows testing of the hypotheses of the current

Table 1
Distribution of self-reported and validated voting, 1978, 1980, 1984, 1986, and 1990.

                                Validation
Self-report                     Voted               Did not vote         Total
Reported voting                 Verified voter      Overreporter         5783 (58.8%)
                                4625 (47.0%)        1158 (11.8%)
Did not report voting           Underreporter       Admitted nonvoter    4058 (41.2%)
                                72 (0.7%)           3986 (40.5%)
Total                           4697 (47.7%)        5144 (52.3%)         9841

Data source: American National Election Studies.


project.1 The majority of post-election interviews are completed by the end of November and nearly all are completed by the end of December. Surveys are primarily administered in person in the respondent's home. The current project includes data from five surveys: three conducted after congressional elections (1978, 1986, and 1990) and two conducted both before and after the presidential elections of 1980 and 1984.2 Moreover, in the 1984 post-election survey, an experiment comparing two survey modes – personal and telephone interviews – was conducted to test the appropriateness of telephone data collection for the ANES. These five years of data are selected because they are the only years available that contain both the outcome of a reverse record check verifying the voting claims of respondents and a measure of the presence of others during the interview.

2.1. Dependent variables

The overreporting of voting behavior is calculated from a comparison of the survey self-report and the outcome of the validation procedure. This comparison produces a four-category nominal variable: (1) validated voters, (2) overreporters, (3) admitted nonvoters, and (4) underreporters (Table 1). Validated voters and overreporters both report having voted; the reports of the former are verified by the validation procedure, but those of the latter cannot be verified. Admitted nonvoters neither vote nor claim to vote. A final category, underreporters, comprises respondents who did not claim to have voted but whose records show they did. This last category is less than one percent of respondents and is therefore not addressed. The dependent variable is dichotomized in two ways to test the two hypotheses based in identity theory. First, in order to test Hypothesis 1, overreporting is predicted for nonvoters (column 2 in Table 1), allowing for comparison between overreporters (=1) and admitted nonvoters (=0).
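The four-category classification and the Hypothesis 1 dichotomization amount to a cross of the self-report against the validation record. The following is an illustrative sketch, not ANES code; the function and variable names are hypothetical:

```python
from typing import Optional


def classify(reported: bool, validated: bool) -> str:
    """Cross the survey self-report with the validated record (Table 1)."""
    if reported:
        return "validated voter" if validated else "overreporter"
    return "underreporter" if validated else "admitted nonvoter"


def overreport_among_nonvoters(reported: bool, validated: bool) -> Optional[int]:
    """Hypothesis 1 outcome: overreporter = 1, admitted nonvoter = 0.

    Returns None for validated voters and underreporters, who are
    excluded from this comparison.
    """
    category = classify(reported, validated)
    if category == "overreporter":
        return 1
    if category == "admitted nonvoter":
        return 0
    return None
```
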
Second, in order to test Hypothesis 2, overreporting is predicted for self-reported voters (row 1 in Table 1), allowing for comparison between overreporters (=1) and validated voters (=0). 2.2. Measures of identity commitment The first identity commitment variable measures extensive commitment to a political identity, placing the respondent into a network of other politically involved or interested individuals. This form of commitment is operationalized as participation in the political process using a set of dichotomous measures of the respondent’s political behaviors: engagement in political meetings or organizations, working or volunteering for party or candidate, and displaying some advertisement (e.g., wearing a button) in support of a candidate. Due to low participation rates in relatively high-level activities, this index is dichotomized to

1 ANES began in 1948. More information on the ANES is available at: http://www.electionstudies.org. 2 1982 cannot be used as voting claims were not validated after the election and 1988 cannot be used as the item measuring the presence of others was recorded during the pre-election interview, but not during the post-election interview.

reflect participation in any or none of these activities; the mean level of participation is 13% after dichotomization (Appendix A).

Intensive identity commitment reflects the strength of one's connection to a political identity. This form of commitment includes the daily activities one performs to maintain his or her political identity, operationalized here as the use of media to keep informed about political campaigns. A summary index is created using four variables (α = 0.70), identifying how many television programs were viewed, radio programs heard, and newspaper and magazine articles read about the campaign.3 The mean value of the scale is 3.8 (2.8 s.d.) with a maximum value of 12 (7 in 1990; fn 3).

Following Bernstein et al. (2001), partisanship is included in these analyses, measured as strength of identification with a political party in four levels: strong, weak, independent but leaning toward a party, and completely independent or apolitical. This indicator of pressure fits into the model as a form of intensive commitment, integrating the social pressure and identity explanations. The mean level of partisanship is 1.8 (1.0 s.d.), just below weak.

It is possible that the same identity-based bias in the self-report of voting also biases reported commitment behaviors. Considering this possibility, an interviewer-rated measure of political knowledge is also included. This variable is based on interviewer observations, judging the "respondent's general level of information about politics and public affairs" on a five-point rating scale from "very high" to "very low." Exploratory analyses suggest this item reflects the respondent's actual level of political knowledge poorly4 but appears to be a plausible measure of the overall strength of the respondent's political identity. The mean value of the scale is 2.9 (1.1 s.d.).

2.3. Factors related to conventional notions of social desirability

Two factors are included to test hypotheses related to conventional understandings of socially desirable responding as rooted in the interaction between the interviewer and respondent. The first measures the presence of other adults during the post-election interview. This variable allows for a test of the privacy hypothesis: if the respondent's overreport is generated by an external social pressure, the presence of another individual may increase the respondent's propensity to overreport. Very young children are excluded, as their presence may not influence the reporting of the respondent.5 Due to coding limitations, this distinction is drawn at six years of age. The variable is dichotomized with zero

3 Instead of a quantity judgment, respondents answered a question in 1990 about the amount of attention they paid to newspapers. While this is a different measure, it appears to operate similarly enough for the purposes of this study to be included in the summary media use index. 4 Point-biserial correlation coefficient of interviewer rating and knowledge of party controlling Congress (0.34); knowledge of candidate running for House in respondent’s district (0.12). 5 Including or omitting children in the presence variable does not alter the findings.


indicating that no other individuals are present and one indicating the presence of at least one other individual. About 40% of interviews were conducted with at least one other individual present. The second measure, the mode of administration, tests the distance hypothesis and is only available in 1984. The random assignment of respondents to different administration modes allows for a test of the causal effect of mode. If external social pressure encourages the respondent to overreport, respondents in personal interviews should have a higher propensity to overreport than those assigned to telephone interviews. Since the experimental factors are collinear, models are estimated separately.

2.4. Demographics and social pressure indicators

Many of the demographic variables used by Bernstein et al. (2001) and others are included. Individuals of higher status should be more likely to view themselves as connected to a political identity and to feel a greater level of pressure to vote. Both income and years of education are included as interval measures of SES. Religiosity, measured as frequency of religious service attendance, is included as it is positively correlated with overreporting. Being contacted and encouraged to vote by a party or candidate encourages overreporting; as such, a dichotomous indicator of contact is included. Residence variables for New England, Mid-Atlantic, North Central, South, Border States, Mountain, and Pacific regions are included. Finally, an ethnicity measure is included, as nonwhites are argued to be more likely to overreport their voting participation (Abramson & Claggett, 1992; Silver, Anderson, et al., 1986). Race is included as a set of dummy variables: White non-Latino (reference category), Black non-Latino, Latino/a, and other. Gender, age, marital status, and parenthood are also included. Age and number of children are continuous, and marital status is included in currently, previously, or never married categories.

2.5. Analysis plan

Analyses pool data from all five years, and logistic regression models are estimated to test the effect of political identity commitment indicators on the propensity of nonvoters and self-reported voters to overreport. Results are discussed as odds ratios, comparing overreporters with other nonvoters in the primary analysis and with other self-reported voters in the subsequent analyses. As a comparison, indicators of the conventional understanding of the social desirability effect are also included, as are dummy variables indicating the year of data collection.

3. Results


3.1. Predicting overreporting amongst nonvoters

The first set of models predicts overreporting amongst nonvoters – admitted nonvoters and overreporters – including identity commitments and either the presence of another individual during the personal interview (Model 1) or survey mode (Model 2) as predictors. As hypothesized, both extensive and intensive commitments to a political identity are associated with a higher propensity to overreport voting. In the full model using every year of available data, respondents who participate in extensive commitment activities have nearly three times the odds (β = 1.02) of overreporting (Model 1 in Table 2). In addition, each intensive commitment activity respondents participate in is associated with a 15% increase in the odds of overreporting (β = 0.14).6 Finally, a one-unit increase in partisanship yields a 32% increase in the odds of overreporting. The percentage of cases correctly predicted by the model is 81.1%, a small improvement over the predictive power of Bernstein et al. (76.5%) and over a model that assumes the modal value for each respondent (78.9%). The point-biserial correlation between the predicted probability and actual value of overreporting (0.43) is also higher than that reported by Bernstein et al. (0.34).

The effect of presence of others (Model 1) is statistically significant but in the direction opposite that hypothesized by the conventional understandings of social desirability (β = −0.30). Rather than indicating that the presence of others during the interview increases nonvoters' propensity to respond in a socially desirable manner, the presence of others is associated with accurate reporting. Respondents whose interviews are completed in the presence of others have about a 25% reduction in the odds of overreporting compared to those respondents whose interviews are private. These findings control for respondent age, marital and parenthood status, as well as demographic variables that may explain the presence of others in the household; they also include social pressure indicators, such as income, education, and race.7 These controls do not change the general findings.

Like the presence of others, a personal interview should encourage overreporting amongst nonvoters (Model 2). Research comparing telephone and personal modes suggests that uncertainty regarding the authenticity of the phone survey causes the respondent to feel suspicious, leading to responses skewed in a socially desirable manner. However, the design of the 1984 ANES survey may relieve that suspicion, as post-election personal and phone interviews are preceded by a pre-election personal interview. Answering the question of de Leeuw and van der Zouwen (1988), Groves (1979), and others, it appears that when relieved of this suspicion, telephone respondents do not differ from personal interview respondents in their overreporting of voting participation. Model 2 does show some weakening of extensive commitment (participation) to marginal significance, likely due to the limitation of this analysis to a small sample from only one year of data. Notably, both the measure of intensive commitment activity and partisanship remain strongly predictive of overreporting.

6 An interaction was tested between these two key independent variables in each model but it was not statistically significant. 7 Interaction effects between presence of others and both identity commitments and the demographic variables associated with family size (children, marital status, education, income) are tested. None are statistically significant and are, therefore, not included.
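As a quick arithmetic check on the interpretations above, a logistic regression coefficient β converts to an odds ratio via exp(β). A minimal sketch using the Model 1 coefficients reported in the text:

```python
import math


def odds_ratio(beta: float) -> float:
    """Convert a logistic regression coefficient to an odds ratio."""
    return math.exp(beta)


# Model 1 coefficients reported in the text.
print(round(odds_ratio(1.019), 2))   # 2.77: "nearly three times the odds"
print(round(odds_ratio(0.139), 2))   # 1.15: "a 15% increase in the odds"
print(round(odds_ratio(-0.297), 2))  # 0.74: "about a 25% reduction in the odds"
```
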


Table 2
Logistic regression models predicting overreporting, by subgroup and year.

Predictors (coeff., s.e., and p reported for each model): social presence; personal interview; commitments (extensive, intensive, political knowledge, partisanship, contact); marital status (married, previously married, never married); male; race (White, Black, Latino/a, other); income; education; age; children; religious service attendance; region (New England, Mid-Atlantic, Midwest, South, Mountain, Pacific); year (1978, 1980, 1984, 1986, 1990); constant.

                            Nonvoters                    Self-reported voters
                            Model 1       Model 2        Model 3       Model 4
Number of obs.              2950          525            3491          1166
LR χ² (df)                  524.8 (26)    112.8 (22)     228.0 (26)    63.8 (22)

Selected coefficients (s.e.): Model 1 social presence −.297 (.108)**; Model 1 extensive commitment 1.019 (.165); Model 1 intensive commitment .139 (.023); Model 3 social presence −.24 (p ≤ .05); Model 4 personal interview .13 (n.s.). The remaining cell values are not recoverable from this copy.

Data source: National Election Studies.
* p ≤ 0.05. ** p ≤ 0.01. *** p ≤ 0.001.
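The fit statistics used in the comparison with Bernstein et al. – the percentage of cases correctly predicted and the point-biserial correlation between the predicted probability and the observed outcome – can be computed as in the sketch below. The data shown are toy values for illustration only, not the ANES sample:

```python
import math


def percent_correct(y, p, cutoff=0.5):
    """Share of cases whose 0/1 outcome matches the thresholded prediction."""
    hits = sum(1 for yi, pi in zip(y, p) if (pi >= cutoff) == bool(yi))
    return hits / len(y)


def point_biserial(y, p):
    """Pearson correlation between a 0/1 outcome and a continuous prediction."""
    n = len(y)
    my, mp = sum(y) / n, sum(p) / n
    cov = sum((yi - my) * (pi - mp) for yi, pi in zip(y, p)) / n
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / n)
    sp = math.sqrt(sum((pi - mp) ** 2 for pi in p) / n)
    return cov / (sy * sp)


# Toy values only: 1 = overreporter, 0 = admitted nonvoter.
y = [1, 0, 1, 1, 0, 0]
p = [0.8, 0.3, 0.6, 0.7, 0.4, 0.2]
print(percent_correct(y, p))             # 1.0: every case on the right side of 0.5
print(round(point_biserial(y, p), 2))    # 0.93
```
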

3.2. Predicting overreporting amongst self-reported voters

The second set of models predicts overreporting amongst self-reported voters – verified voters and overreporters – using the same indicators of identity commitment, interview privacy, and survey mode. As hypothesized, respondents who claim to vote engage in similar levels of identity commitment behaviors and are the same in strength of party affiliation, whether or not they actually voted. It is possible that this inability to distinguish between verified voting claims and overreports is a problem with the measurement of identity commitment. In short, correlation between the bias in the identity measures and the self-report of voting may produce this hypothesized null finding.

Therefore, an interviewer rating of the respondent’s political knowledge is included in each model as an indicator of the respondent’s political identity. While a proxy measure has clear measurement weaknesses, one of its strengths is that it is less susceptible to the social desirability bias that can plague self-reports.8 As would be predicted by the identity hypothesis, political knowledge does not rise to conventional levels of statistical significance in comparisons amongst self-reported voters. As in the previous set of models, the presence of others during the survey interview is negatively associated

8 The interviewer will, however, base his or her judgment on interaction with the respondent, and as such, this item likely taps into the self-concept projected by the respondent.


with overreporting (β = −.24; p ≤ .05). Self-reported voters whose interview is conducted in the presence of another individual have about 80% the odds of overreporting compared to respondents interviewed without another individual present. Moreover, respondents who claim to have voted in a telephone interview are no more or less likely to overreport than respondents in face-to-face interviews (β = 0.13, n.s.).

4. Discussion

This study explains the overreporting of voting amongst nonvoters as the effect of an identity process. Self-reported voting is viewed as a measure of the salience of a political identity, regardless of actual behavior. This understanding incorporates Bernstein et al.’s (2001) concept of pressure, particularly the measure of partisanship, as a form of intensive commitment to a political identity. However, identity theory does not require that the source of pressure be an external other, like one’s friends, family, or others in a social group. Rather, the importance placed on voting is a social norm, but this norm is internalized, and the pressure to behave in accordance with it comes from within. One need not feel shame, or the threat of shame, to behave in accordance with an identity. Rather, the motivation for voting claims is shifted to an internal interaction with the self or a generalized other. Symbolic interactionist theory posits that much of the interaction that shapes our selves is internal; that is, the conversation we have with ourselves informs our self-concepts and shapes our identities (Mead, 1934). The respondent does not claim to have voted because of the pressure of a highly educated social circle or because of the presence of an interviewer sitting across a table. Rather, the respondent claims to have voted because of his or her self-concept as the sort of person who does. These models bear out this point.
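The approximately 80% figure follows directly from exponentiating the logistic coefficient to obtain an odds ratio:

\[
\mathrm{OR} = e^{\hat{\beta}} = e^{-0.24} \approx 0.79,
\]

that is, self-reported voters interviewed with another person present have roughly four-fifths the odds of overreporting relative to those interviewed alone.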
Even controlling for the demographic indicators of pressure proposed by Bernstein et al., the extensive and intensive identity commitments remain predictive of overreporting. Moreover, the strength of these effects suggests that identity theory provides a plausible explanation of the phenomenon of overreporting. These commitment behaviors are strongly predictive of overreporting even after accounting for survey mode and the presence of other adults.

This approach can be applied to survey reporting of other normative behaviors, like church attendance. Brenner (2011) applies a similar model to the overreporting of church attendance, arguing that extensive and intensive commitments predict both actual and overreported church attendance. Future research can expand on and further test the identity model on the overreporting of pro-environmental and health-related behaviors.

Measurement issues add a caveat to these findings. First, the measures of identity commitment available in the ANES are not ideal. Moreover, the commitment variables used here may be biased in the same direction as the self-report of voting, and their coefficients may simply be picking up this systematic error. However, participation rates for these commitment activities are around seven percent for


attendance at political meetings, around four percent for volunteering, and around eight percent for advertising for a campaign. These activity rates are likely too low to reflect substantial overreporting on the scale of the overreporting of voting participation, suggesting that this bias is not a significant problem. Moreover, the strength of the effect of the interviewer judgment suggests these associations are not driven only by correlated biases.

Second, memory problems may lead to inadvertent error in the self-reports. However, memory errors may work in conjunction with an identity approach to encourage overreporting (Belli et al., 1999). If the respondent does not remember whether s/he voted in the last election, and if the respondent sees him- or herself as the sort of person who votes, the respondent’s self-concept as a voter may be used to impute the information missing from memory. That said, the post-election interviews occurred soon after the election, so memory issues are likely minimal.

Finally, the presence of another individual during the interview could have the opposite effect of that suggested here. The present other could know the respondent’s actual voting behavior, the respondent would know that they know, and would report accurately to avoid lying in front of this knowledgeable other. If so, the presence of another individual in the interview would have a significant, negative effect in predicting overreporting amongst nonvoters. A weakness of these data is their inability to distinguish the age and the relationship of the other individual to the respondent. One might expect a different reaction if the other is a knowledgeable spouse rather than a pre-teenage child. In addition, there is no way to know when the interruption occurred; there may be a difference between an interruption long before the voting question is asked and the presence of another individual during or just before this question. Future research can fruitfully address these weaknesses.

5. Conclusion

Identity theory gives the student of voting behavior another tool with which to examine the discrepancy between self-reports of voting and voting record data. Prior theoretical and modeling approaches have relied on notions of social desirability in survey reporting and related understandings of social pressure as a motivation to overreport voting. However, both of these approaches fall short in that they assume a primary role for the other: the respondent reports having voted to appear virtuous to the interviewer or to prevent the shame of disappointing important others. Neither approach, however, explains why self-reports of voting vary relatively little between survey modes, nor why the presence of others has little effect on the reporting of socially desirable behavior. Moreover, conventional explanations of social desirability ignore the respondent’s own self-concept. Identity theory fills this void by explaining how identity commitments – both extensive and intensive – can lead to differentials in identity salience, resulting in overreporting of voting participation.



An identity approach was used to explain overreporting in terms of extensive and intensive commitments to a political identity. As hypothesized, these commitments help to distinguish between the two types of nonvoters – those who honestly admit that they did not vote and those who claim that they did. Moreover, in contradiction to conventional wisdom, neither increasing the privacy of the interview nor increasing the distance between interviewer and respondent improved the validity of the self-report of voting. In sum, overreporting appears to be more about the respondent’s self-concept as the sort of person who votes than concern over self-presentation to the interviewer or important others.

Appendix A. Descriptive statistics

Variable name                        Mean/proportion   s.d.   Range
Dependent variable
  Overreporting (=1)
    Amongst nonvoters                22.5%                    0–1
    Amongst self-reported voters     20.0%                    0–1
Key independent variables
  Social presence                    38.5%                    0–1
  Personal interview                 49.2%                    0–1
  Commitments
    Extensive                        0.13                     0–1
    Intensive                        3.8               2.8    0–12
  Partisanship                       1.8               1.0    0–3
  Political knowledge                2.9               1.1    1–5
  Contact                            0.25                     0–1
Other independent variables
  Marital status
    Never married                    16.8%
    Married                          58.8%
    Previously married               24.3%
  Sex (Male)                         44.0%
  Race/ethnicity
    White                            82.4%
    Black                            11.9%
    Latino/a                         3.9%
    Other                            1.8%
  Income                             13.1              6.1    1–23
  Education                          12.3              3.0    0–17
  Age                                44.2              17.8   17–98
  Children                           0.8               1.1    0–10
  Religious service attendance       2.2               1.4    0–4
  Region
    New England                      5.3%
    Mid-Atlantic                     14.2%
    North Central                    26.5%
    South, Border states             34.8%
    Mountain                         5.1%
    Pacific                          14.1%

Data source: American National Election Studies, 1978, 1980, 1984, 1988, and 1990.

References

Abramson, P. R., & Claggett, W. (1992). The quality of record keeping and racial differences in validated turnout. Journal of Politics, 54(3), 871–880.
Abelson, R. P., Loftus, E. F., & Greenwald, A. G. (1992). Attempts to improve the accuracy of self-reports of voting. In J. M. Tanur (Ed.), Questions about questions (pp. 138–153). New York: Russell Sage.
Anderson, B. A., & Silver, B. D. (1986). Measurement and the mismeasurement of the validity of the self-reported vote. American Journal of Political Science, 30(4), 771–785.
Andersson, H. E., & Granberg, D. (1997). On the validity and reliability of self-reported vote: Validity without reliability? Quality and Quantity, 31, 127–140.

Aquilino, W. S. (1994). Interview mode effects in surveys of drug and alcohol use: A field experiment. Public Opinion Quarterly, 58(2), 210–240.
Belli, R. F., Traugott, M. W., & Beckmann, M. N. (2001). What leads to voting overreports? Contrasts of overreporters to validated voters and admitted nonvoters in the American National Election Studies. Journal of Official Statistics, 17(4), 479–498.
Belli, R. F., Traugott, S., & Rosenstone, S. J. (1994). Reducing overreporting of voter turnout: An experiment using a source monitoring framework. NES Technical Report 35.
Belli, R. F., Traugott, M. W., Young, M., & McGonagle, K. A. (1999). Reducing vote overreporting in surveys: Social desirability, memory failure, and source monitoring. Public Opinion Quarterly, 63(1), 90–108.
Bernstein, R., Chadha, A., & Montjoy, R. (2001). Overreporting voting: Why it happens and why it matters. Public Opinion Quarterly, 65(1), 22–44.
Brenner, P. S. (2012). Investigating the effect of bias in survey measures of church attendance. Sociology of Religion, http://dx.doi.org/10.1093/socrel/srs042
Brenner, P. S. (2011). Investigating the biasing effect of identity in self-reports of socially desirable behavior. Sociological Focus, 44(1), 55–75.
Burke, P. J. (2003). Relationships among multiple identities. In P. Burke, T. J. Owens, R. Serpe, & P. A. Thoits (Eds.), Advances in identity theory and research (pp. 195–214). New York: Kluwer-Plenum.
Burke, P. J. (1980). The self: Measurement implications from a symbolic interactionist perspective. Social Psychology Quarterly, 43(1), 18–29.
Burke, P. J., & Tully, J. (1977). The measurement of role/identity. Social Forces, 55(4), 272–285.
Cassel, C. A. (2003). Overreporting and electoral participation research. American Politics Research, 31(1), 81–92.
Colombotos, J. (1965). The effects of personal vs. telephone interviews on socially acceptable responses. Public Opinion Quarterly, 29(3), 457–458.
Couper, M. P., Singer, E., & Tourangeau, R. (2003). Understanding the effects of audio-CASI on self-reports of sensitive behavior. Public Opinion Quarterly, 67(3), 385–395.
DeBell, M., & Figueroa, L. (2011). Results of a survey experiment on frequency reporting: Religious service attendance from the 2010 ANES Panel Recontact Survey. Paper presented at the 66th annual conference of the American Association for Public Opinion Research, Phoenix, AZ.
de Leeuw, E. D., & van der Zouwen, J. (1988). Data quality in telephone and face-to-face surveys: A comparative meta-analysis. In R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, & J. Waksberg (Eds.), Telephone survey methodology (pp. 283–299). New York: Wiley.
Duff, B., Hanmer, M. J., Park, W. H., & White, I. K. (2007). Good excuses: Understanding who votes with an improved turnout question. Public Opinion Quarterly, 71(1), 67–90.
Granberg, D., & Holmberg, S. (1991). Self-reported turnout and voter validation. American Journal of Political Science, 35(2), 448–459.
Green, D. P., & Shachar, R. (2000). Habit formation and political behavior: Evidence of consuetude in voter turnout. British Journal of Political Science, 30, 561–573.
Groves, R. M. (1979). Actors and questions in telephone and personal interview surveys. Public Opinion Quarterly, 43(2), 190–205.
Groves, R. M. (1990). Theories and methods of telephone surveys. Annual Review of Sociology, 16, 221–240.
Groves, R. M., & Kahn, R. L. (1979). Surveys by telephone: A national comparison with personal interviews. New York: Academic Press.
Higgins, E. T. (1987). Self-discrepancy: A theory relating to self and affect. Psychological Review, 94(3), 319–340.
Hill, K. Q., & Hurley, P. A. (1984). Nonvoters in voters’ clothing: The impact of voting behavior misreporting on voting behavior research. Social Science Quarterly, 65, 199–206.
Holbrook, A. L., Green, M. C., & Krosnick, J. A. (2003). Telephone versus face-to-face interviewing of national probability samples with long questionnaires: Comparisons of respondent satisficing and social desirability bias. Public Opinion Quarterly, 67(1), 79–125.
Jobe, J. B., Pratt, W. F., Tourangeau, R., Baldwin, A. K., & Rasinski, K. A. (1997). Effects of interview mode on sensitive questions in a fertility survey. In L. Lyberg, P. Biemer, M. Collins, E. de Leeuw, C. Dippo, N. Schwarz, & D. Trewin (Eds.), Survey measurement and process quality (pp. 311–329). New York: Wiley.
Kanazawa, S. (2005). Who lies on surveys, and what can we do about it? The Journal of Social, Political, and Economic Studies, 30(3), 361–370.
Karp, J. A., & Brockington, D. (2005). Social desirability and response validity: A comparative analysis of overreporting voter turnout in five countries. Journal of Politics, 67(3), 825–840.
Mead, G. H. (1934). Mind, self and society. Chicago: University of Chicago Press.

Moon, Y. (1998). Impression management in computer-based interviews: The effects of input modality, output modality, and distance. Public Opinion Quarterly, 62(4), 610–622.
Presser, S. (1990). Can changes in contexts reduce vote overreporting in surveys? Public Opinion Quarterly, 54(4), 586–593.
Presser, S., & Traugott, M. W. (1992). Little white lies and social science models: Correlated response errors in a panel study of voting. Public Opinion Quarterly, 56(1), 77–86.
Richman, W., Kiesler, S., Weisband, S., & Drasgow, F. (1999). A meta-analytic study of social desirability distortion in computer-administered questionnaires, traditional questionnaires, and interviews. Journal of Applied Psychology, 84(5), 754–775.
Rogers, T. F. (1976). Interviews by telephone and in person: Quality of responses and field performance. Public Opinion Quarterly, 40(1), 51–65.
Sapiro, V., Rosenstone, S. J., & the National Election Studies. (2004). American National Election Studies cumulative data file, 1948–2002 [Computer file, 12th ICPSR version]. Ann Arbor, MI: University of Michigan, Center for Political Studies [producer]; Inter-university Consortium for Political and Social Research [distributor].


Sigelman, L. (1982). The nonvoting voter in voting research. American Journal of Political Science, 26(1), 47–56.
Sigelman, L., & Berry, W. D. (1982). Cost and the calculus of voting. Political Behavior, 4(4), 419–428.
Silver, B. D., Abramson, P. R., & Anderson, B. A. (1986). The presence of others and overreporting of voting in American national elections. Public Opinion Quarterly, 50(2), 228–239.
Silver, B. D., Anderson, B. A., & Abramson, P. R. (1986). Who overreports voting? American Political Science Review, 80(2), 613–634.
Stryker, S. (1980). Symbolic interactionism: A social structural version. Caldwell, NJ: Blackburn Press.
Stryker, S., & Serpe, R. T. (1982). Commitment, identity salience, and role behavior: Theory and research example. In W. Ickes & E. Knowles (Eds.), Personality, roles, and social behavior (pp. 192–216). New York: Springer-Verlag.
Tittle, C. R., & Hill, R. J. (1967). The accuracy of self-reported data and prediction of political activity. Public Opinion Quarterly, 31(1), 103–106.
Tourangeau, R., & Smith, T. W. (1996). Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly, 60(2), 275–304.