Interviews
Casey A. Klofstad
Harvard University, Cambridge, Massachusetts, USA
Glossary

close-ended question  A question in an interview to which the respondent chooses his or her answer from a set list of choices.

error  Any instance in which the data collected through an interview are not an accurate representation of what the researcher thinks he or she is measuring, or any instance in which data are not completely representative of the population under study.

focus group  A group interview; a moderator leads a group of similar people in discussion on a topic of interest to the researcher. The moderator encourages participants to share their opinions with one another in order to collect data on the topic under study.

incentive  A way of increasing the benefits of participating in an interview, relative to the costs, in order to increase participation. Incentives can be solidary or material.

open-ended question  A question in an interview to which the respondent must create his or her own answer because a set list of choices is not provided.

personal interview  A one-on-one conversation between an interviewer and a respondent; akin to the work of journalists.

questionnaire  A predetermined list of interview questions that is filled out either by the interviewee or by the researcher. Synonyms include interview schedule and instrument.

respondent  A participant in an interview. In some fields, the word subject can be an analogous term.

response rate  The number of completed interviews as a percentage of the entire sample size.

response scale  The set of answers a respondent chooses from in answering a close-ended question.

sample  The subset of a population that is selected to participate in the interview.
An interview is a method of directly collecting data on the attitudes and behaviors of individuals through questions posed by the researcher and answers provided by a respondent.
Introduction

What is an Interview?

An interview is a method of collecting data on the attitudes and behaviors of individuals through some form of conversation between the researcher and a respondent. Unlike research methods that rely on less direct sources of information about social phenomena (for example, indirect observation, archival research, and formal theoretical approaches), interviews are unique because data are collected directly from the individuals who are creating and experiencing the social phenomenon under study.

Direct measurement of social phenomena has its advantages. The data collected are timely. Information can be collected on topics for which archival or other preexisting data sources do not exist, are unreliable, or are incomplete. The social measures derived from interviews, in theory, directly reflect the attitudes and behaviors of individuals in society. And, unlike an archive or a set of economic indicators, the researcher can interact with the subject under study in order to gain a richer understanding of social phenomena.

However, interviews can be time-consuming and costly to implement correctly, and they often require hiring the services of trained professionals. Interview subjects may be less likely to reveal certain types of sensitive or detailed information to a researcher. Interviews can also create a tendency to focus too much on the individual and not enough on his or her social context. Finally, as discussed later in this article, direct measurement of social phenomena through interviews can invite biases and errors in social measurement.
Types of Interviews
Personal Interviews

Personal interviews, akin to the work of journalists, involve one-on-one conversations with a set of respondents. For example, a researcher interested in the role of lobbyists in policymaking might interview a number of lobbyists and members of Congress. This type of interview tends to be less structured than the other types discussed in this article because the process can be more spontaneous and open-ended. The interview schedule may be defined in some manner by the interviewer before the interview takes place, but the conversation may deviate from this preset format as the researcher discovers new information. Because of the relatively large amount of work involved in comprehensively studying each respondent, this type of interview often forces the researcher to focus on a smaller number of respondents. However, it also allows the researcher to obtain rich narrative information from those few individuals.
Focus Groups

A focus group is conducted by engaging a group of respondents who are of interest to the researcher in a group conversation. The individuals are often similar to one another in some way that is germane to the research question (for example, a group of teachers could be gathered for a focus group on the resource needs of educators). During the discussion, participants are encouraged by the moderator to share their opinions, "without pressuring participants to vote or reach consensus" (Krueger and Casey, 2000, p. 4). The moderator strives to create a nonjudgmental environment in order to make the participants more willing to express their opinions openly. It is standard practice for the researcher to conduct many focus group sessions in order to discover potentially meaningful trends across the groups.

Like the personal interview, this type of interview can be structured a priori. However, because focus group respondents are engaged in free discussion, the researcher does not have complete control over the path of the conversation. In addition, because of the cost and effort involved in gathering and running focus groups, the method typically allows the researcher to interview only a relatively small number of individuals. Therefore, focus groups are not commonly used when the researcher needs to generate data that are statistically representative of a large population. However, the more open mode of a focus group discussion allows the researcher to find out what matters are of greatest interest to respondents. Such data can also be used to develop more formally structured questionnaires (see below). The ability to dig deeper into issues through focus groups can also allow the researcher to untangle complicated topics.
Questionnaires

A questionnaire is a predetermined list of interview questions that is filled out either by the respondent or by the researcher. This form of interview is highly structured because the path of the conversation is determined before it begins and is not deviated from. When questionnaires first came into wide use, these interviews were typically conducted in person, usually in the respondent's home. However, with the spread of telecommunications technology during the 1970s, along with the decreasing success and increasing costs associated with administering questionnaires face-to-face, these interviews came to be conducted increasingly through the mail and over the telephone. Self-administered questionnaires have also been conducted over the Internet. However, because the number of individuals in the general public who have access to the Internet is relatively small, this method is controversial in certain applications.

A key benefit of questionnaires is that they can be administered to many people, which allows the researcher to make more reliable inferences about the entire population under study. The tradeoff is that the researcher is usually not able to obtain as much rich narrative information as with other methods: respondents can be unwilling to supply detailed or in-depth information, or the format of a questionnaire can be too constrictive to allow for its collection.
What Type of Interview is the Best?

In the end, choosing what type of interview method to use comes down to determining what the particular needs of the research project are and how those needs are best met within budgetary constraints. There are, therefore, many different issues that can be taken into consideration when deciding what type of interview procedure to use. However, one very important consideration is what type of data is needed and how generalizable the data need to be. Research designs that allow the researcher to interview many people (questionnaire-based studies) have the benefit of allowing the researcher, if enough interviews are completed, to make reliable statistical inferences about the population under study. However, questionnaire-based interviews can be less effective if the researcher does not know what specific types of questions to ask or if the researcher needs to
obtain a sizable amount of deep narrative information about specific types of respondents. Interview designs that commonly yield fewer interviews (focus groups and personal interviews) typically have the benefit of allowing the researcher to gather a large amount of rich narrative data from a smaller subset of the population. These designs also allow the researcher to learn about topics that he or she does not yet know much about. However, these methods are likely to be less useful if the researcher is looking to make more generalizable statistical inferences about a large population.

It is important to note that there are cases where a mixed-mode approach may be useful. For example, a researcher may want to learn how many individuals feel a certain way about an issue, but he or she may also be unsure what specific issues are important to cover in the interviews. In this case, it may be useful to begin with a series of focus groups in order to learn what questions to ask in questionnaire-based interviews conducted on a larger sample. In these and other cases, results from one style of interview can serve to augment and inform the results of another. However, it is important to consider that different interview modes can produce incompatible findings. For example, the type of person who is more likely to complete a telephone interview might be different from the type of person who is more likely to respond to a mail survey.
Issues in Interview Design and Execution

Types of Interview Questions: Close- and Open-Ended

There are two ways to structure the responses that respondents offer: open- and close-ended questions. Open-ended questions are "fill-in-the-blank" items, where the respondent offers his or her own answer. Close-ended questions give the respondent a finite set of specified responses from which to choose. In general, close-ended questions are more appropriate when the respondent has a specific answer to give, when the researcher has a predefined set of answers in mind, when detailed narrative information is not needed, or when there is a finite number of ways to answer a question (for example, gender). Open-ended items are more useful when the respondent is able to provide a ready-made answer of his or her own, when the researcher is not certain what answers to provide or wants to conduct more exploratory research, when the researcher wants to obtain narrative data, or when the number of potential responses to a question is known but large and difficult to specify a priori for the respondent (for example, occupation). Both types of questions can
be used in the same interview because the different types allow researchers to accomplish different goals.

Use of Response Scales in Close-Ended Questions

A key issue to consider in using close-ended questions is how to structure response scales. The type of scale chosen should reflect what the question is trying to measure. Ordered scales ask the respondent to place himself or herself on an opinion continuum. For example, Likert-type scales are 4- or 5-point scales that probe for feelings on issues in reference to agreement or disagreement with a statement (for example, "agree," "somewhat agree," "somewhat disagree," "disagree"). The Likert-type scale can also be modified to measure other types of opinions and behaviors (for example, a scale that runs from "Poor" to "Excellent" to measure how individuals rate a governmental program, or from "All of the Time" to "Never" to see how often individuals engage in a certain behavior). Feeling thermometers allow the respondent to choose a number from 0 to 100 in order to rate how "warm" or "cold" he or she feels toward a certain concept, policy, person, or the like. Polar point scales ask the respondent to place himself or herself on a numbered scale whose ends signify opposite opinions. Unlike the scales just discussed, unordered or nominal scales offer different classes or categories to choose from (for example, types of sports that the respondent plays, or, of the options listed, which public policy should be funded).
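To make the structure of a close-ended item concrete, the sketch below shows one way such a question and its response scale might be represented in code. This is a minimal illustration, not a procedure from this article; the class and question texts are hypothetical.

```python
# A minimal sketch of a close-ended question and its response scale;
# all names and question texts here are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class ClosedEndedQuestion:
    text: str
    scale: List[str]  # the fixed set of answers the respondent chooses from

    def is_valid_answer(self, answer: str) -> bool:
        # A close-ended item accepts only answers that appear on the scale.
        return answer in self.scale

# A 4-point Likert-type scale probing agreement with a statement.
likert_item = ClosedEndedQuestion(
    text="The government should spend more on public schools.",
    scale=["agree", "somewhat agree", "somewhat disagree", "disagree"],
)

# A feeling thermometer can be modeled as an ordered 0-100 scale.
thermometer = ClosedEndedQuestion(
    text="How warm or cold do you feel toward Congress?",
    scale=[str(n) for n in range(0, 101)],
)

assert likert_item.is_valid_answer("somewhat agree")
assert not likert_item.is_valid_answer("no opinion")  # not on the scale
```

An open-ended item, by contrast, would carry no scale at all; whatever text the respondent supplies is recorded verbatim.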
Question Ordering

It is important to consider how to order questions in an interview. Research on "context effects" in interviews has shown that the content of a question can affect responses to the questions that follow it. This occurs because respondents create their answers to interview questions, in part, based on what material they have been thinking about beforehand. As one example of such context effects, in considering how a respondent might answer a question about Bill Clinton's integrity, Moore shows that if a "...question about Clinton's integrity is preceded by a similar question about Al Gore's integrity...it is likely that some people, to varying degrees, will include their assessment of Gore as a part of the comparative standard by which they judge Clinton" (2002, p. 82). In short, the order of questions matters because individuals make use of the context that they are in when they form responses.
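Because question order can shape answers, survey researchers commonly rotate or randomize item order across respondents so that context effects do not fall systematically on the same items. The sketch below illustrates the idea; it is a hedged example of this general practice, not a procedure described in this article.

```python
# A minimal sketch of per-respondent question-order randomization,
# one common way to spread context effects evenly across items.
import random

questions = [
    "How would you rate Al Gore's integrity?",
    "How would you rate Bill Clinton's integrity?",
]

def ordered_questionnaire(respondent_id: int) -> list:
    # Seed on the respondent ID so each person receives a stable,
    # independently shuffled ordering of the items.
    rng = random.Random(respondent_id)
    order = questions[:]
    rng.shuffle(order)
    return order

# Roughly half the sample sees the Clinton item first, so any context
# effect can be detected by comparing responses across the two orders.
print(ordered_questionnaire(1))
print(ordered_questionnaire(2))
```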
Potential Sources of Error

Measurement Error

Measurement error, a case in which the respondent has somehow provided inaccurate or uninterpretable information, has many potential sources.
Question Wording

In thinking about the errors that question wording can produce, it is helpful to consider the following two questions:

1993 Roper Poll Question: "Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?"

1993 Gallup Poll Question: "Does it seem possible to you that the Nazi extermination of the Jews never happened or do you feel certain that it happened?"
These questions, although seemingly similar, produced strikingly different results. The Roper poll showed that over one-third of Americans were either unsure or thought that the Holocaust was a hoax. In sharp contrast, the Gallup poll showed that only 10% of the population were Holocaust doubters. This example highlights how question wording can significantly affect the data collected. As Dillman aptly points out, "The goal of writing a survey question...is to develop a query that every potential respondent will interpret in the same way, be able to respond to accurately, and be willing to answer" (2000, p. 32). In other words, in order to reduce measurement error, researchers need to construct questions that are understandable and that give the same "stimulus" to each respondent. To achieve this, Dillman suggests that numerous issues be considered when wording questions: Will the respondent be able to understand the question? Does the respondent have the knowledge to answer it? Does the wording somehow "frame" the question in a certain manner? Does the wording bias, a priori, how the respondent is likely to answer? Will the respondent be willing to answer the question, or is it too personal or sensitive in nature? Are all of the questions worded in a fairly consistent manner?

Issues with Respondents

Measurement can also be in error because of how the respondent understands and responds to the question. Respondents may not understand a question, may not know how to use the scales provided, may become fatigued or disinterested in the interview, or may feel that the researcher wants them to answer in a certain "socially desirable" way. Any and all of these factors can lead to inaccurate measurement.

Issues with Interviewers

Measurement error can also occur because of the traits or behaviors of the interviewer. Such errors can stem from the intentional actions of the interviewer, for example, if questions are not read accurately or if respondents are encouraged to answer questions in a specific way. Research has also shown that there are various unintentional sources of measurement error caused by interviewers.
For example, a 1993 study by Kane and Macaulay shows that in some cases respondents provide more "pro-woman" responses when interviewed by a woman. A 1991 study by Finkel et al. finds similar effects for the race of the interviewer. Numerous other examples exist.
Sampling and Coverage Error

Interview data can also be imprecise because of errors in sampling and coverage. Sampling error is the product of collecting data from only some, and not all, of the individuals in a population. If one were to interview every individual in a population, the social measures constructed from those data would represent the population with 100% confidence: one would know how everyone in the population responded, and hence there would be no error in one's assessment of the population's responses. However, if every member of a population is not interviewed, statistically random error creeps into the measures calculated from interview data because one cannot know with 100% confidence how the population as a whole would have responded. Sampling error accordingly reduces confidence in the precision of measurement. Because of this, the size or margin of this random sampling error should be presented whenever interview results are reported.

Similarly, coverage errors occur when each individual in the population does not have an equal chance of being included in the sample. For example, consider a study of political participation that has generated a sample from voter registration records. This sample would clearly be biased because it fails to cover individuals who are not politically active, in this case nonvoters.
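To make the margin of sampling error concrete, a standard statistical result (not derived in this article) gives the 95% confidence margin for a proportion estimated from a simple random sample:

```latex
% Margin of sampling error for an estimated proportion p
% from a simple random sample of n completed interviews,
% at the conventional 95% confidence level.
\[
\text{margin of error} \approx 1.96 \sqrt{\frac{p(1-p)}{n}}
\]
```

With n = 1000 completed interviews and p = 0.5 (the worst case), the margin is about 0.031, which is why polls of roughly that size commonly report results as accurate to within plus or minus 3 percentage points.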
Nonresponse Error

Nonresponse errors result when the individuals who complete the interview are somehow systematically different from those who could not be contacted and those who chose not to participate. For example, consider an interview project on political participation. Based on its content, individuals who are more politically active may be more interested in participating in the interview, leaving the attitudes and behaviors of the less politically active out of the analysis. Thus, in order to reduce such errors, it is essential to conduct interviews with as many individuals in the sample as possible. It is also helpful to examine interview data carefully to see whether the individuals who choose to respond differ from the population at large in a way that is germane to the topic under study.
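One simple way to carry out the check described above is to compare the composition of the completed interviews against known population benchmarks (for example, official records or census figures). The sketch below illustrates this diagnostic; the categories and figures are hypothetical placeholders, not data from any study.

```python
# A minimal sketch of a nonresponse diagnostic: compare the share of
# respondents in each category with a known population benchmark.
# All figures below are hypothetical placeholders.

population_benchmark = {"voter": 0.60, "nonvoter": 0.40}  # e.g., from official records
respondents = ["voter"] * 72 + ["nonvoter"] * 28          # completed interviews

def sample_shares(cases: list) -> dict:
    total = len(cases)
    return {cat: cases.count(cat) / total for cat in set(cases)}

shares = sample_shares(respondents)
for category, benchmark in population_benchmark.items():
    gap = shares.get(category, 0.0) - benchmark
    # A large positive gap suggests the group is overrepresented among
    # respondents, a warning sign of nonresponse error.
    print(f"{category}: sample {shares.get(category, 0.0):.2f}, "
          f"benchmark {benchmark:.2f}, gap {gap:+.2f}")
```

In this hypothetical case, voters make up 72% of respondents but only 60% of the population, echoing the political participation example above.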
Getting People to Respond: The Key to Any Interview Project

Maximizing response rate is vital to the success of any interview project. As the discussion of sampling and coverage error above suggests, higher response rates translate into more precise and representative measures of the population under study. Scores of ideas about how to increase respondent participation exist, but these various techniques can be summarized through the concepts of costs and benefits. Social relationships are defined by a system of social exchange: individuals engage with others socially if the benefits that others provide exceed the costs of social interaction. Thus, in order to encourage a potential respondent to participate in an interview, the costs of participation need to be reduced and the benefits need to be increased. Numerous techniques are used to reduce costs, including gaining the trust of the respondent, wording questions in a way that makes them easy to understand and respond to, and laying out self-administered questionnaires in a way that makes them easy to follow. Benefits are increased by offering the respondent material incentives (for example, money or a souvenir) or solidary incentives (for example, communicating how participation will aid the researcher in resolving an important public policy issue of interest to the respondent).

It usually takes a great amount of effort and skill on the part of many professionals to finish an interview project with a high response rate, and it is becoming harder to maintain high response rates. Three related reasons likely account for falling response rates. The first is technology: from caller ID to answering machines and Internet spam filters, households are becoming better equipped to block out messages from strangers. Second, direct marketing through the mail, telephone, and Internet has left the public "fatigued" and less willing to converse with or respond to strangers. Finally, researchers have documented that the American public has become more withdrawn and less socially engaged since the mid-1900s. Although reducing costs and increasing benefits is effective, the greatest challenge facing those who use interview techniques will be to adapt and apply this theory of social exchange to new technologies and a changing society.
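For reference, the response rate that this section seeks to maximize has a simple arithmetic form, following the glossary definition above:

```latex
% Response rate: completed interviews as a percentage
% of the entire sample size.
\[
\text{response rate} = \frac{\text{completed interviews}}{\text{sample size}} \times 100
\]
```

For example, a study that completes 600 interviews from a sample of 1,500 individuals has a response rate of 40%. (Organizations such as the American Association for Public Opinion Research publish more refined variants that account for ineligible and uncontactable sample members; the form above is the simple definition used in this article.)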
Acknowledgments

Thanks are extended to Robert Lee (University of California Berkeley Survey Research Center), Jordan Petchenik (State of Wisconsin Department of Natural Resources), John Stevenson (University of Wisconsin Madison Survey Center), and an anonymous reviewer for comments.
See Also the Following Articles

Focus Groups; Surveys
Further Reading

American Association for Public Opinion Research Web Site. Refer to numerous documents on how to design and conduct interviews. http://www.aapor.com

Crabb, P. B. (1999). The use of answering machines and caller ID to regulate home privacy. Environ. Behav. 31, 657-670.

Dillman, D. (1978). Mail and Telephone Surveys: The Total Design Method. Wiley, New York.

Dillman, D. (2000). Mail and Internet Surveys: The Tailored Design Method. Wiley, New York.

Finkel, S. E., Guterbock, T. M., and Borg, M. J. (1991). Race-of-interviewer effects in a preelection poll: Virginia 1989. Public Opin. Q. 55, 313-330.

Fox, R. J., Crask, M. R., and Kim, J. (1988). Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opin. Q. 52, 467-491.

Kane, E. W., and Macaulay, L. J. (1993). Interviewer gender and gender attitudes. Public Opin. Q. 57, 1-28.

Krueger, R. A., and Casey, M. A. (2000). Focus Groups: A Practical Guide for Applied Research. Sage, Thousand Oaks, CA.

Mertinko, E., Novotney, L. C., Baker, T. K., and Lange, J. (2000). Evaluating Your Program: A Beginner's Self-Evaluation Workbook for Mentoring Programs. Information Technology International, Inc., Potomac, MD. Available at http://www.itiincorporated.com/publications.htm

Moore, D. W. (2002). Measuring new types of question-order effects. Public Opin. Q. 66, 80-91.

Putnam, R. (2000). Bowling Alone: The Collapse and Revival of American Community. Simon & Schuster, New York.

Sharp, L. M., and Frankel, J. (1983). Respondent burden: A test of some common assumptions. Public Opin. Q. 47, 36-53.

Snyder, M. L. (1990). Self-monitoring of expressive behavior. In Social Psychology Readings (A. G. Halberstadt and S. L. Ellyson, eds.), pp. 67-79. McGraw-Hill, New York.

Yammarino, F. J., Skinner, S. J., and Childers, T. L. (1991). Understanding mail survey response behavior: A meta-analysis. Public Opin. Q. 55, 613-639.