The accuracy of lay integrity assessments in simulated employment interviews

Journal of Research in Personality 41 (2007) 540–557
www.elsevier.com/locate/jrp

Robert J. Townsend¹, Stacy Caron Bacigalupi¹, Melinda C. Blackman*,¹

Department of Psychology, California State University, Fullerton, CA 92834, USA

Available online 9 August 2006

Abstract

Two studies were conducted to determine if lay judges could accurately assess another individual's integrity level when using overt and covert integrity inventories. In Study 1, participants took part in simulated employment interviews, and then both the participants and lay interviewers completed an overt integrity test comparable to the Reid Report integrity survey [Reid London House. (2004). Abbreviated Reid Report. Minneapolis, MN: NCS Pearson]. Self-lay judge agreement and peer-lay judge agreement were used as the criteria for accuracy. In Study 2, participants took part in either a simulated structured, unstructured, or informal employment interview format and completed both overt and covert integrity inventories. The results suggest that lay judges (as well as acquaintances) are fairly accurate in assessing others' integrity levels based upon a very brief 10-min interaction with an individual, when using either an overt or covert integrity inventory. The findings also suggest that the informal interview format can significantly enhance the accuracy of a lay judge's assessment of the participant's integrity level when a covert measure of integrity is used.

© 2006 Elsevier Inc. All rights reserved.

Keyword: Integrity assessments

* Corresponding author. E-mail address: [email protected] (M.C. Blackman).
¹ All authors contributed equally to this work.

doi:10.1016/j.jrp.2006.06.010


1. Introduction

The goal of the present studies was to determine whether lay judges can accurately judge another individual's level of integrity. We hope that the premise and findings of this research will later be examined in applied contexts to determine if employers'/interviewers' assessments of job candidates' integrity levels are as predictive as the results from both overt and covert integrity tests, ultimately providing the employer with an alternative to costly integrity testing fees.

1.1. Measuring the construct of integrity

Huffcutt, Conway, Roth, and Stone (2001) constructed a comprehensive taxonomy of seven categories that employers might need to accurately assess when interviewing job candidates: mental capability, knowledge and skills, basic personality tendencies, applied social skills, interests and preferences, organizational fit, and physical attributes. Four of these seven categories are directly related to assessing the applicant's personality (e.g., basic personality tendencies, applied social skills, interests and preferences, and organizational fit). Within these four categories, the construct of integrity and its accurate assessment during the job interview play an integral and necessary role in successful hiring decisions (Posthuma, Morgeson, & Campion, 2002).

Defining and accurately measuring the construct of integrity has become increasingly important for organizations. Indeed, studies have found that nearly one-third of all employees admit to stealing from their employers, costing organizations billions of dollars a year (Boye & Slora, 1993; Boye & Wasserman, 1996; Brown, Jones, Terris, & Steffy, 1987; LoBello & Sims, 1993; Murphy & Lee, 1994; Werner, Jones, & Steffy, 1989). To this end, pre-employment integrity test screening has become increasingly common in the workplace, and many employers require that applicants take these tests even before reaching the in-person interview process (Werner et al., 1989).

Much debate has occurred over the past several years about how to define the construct of integrity as it relates to the workplace. Ones, Viswesvaran, and Schmidt (1993) found through a meta-analysis that a linear composite of three of the Big Five personality factors (conscientiousness, agreeableness, and emotional stability) described integrity better than any one dimension, thus making integrity a higher-order, multi-dimensional construct. Goldberg, Grenier, Guion, Sechrest, and Wing (1991), however, concluded that many of the subcomponents of these three Big Five factors (e.g., self-control, orderliness, dependability, and conformity) can be subsumed under the broader concept of trustworthiness. It is important to note that research has shown that broader definitions of integrity result in increased incremental predictive validity when predicting employee performance (Ones et al., 1993).

The most common and well-researched paper-and-pencil method used to assess an individual's integrity level is the overt integrity test (Ones et al., 1993). Overt integrity tests ask direct questions eliciting admissions of theft and also assess self-reported honesty (e.g., "How many times have you stolen something from your workplace?"). Initially the overt integrity test was developed to predict employee theft, but over time it was found to predict a variety of counterproductive behaviors (CPBs), including absenteeism, violation of safety rules, and drug and alcohol use on the job (Sackett & Wanek, 1996).


An alternative method to measure an individual's integrity level is to use a covert integrity test. Covert integrity tests are personality-based measures of underlying personality constructs that are thought to predict counterproductive behaviors (e.g., "To what extent does the following characteristic describe you? I am moralistic and like to judge myself and/or others in terms of what is right and wrong"). Ones et al. (1993) concluded in their meta-analytic review that such overt and covert integrity tests are predictive of counterproductive behavior in the workplace. Both overt and covert integrity tests have been shown to be highly reliable and valid in predicting counterproductive behaviors (Ones et al., 1993). O'Bannon, Goldinger, and Appleby (1989, as cited in LoBello and Sims, 1993) report that over 2.5 million integrity tests are administered to individuals each year, and there are approximately 40 different integrity tests available for use. According to Brown et al. (1987), one of the most widely used and reliable overt integrity tests is the Reid Report (Reid London House, 2004), which is used at large organizations such as Home Depot and Linens 'n Things. Correlations between the Reid Report and many other overt integrity tests are consistently high (e.g., the correlation between the Stanton Survey and the Reid Report is .89), indicating that the tests are measuring a common construct (Viswesvaran & Ones, 1997b).

1.2. Lay judges' assessments of integrity

Some organizations eschew the use of covert and overt measurements of integrity and instead rely only on a lay judge's assessment of an individual's integrity level. There are several reasons why an organization might choose this route. Some organizations may be unaware of, or even skeptical about, integrity inventories. Organizations may also rely on lay assessments, or gut impressions, of a job candidate's honesty because of the extensive costs of integrity tests and scoring (Reid London House, 2004). Regardless of an organization's rationale for relying on lay judgments of integrity, more needs to be known about the accuracy of lay persons' integrity judgments. The aim of Study 1 is to explore whether lay judges can, to some degree, accurately judge another person's integrity level when using an overt integrity measure.

Previous research on the accuracy of personality trait judgments has shown that lay judges are fairly accurate in judging others' personalities even after a brief 5-min interaction (Blackman, 2002a, 2002b; Blackman & Funder, 1998; Funder & Colvin, 1988; Furr & Funder, 1999; Schneider, Hastorf, & Ellsworth, 1979). In addition, the level of self-interviewer agreement significantly increases as the amount of interaction with the target individual increases (Blackman & Funder, 1998). Funder and Dobroth (1987) found that the more visible the trait being judged, the more accurately it is judged. For instance, judges are significantly more accurate in determining a target subject's talkativeness than the extent to which he or she daydreams, because these two traits differ dramatically in the visible cues that they emit to a viewer. Because the construct of integrity is multi-faceted, we believe that the cues a target subject emits would be varied and diffuse, making the construct less visible to the lay judge. Based on the results from Funder and Dobroth (1987), we suspected that integrity would be a "hard to judge" trait, making it difficult for lay judges to assess accurately.


1.3. Study 1

Very little light has been shed on the extent to which lay judges can accurately assess a target individual's level of integrity and predict his or her integrity-related behaviors through the course of a brief interview. Keeping this in mind, the present study was conducted to determine the extent to which interviewers, following a brief interview, can assess target subjects' integrity levels via an overt integrity test. Self-interviewer agreement will be used as the criterion for accuracy. Tangential issues, such as peers' ratings of their acquaintances' integrity (via the overt integrity inventory) and gender differences with regard to integrity levels, will also be tested. Because the current study is exploratory and scant research exists on this topic, we did not formulate any hypotheses for Study 1.

2. Study 1

2.1. Method

2.1.1. Participants

One hundred participants (70 females, 24 males, and 6 gender not reported) from the psychology department at a large university participated in the study. Each student received course credit for participating.

2.1.2. Materials

The Substance abuse, Production loss, and Interpersonal problems Inventory (SPI; Caron, 2003) was used as the measure of overt integrity. The SPI consists of 39 questions regarding an individual's level of integrity and the likelihood of engaging in various counterproductive behaviors (e.g., "Should an individual who has stolen money from their workplace just a few times be given a second chance rather than be fired from that organization?").² Each SPI item offers five answer options ("Definitely," "Maybe," "Not Sure," "Probably Not," and "Definitely Not"). High scores on the inventory indicate high levels of integrity. The SPI has been found to be a valid predictor of counterproductive behavior and very comparable to the Reid Report (Reid London House, 2004), with total SPI scores correlating .89, p < .01, with total scores on the Reid Report (Caron, 2003). The Reid Report was not used in the present study because of the high cost of purchasing it and the company's policy of barring access to the scoring protocols. The participants were also asked to complete a social desirability assessment, the Balanced Inventory of Desirable Responding (BIDR; Paulhus, 1984, 1988, as cited in Paulhus, 1991). The BIDR is a 40-item assessment of social desirability. The statements were rated on a 7-point scale, where 1 is not true, 4 is somewhat true, and 7 is very true.

2.1.3. Procedure

Upon arriving at the laboratory, participants signed an informed consent form and were then asked to complete the SPI and then the BIDR. Demographic information, such as gender and age, was collected as well.

² For the SPI inventory and scoring protocol, please email [email protected].


Next, participants were given an additional copy of the SPI to give to a well-acquainted peer or family member, whom they had known for a long period of time. Peers/family members were instructed to complete the inventory on the participant's personality and to keep their ratings confidential. After completing the SPI, peer respondents placed the ratings in a provided addressed and stamped envelope with their signature across the seal and mailed it to the Psychology Department.

Next, participants took part in a 10-min simulated interview for a clerical position in the Psychology Department Office. A simulated employment interview was chosen as the sample of behavior on which judges of personality would base their judgments because it would positively motivate the participants to become more attentive and engaged in the study. Also, this type of context would facilitate related follow-up studies in applied business settings. Participants received a job description prior to the interview. Trained female research assistants conducted the interviews and used structured interview questions that were based on a job analysis for a similar position in the Psychology Department Office. Each of the 11 questions was developed to elicit information about the applicants' integrity levels and personality traits (e.g., "If the office manager asked you to Xerox a test for a class that you were currently a student in, what would you do?" "If you were hired for this position, after a month of working, what would I see as your weakest personality trait?").³ After each interview was complete, the participants were debriefed and the interviewers completed the SPI on the applicant.

2.2. Data analysis

2.2.1. Social desirability

The participants' BIDR social desirability scores were correlated with their SPI scores to determine if social desirability might be a contributing factor in the SPI scores. The results indicated that the participants' self-rated integrity scores were significantly correlated with scores on the BIDR, r(53) = .30, p < .05. Because of this social desirability effect, partial correlations were computed for all analyses involving SPI self-integrity ratings.

2.2.2. SPI correlations

Each participant's self SPI ratings were correlated with the interviewer's SPI ratings of that participant. An average of all self-interviewer SPI agreement correlations was then calculated. Likewise, each participant's self SPI ratings were correlated with the corresponding peer's SPI ratings of his or her integrity level, and an average of all self-peer SPI agreement correlations was calculated. In the same manner, the average interviewer–peer SPI agreement correlation was calculated. To test whether the three correlations (self-interviewer, self-peer, and interviewer–peer) were significantly different from each other, McNemar's t-test for dependent correlations was used (McNemar, 1955).

³ For the structured interview questions, please email [email protected].
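To make the analytic steps of Section 2.2 concrete, the sketch below shows one way the agreement statistics could be computed. It is a minimal illustration, not the authors' actual code: the variable names and simulated values are hypothetical, the partial correlation controlling for BIDR scores uses the standard first-order formula, and the comparison of dependent correlations is implemented as the Hotelling-style t-test presented in McNemar (1955).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 55  # participants with complete data (simulated placeholder scores)
self_spi = rng.normal(98, 15, n)         # self-rated total SPI scores
interviewer_spi = rng.normal(98, 15, n)  # interviewer-rated total SPI scores
peer_spi = rng.normal(98, 15, n)         # peer-rated total SPI scores
bidr = rng.normal(0, 1, n)               # BIDR social desirability scores

def partial_r(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    r_xy = stats.pearsonr(x, y)[0]
    r_xz = stats.pearsonr(x, z)[0]
    r_yz = stats.pearsonr(y, z)[0]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def dependent_r_ttest(r12, r13, r23, n):
    """Hotelling-style t-test for two dependent correlations that share
    variable 1 (as presented in McNemar, 1955); df = n - 3."""
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    return (r12 - r13) * np.sqrt((n - 3) * (1 + r23) / (2 * det))

# Self-interviewer agreement with social desirability partialed out
r_self_int = partial_r(self_spi, interviewer_spi, bidr)

# Compare self-interviewer vs. self-peer agreement (both involve self ratings)
r12 = stats.pearsonr(self_spi, interviewer_spi)[0]
r13 = stats.pearsonr(self_spi, peer_spi)[0]
r23 = stats.pearsonr(interviewer_spi, peer_spi)[0]
t = dependent_r_ttest(r12, r13, r23, n)
```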


2.3. Results

2.3.1. Reliability analyses

To assess the SPI's reliability, Cronbach's α was used. Cronbach's α for the SPI was .83 in the present sample. To identify items not contributing to the reliability of the test, corrected item-whole correlations were examined. Eleven of the corrected item-whole correlations were below .24. Those 11 items were deleted, and the final analyses were run using only the remaining 28 items. For the remaining 28 items, Cronbach's α was .86, demonstrating strong inter-relatedness among the items.

2.3.2. SPI agreement correlations

Forty-five participants had incomplete data because many peers did not turn in their assessments. Therefore, of the 100 participants, only the 55 with complete data were used in the following analyses. Scores on the self-rated SPI ranged from 58 to 131 (M = 97.67), scores on the interviewer-rated SPI ranged from 56 to 129 (M = 97.43), and scores on the peer-rated SPI ranged from 55 to 130 (M = 98.6). The lowest possible score on the SPI was 28, and the maximum possible score was 140.

The average agreement correlation between self-interviewer SPI scores was significant, r(53) = .27, p < .05. The results also indicated that self SPI scores and peer SPI scores were significantly correlated, r(53) = .41, p < .01. However, interviewer SPI scores and peer SPI scores were not significantly correlated, r(53) = −.01, n.s. To assess whether the self-interviewer and self-peer correlations were significantly different from one another, a dependent t-test was conducted. The results indicated that the correlations were not significantly different from each other, t(52) = .76, n.s.

Gender differences with regard to integrity levels were examined next. No significant differences were found in the self-reported integrity levels of males and females in this sample, t(92) = 1.42, n.s. When comparing males and females on interviewer-rated integrity levels, females were rated as having a higher level of integrity than males, t(91) = 2.13, p < .05. However, when peers rated the participants' integrity, no significant gender differences were found, t(56) = .63, n.s.

2.4. Discussion

To determine if lay individuals could assess another person's level of integrity via an overt integrity inventory, participants' self-integrity ratings on the SPI were correlated with peer-assessed integrity ratings and with interviewer-assessed integrity ratings. Although the concept of integrity is multi-faceted and presumed to be a difficult-to-judge trait (Funder & Dobroth, 1987), we found that peers and interviewers were able to assess participants' integrity levels with substantial accuracy. The average level of self-peer agreement attained was quite impressive (.41) and comparable to the levels of self-peer agreement on general personality traits found by Ready, Clark, and Watson (2000) (average .47, range .27–.62). We were surprised that the agreement correlations were so substantial, considering that our lay interviewers and peers made their ratings of the target via an overt integrity test, in which the majority of the questions asked the interviewer to predict a specific behavioral response that the target would hypothetically make in a specific situation. We surmise that this type of task would be cognitively taxing and challenging to any judge of personality.
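Stepping back to the item screening reported in Section 2.3.1, that procedure can be illustrated with a short, generic sketch of Cronbach's α and corrected item-whole correlations, assuming a respondents-by-items score matrix. The .24 cutoff is the one reported above; the data here are simulated placeholders, not the study's ratings.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_whole(items):
    """Correlation of each item with the sum of the remaining items."""
    rs = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        rs.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(rs)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(100, 39))  # hypothetical 39-item SPI data

r_iw = corrected_item_whole(scores)
retained = scores[:, r_iw >= .24]  # drop items with corrected r below .24
print(cronbach_alpha(scores), cronbach_alpha(retained))
```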


It is interesting to note that the average self-peer agreement correlation (.41) was higher than the self-interviewer agreement correlation (.27), though not significantly so. Funder and Colvin's (1988) results parallel this finding. Funder and Colvin investigated how well strangers and "informants" could assess a participant's personality using the Q-sort method. The informants in Funder and Colvin's study were individuals the participants had known well and for quite some time. Funder and Colvin found that the correlation between self-assessments and informant-assessments was greater than the correlation between self and stranger assessments. This is consistent with the current study's finding that self-peer agreement was substantially higher than self-interviewer agreement.

Colvin and Funder (1991) extended their research findings by looking at how well acquaintances and strangers could predict a participant's personality. They found that acquaintances' ability to judge a participant's personality is greater than a stranger's ability when general behavioral patterns are used as the basis for judgment. In addition, Colvin and Funder looked at how well acquaintances and strangers predicted a participant's behavior in a particular situation. They found that acquaintances did no better than strangers when the situation was specific and the acquaintance may not have seen the participant in that specific situation but the stranger had. Colvin and Funder's finding might help to explain why the interviewers' ratings of participants in the current study were significantly correlated with participants' self-ratings: interviewers had asked specific work-related questions, perhaps raising their awareness of the participants' integrity levels on the job.

3. Study 2

3.1. Aim of Study 2

Study 1 provides support for the hypothesis that lay judges and peers can judge and predict integrity-related behaviors in target subjects with substantial accuracy. The purpose of Study 2 was twofold. Our first goal was to explore the degree of accuracy that lay judges attain when assessing a target subject's integrity level via a covert integrity measure. Our second goal was to expand on the findings of Study 1 by examining how to enhance the accuracy of lay integrity assessments using both overt and covert integrity inventories. We hypothesize, based on past research, that the type of interview format used can affect the accuracy of a lay judge's assessment of a target's integrity level (Blackman, 2002a; Blackman & Funder, 2002; Funder, 1995). More specifically, Study 2 will determine the influence that structured, unstructured, and informal interviews have upon the accuracy of lay judgments of integrity.

3.2. The Realistic Accuracy Model

To understand factors that can enhance and moderate accurate personality judgment, we now turn to the Realistic Accuracy Model (Funder, 1995, 1999, 2001). The Realistic Accuracy Model suggests four stages of information processing that are necessary for personality to be accurately judged. First, a target must display information that is relevant to the trait to be judged; typically, this information takes the form of some kind of behavior. Second, this behavioral information must be available to the judge. Third, the interviewer must detect the relevant information. Finally, the interviewer must utilize this information correctly.


One of the implications of this model is that accurate personality judgment can be difficult and may be short-circuited at various stages. All four steps must be completed before accurate judgment can be achieved; if any step fails, then accurate personality judgment will fail as well.

The Realistic Accuracy Model readily applies to the job interview. In the first stage, the job candidate must display behavior that is relevant to the trait to be judged. For example, if the trait to be judged in an employment interview is "promptness," then the relevant behaviors might be the candidate arriving on time or slightly early (leading to a high rating on this trait), more than 10 min late (leading to a low rating), or in between (leading to a medium rating). If the candidate does not provide a behavior relevant to "promptness," the process of accurate personality judgment cannot occur because there is no information on which to base a judgment. Second, the behavioral information of the candidate must be available to the interviewer. For example, if the job candidate arrives 5 min early for the interview but the interviewer arrives 30 min late, then judging the candidate's promptness will be impossible until the candidate displays another behavior relevant to promptness. Third, the interviewer must detect the information: a job candidate can only be judged "prompt" if the interviewer notices the candidate's time of arrival. Finally, the interviewer must correctly use this information to diagnose the trait to be judged; it would be inaccurate for an interviewer to use gregarious behavior to judge promptness. An added complication of the job interview is that the job candidate may be actively trying to present inauthentic traits and managing his or her behavior.

One potential moderator of accurate personality judgment in the Realistic Accuracy Model is the quality of information that is available to the interviewer (Funder, 1999). Quality of information involves the relevance of the behavior to the trait that is being judged. Behaviors emitted in some contexts may be relevant and informative about traits, whereas behaviors emitted in other contexts may be less informative and relevant. For example, Ickes, Snyder, and Garcia (1997) observed that some "strong" situations restrict behavior in ways that minimize individual differences (and hence restrict expression of the target's properties and characteristics). In contrast, "weak" situations, in which behavior is free to vary, permit individual differences in behavior that are more diagnostic of personality traits. Previous research findings support the hypothesis that context variables can moderate the diagnostic relevance of available behaviors. Andersen (1984) found that hearing a person talk about his or her thoughts and feelings allows for more accurate personality judgments than hearing the same person talk about his or her hobbies and activities. Similarly, Blackman (1996) found that participants made significantly more accurate personality judgments of university students who were interacting with a peer in an unstructured than in a structured conversation. This suggests that a setting in which you have a chance to learn about someone's unconstrained thoughts or feelings is more likely to foster accurate judgment of that person's personality than a setting that constrains or structures the types of behaviors an individual is to exhibit (Funder, 1999).
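As a schematic summary of the four RAM stages described above, the toy sketch below treats each behavioral cue as something that must survive relevance, availability, detection, and correct utilization before it can support a judgment. The types and names are ours, purely for illustration; they are not part of Funder's model.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    trait: str            # which trait the behavior is relevant to (stage 1)
    available: bool       # observable by the judge (stage 2)
    detected: bool        # noticed by the judge (stage 3)
    used_correctly: bool  # mapped to the right trait (stage 4)

def accurate_judgment_possible(cues: list[Cue], trait: str) -> bool:
    """A trait can be judged accurately only if at least one cue survives
    all four RAM stages; a failure at any stage removes that cue."""
    return any(
        c.trait == trait and c.available and c.detected and c.used_correctly
        for c in cues
    )

# The promptness example: the candidate arrives early, but the interviewer
# arrives late and never observes it -- stage 2 fails, so judgment fails.
cues = [Cue(trait="promptness", available=False, detected=False, used_correctly=False)]
print(accurate_judgment_possible(cues, "promptness"))  # False
```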
3.3. The employment interview and personality judgment

The distinction between "strong" and "weak" situations can be readily applied to the structured, unstructured, and informal employment interviews. In a highly structured interview, all interviewers must ask all applicants the same set of questions and are not allowed to use follow-up or probe questions (Blackman & Funder, 2002).


Additionally, structured interview questions are designed to decrease discussion of irrelevant topics by focusing solely on job-related content (Campion, Palmer, & Campion, 1997). Finally, in a structured interview, interviewers use a standardized form to rate and score the applicant's answers (Blackman & Funder, 2002). In contrast, the unstructured interview imposes few limitations on the questions asked and may consist of spontaneous exchanges between the interviewer and candidate. Unlike the structured interview, the questions asked during an unstructured interview are not necessarily focused on job-related content, and follow-up and probe questions are permitted and may be numerous (Blackman & Funder, 2002). The informal employment interview is a weak situation similar to the unstructured interview in that it imposes few limitations on the questions asked. The informal interview can occur in a variety of different contexts, such as in a coffee shop, over dinner, or while touring an organization's premises. This interview format is discussed in more detail below.

Research has devoted attention to the validity of the employment interview as a predictor of certain aspects of job performance. Previous findings have shown that the structured interview is better than the unstructured interview at assessing the abilities and skills that are needed by an employee to successfully fulfill all required job functions (Campion, Campion, & Hudson, 1994; Campion et al., 1997; Huffcutt, Roth, & McDaniel, 1996; Schmidt & Hunter, 1998). These findings make sense, as structured interview questions are based on an analysis of the skills and abilities that are needed for a position. For example, a structured interview is ideal for assessing a job candidate's degree of computer proficiency; in this case, a structured interview might ask each candidate identical questions about previous computer experience, knowledge of software packages, programming languages, and so on.

On the other hand, research suggests that the unstructured interview may be superior to the structured interview when accurate personality judgment is the goal. A recent study by Blackman (2002a) confirmed that the unstructured interview yielded more accurate judgments of personality than the structured interview. In this study, mock job interviews were conducted using either a structured or an unstructured interview format. In the structured interview condition, interviewers were required to ask a list of standardized personality-relevant questions. Interviewers in the unstructured condition were not required to ask a minimum number of questions and were instructed to keep the length of the interview to less than 10 min. Participants completed the California Q-set (Bem & Funder, 1978; Block, 1978) to assess the job candidate's personality. The results indicated that interviewers in the unstructured, weak condition were significantly more accurate than interviewers in the structured, strong condition at judging the job candidates' personality traits.

The literature points to informal personality assessment as a common occurrence in the employment interview process when assessing person–organization fit (e.g., Chatman, 1991; Parsons, Cable, & Wilkerson, 1999; Rynes & Gerhart, 1990; Van Vianen & Van Schie, 1995). Organizations that want to determine whether a job applicant has the values and personality to fit the organizational culture conduct informal interviews (e.g., showing the candidate around the organization, introducing the candidate to incumbents, and taking the candidate out to meals) to see the candidate in a variety of situations and assess the candidate's fit to the organization (Chatman, 1991). Past research, however, has not focused on the informal interview as a means to assess an applicant's integrity level (Parsons et al., 1999).


The unique aspect of an informal interview is that it does not require an applicant to participate in the same activities as the structured or unstructured interview. For example, the informal interview may take place over dinner or on a walk around the organization's grounds, and the conversation may even revolve around topics not directly related to the organization. These activities provide more opportunities to display personality-relevant behaviors than the unstructured interview, in which the applicant sits in a chair in an office. An informal interview may thus result in less restricted behavior than an interview that occurs in an employer's office. The Realistic Accuracy Model (RAM) suggests that this less restrictive situation may result in more accurate personality judgment than either the unstructured or structured interview.

3.4. Hypotheses for Study 2

Based on the literature presented, it is hypothesized that: (1) interviewers and peers alike will attain significant degrees of accuracy when assessing their targets via a covert measure of integrity (self-interviewer, interviewer–peer, and self-peer agreement will be used as the criteria for accuracy); and (2) informal interviews will produce significantly higher self-interviewer and interviewer–peer agreement scores than will unstructured or structured interviews for both overt and covert assessments of integrity. For similar reasons, the unstructured interview, with its free-flowing conversation, will manifest higher levels of agreement than the structured interview.

3.5. Method

3.5.1. Participants

One hundred and twenty participants (88 females, 30 males, and 2 gender unknown) from the psychology department at a large university participated in the study. Each student received course credit for participating.

3.5.2. Materials

The California Adult Q-set (Bem & Funder, 1978; Block, 1978) was used as the covert integrity measure. The California Q-set is an inventory consisting of 100 mid-level traits (e.g., "is talkative," "is dependable," "likes to daydream"). Participants rated each trait by assigning values ranging from 1 (not characteristic of the target) to 5 (extremely characteristic of the target). Seven items from the Q-set were found to be significantly correlated at the p < .05 level with the SPI inventory and were subsequently used as the covert integrity index for this study. The seven items were as follows: Item 2, "Is a genuinely dependable and responsible person" (r = .29); Item 23, "Extrapunitive; tends to transfer or project blame, tends to blame others for own failures or faults" (r = −.31); Item 30, "Gives up and withdraws where possible in the face of frustration and adversity" (r = −.23); Item 36, "Is subtly negativistic; tends to undermine and obstruct or sabotage" (r = −.21); Item 37, "Is guileful and deceitful, manipulative, opportunistic, takes advantage of people and situations" (r = −.27); Item 41, "Is moralistic, judges self and others strongly in terms of right and wrong" (r = .30); and Item 70, "Behaves in an ethically consistent manner, is consistent with own personal standards" (r = .34). Cronbach's α for this seven-item index was .82.
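The paper does not spell out how the seven Q-set items were combined into the index, so the sketch below shows one plausible scoring scheme: items that correlated negatively with the SPI (23, 30, 36, and 37) are reverse-keyed so that higher index values mean higher integrity. The item numbers match those listed above; everything else (function names, the averaging rule, the simulated ratings) is an assumption for illustration.

```python
import numpy as np

POSITIVE_ITEMS = [2, 41, 70]       # dependable, moralistic, ethically consistent
NEGATIVE_ITEMS = [23, 30, 36, 37]  # blame-projecting, withdrawing, negativistic, deceitful

def covert_integrity_index(qset: np.ndarray) -> np.ndarray:
    """Composite integrity score from an (n_targets, 100) Q-set rating
    matrix on a 1-5 scale. Negatively correlated items are reversed
    (assumed scoring, not necessarily the authors' exact procedure)."""
    pos = qset[:, [i - 1 for i in POSITIVE_ITEMS]]      # item numbers are 1-based
    neg = 6 - qset[:, [i - 1 for i in NEGATIVE_ITEMS]]  # reverse the 1-5 scale
    return np.concatenate([pos, neg], axis=1).mean(axis=1)

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(120, 100))  # hypothetical Q-set data
index = covert_integrity_index(ratings)
```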


3.5.3. Procedure

Participants were scheduled in same-sex dyads; acquaintances were asked not to sign up together. Upon arriving, participants signed an informed consent form. Once the participants agreed to participate, it was confirmed that they were not previously acquainted. The research assistant randomly assigned the participants to the role of job applicant or interviewer and to the interview type (structured, unstructured, or informal). The interviewer was escorted outside to wait for further direction, while the job applicant was given the rating-form version of the California Q-set to rate his or her own personality on all 100 traits. After completing the Q-set, the participant completed the SPI.

When the participant finished responding to the two surveys, he or she was asked for the address of a well-acquainted peer, whom he or she had known for a long period of time, or a family member who would be willing to complete the 100-item Q-set and the SPI about the candidate's personality. Data from an acquaintance were collected to provide an additional perspective on the applicant's personality characteristics. At a later date, the Q-set form and SPI were mailed to the acquaintance with instructions to confidentially complete the surveys about the participant and to mail them back in the envelope provided, with their signature across the seal. This aspect of the procedure differed from Study 1, in which the surveys were hand delivered to the peer/family member by the target subject; here they were sent directly to the peer/family member. This change was made to help minimize the chance that target subjects would influence the peer's/family member's ratings.

The job applicant and interviewer were then reunited in the room in which the applicant had completed the surveys. The job applicant was informed that he or she would take part in a 10-min mock interview for a clerical position in the Psychology Department Office. Each participant received a job description prior to the interview (the same description as in Study 1).

3.5.4. Interviewers

The interviewers were trained for 15 min on how to conduct a professional interview while the job applicants were completing the surveys. The interviewers were given a copy of the SPI and Q-set to look over and were told that the main emphasis of the interview was to assess the job candidate's personality and integrity as they relate to the job description they were given.

3.5.5. Structured interviews

Interviewers assigned to the structured interview condition were given the same list of 11 questions used in Study 1 to ask their applicants. The interviewers were told to ask only these questions, in the specified order. The interviewers were also told not to ask any follow-up questions, to keep the interview as close as possible to 10 min in length, and to record the length of the interview in minutes on a piece of paper. The interviewer was seated behind a desk while the applicant was seated in a chair in front of the desk. After each interview was complete, the applicant was thanked for his or her participation and debriefed. The interviewer then completed the SPI and the 100-item Q-set on the job applicant's personality. Upon completing the two surveys, the interviewer was thanked and debriefed.


3.5.6. Unstructured interviews

Interviewers assigned to the unstructured condition were trained in the same manner as the interviewers in the structured condition, except that these individuals were not given a list of questions to ask. The interviewers were told to ask any questions that came to mind and to write down the exact questions they asked while the interview was being conducted. The interviewers were instructed to keep in mind that the goal of the interview was to accurately assess the applicant's integrity level and that asking questions related to the applicant's personality and integrity could only benefit their assessment of the applicant. The interviewers were told to keep the length of the interview as close as possible to 10 min and to record the length of the interview on a piece of paper. The interviewers were also told that they could ask follow-up questions. The interviewers were seated for the interview behind a desk, while the applicant was seated in a chair in front of the desk. After the interview, the interviewers completed the same surveys as in the structured condition.

3.5.7. Informal interviews

The interviewers assigned to this condition were trained in the same manner as in the previous two conditions and were told to ask whatever questions came to mind and to write these questions down, as well as the length of the interview. The interviewers were likewise reminded that the main emphasis of the interview was to accurately assess the candidate's integrity level, so interview questions related to integrity would benefit their ultimate assessment of the candidate. However, the interviewers in this condition were told to conduct the interview while taking a 3-min walk to a nearby outdoor coffee stand on campus, and for another 5 min while sitting with the applicant at the coffee stand. They were told that this was to be a casual interview, but to keep it as close as possible to 10 min.

3.6. Results

3.6.1. Covert integrity assessments and interview format

A Pearson correlation coefficient was calculated for each interview by correlating applicant (self) Q-set integrity ratings and interviewer Q-set integrity ratings. Female–female dyads accounted for 41 Applicant–Interviewer (A–I) agreement scores and male–male dyads accounted for 14 A–I agreement scores. A second Pearson correlation coefficient was calculated for each interview by correlating interviewer Q-set integrity items and peer Q-set integrity items. Fifteen of the A–I interviews were missing peer data, resulting in 40 Interviewer–Peer (I–P) agreement scores; thirty female applicants and ten male applicants had peer responses. A final Pearson correlation coefficient was calculated for each interview by correlating applicant Q-set integrity items and peer Q-set integrity items, resulting in 30 female Applicant–Peer (A–P) scores and 10 male A–P scores.

A 2 × 3 factorial ANOVA was performed to assess differences and interactions between gender and Applicant–Interviewer Q-set agreement scores in the structured, unstructured, and informal interview conditions. The results indicated a significant difference in scores between interview formats, F(2, 55) = 3.79, p < .05, and a significant difference in scores between dyad genders, F(1, 55) = 6.17, p < .05. The results did not indicate a significant interaction between interview type and dyad gender, F(2, 55) = 2.40, n.s.
An a priori linear contrast test indicated a significant linear trend across the means for the structured condition (M = .64), the unstructured condition (M = .74), and the informal condition (M = .78), p = .01. These results support the hypothesis that the less constraining the interview format, the higher the self-interviewer agreement.


Next, Interviewer–Peer Q-set agreement correlations in the structured, unstructured, and informal interview conditions were examined. A 2 × 3 ANOVA was computed to determine if the interview format and the gender of the applicant affected these correlations. No significant difference in Interviewer–Peer agreement correlations was found between the structured interviews (M = .71), unstructured interviews (M = .80), and informal interviews (M = .72), F(2, 40) = 1.92, n.s. The results did not indicate a significant difference in agreement correlations between male and female applicants, F(1, 40) = 3.13, n.s., or a significant interaction between interview type and applicant gender, F(2, 40) = 1.75, n.s.

A final 2 × 3 factorial ANOVA was performed to examine Applicant–Peer Q-set agreement correlations in the structured, unstructured, and informal interview conditions and to determine if the gender of the applicant affected these correlations. No significant difference in Applicant–Peer agreement correlations was found between the structured interviews (M = .88), unstructured interviews (M = .80), and informal interviews (M = .69), F(2, 40) = 2.49, n.s., nor was there a significant interaction between interview type and applicant gender, F(2, 40) = .64, n.s. However, the results indicated a significant difference in the agreement correlations that were made about male and female applicants, F(1, 40) = 5.42, p < .05.

3.6.2. Overt integrity assessments and interview format

The second set of analyses focused on determining whether the interviewer's overt ratings of the applicant's level of integrity were affected by interview format. Cronbach's coefficient α for the SPI integrity scale was .81. SPI scores were calculated using the scoring routine from Study 1, with partial correlations used whenever self SPI integrity ratings were analyzed. A Pearson correlation coefficient was calculated for each interview by correlating applicant SPI integrity items and interviewer SPI integrity items. Female–female dyads accounted for 41 A–I agreement correlations and male–male dyads accounted for 13 A–I agreement correlations. A second Pearson correlation coefficient was calculated for each interview by correlating applicant SPI integrity items and peer SPI integrity items. Fourteen of the A–P interviews were missing peer data, resulting in 41 Applicant–Peer (A–P) agreement correlations.

Applicant–Interviewer SPI agreement scores across the structured, unstructured, and informal interview conditions and the gender of the dyad were examined through a 2 × 3 ANOVA. No significant difference in SPI agreement scores was found between the structured interviews (M = .43), unstructured interviews (M = .43), and informal interviews (M = .41), F(2, 54) = .03, n.s. Similarly, no significant difference in agreement scores was found between male dyads (M = .37) and female dyads (M = .43), F(1, 54) = 1.11, n.s., nor was there a significant interaction between interview type and dyad gender, F(2, 54) = .28, n.s.

Next, a 2 × 3 factorial ANOVA was performed to detect possible differences and interactions between Interviewer–Peer SPI agreement scores in the structured, unstructured, and informal interview conditions and the applicant's gender.
No significant difference in scores was found between the structured interviews (M = .55), unstructured interviews (M = .46), and informal interviews (M = .53), F(2, 41) = 2.66, n.s. The results did not indicate a significant difference in SPI agreement scores between male and female applicants, F(1, 41) = 1.47, n.s., nor a significant interaction between interview type and applicant gender, F(2, 41) = .71, n.s.


The average Interviewer–Peer SPI correlation across the interview conditions was .55, significantly higher than the average for the Applicant–Interviewer data (.42).

A final 2 × 3 factorial ANOVA was performed to determine whether there were significant differences and interactions between Applicant–Peer SPI agreement scores in the structured, unstructured, and informal interview conditions and whether the gender of the target subject affected these correlations. The results did not indicate a significant difference in SPI scores between the structured interviews (M = .57), unstructured interviews (M = .40), and informal interviews (M = .46), F(2, 41) = 1.02, n.s. Likewise, no significant difference in scores was found between male (M = .38) and female (M = .49) applicants, F(1, 41) = 2.68, n.s., nor a significant interaction between interview type and applicant gender, F(2, 41) = .35, n.s. These non-significant results were expected, as the applicant's peer did not participate in the structured, unstructured, or informal interview formats. (For agreement correlations from both Study 1 and Study 2, see Table 1.)

The results from Study 2 support the hypothesis that interviewers and peers can judge a target subject's integrity level with an impressive degree of accuracy when using either an overt or covert measure of integrity. Also, the interview format that the interviewer used was found to systematically affect the resulting Applicant–Interviewer agreement correlations when a covert integrity inventory was used. Specifically, the less structured the interview, the higher the judgmental accuracy that the judges achieved with regard to the applicant. This effect, however, did not occur when an overt integrity inventory was used.

Another finding of interest in Study 2 was that the average level of Interviewer–Peer agreement on the SPI (.55) far surpassed the level of Interviewer–Peer SPI agreement obtained in Study 1 (−.01). We believe that this large difference in agreement correlations between Study 1 and Study 2 is due to the change in the procedure for distributing the inventories to the peers: in Study 1 the peer inventories were hand delivered to the peer by the applicant, whereas in Study 2 the peer inventories were mailed directly to the peer. We believe that this procedural change increased the confidentiality of the ratings while reducing the chance that the applicant might tamper with them.
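For readers who want to reproduce this style of analysis, the sketch below shows a generic 2 × 3 factorial ANOVA with an a priori linear contrast over the ordered interview formats, written with statsmodels. The data frame, column names, and simulated values are hypothetical stand-ins for the per-dyad agreement correlations described above, and only the contrast estimate (not its significance test) is computed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
formats = ["structured", "unstructured", "informal"]

# Hypothetical per-dyad data: one row per interview, with the
# applicant-interviewer agreement correlation as the dependent variable.
df = pd.DataFrame({
    "agreement": rng.uniform(0.3, 0.9, size=55),
    "fmt": rng.choice(formats, size=55),
    "gender": rng.choice(["female", "male"], size=55),
})

# 2 (dyad gender) x 3 (interview format) factorial ANOVA
model = ols("agreement ~ C(gender) * C(fmt)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# A priori linear contrast over the ordered format means
# (structured -> unstructured -> informal), with weights -1, 0, +1
means = df.groupby("fmt")["agreement"].mean()
linear_trend = -1 * means["structured"] + 0 * means["unstructured"] + 1 * means["informal"]
print(linear_trend)
```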

Table 1
Average agreement correlations for Study 1 and Study 2

Integrity measure                    Structured   Unstructured   Informal
Overt
  Self-interviewer (Study 1)         .27*         —              —
  Self-peer (Study 1)                .41*         —              —
  Interviewer–Peer (Study 1)         −.01         —              —
  Self-interviewer (Study 2)         .43*         .43*           .41*
  Interviewer–Peer (Study 2)         .55*         .46*           .53*
Covert
  Self-interviewer (Study 2)         .64*         .74*           .78*
  Interviewer–Peer (Study 2)         .71*         .80*           .72*

* Significant at the p < .05 level (two-tailed). Study 1 used only the structured interview format.


3.7. Discussion

3.7.1. Covert integrity assessments and interview format

This study expanded on the findings of Study 1 and sought to uncover variables that might further facilitate the accuracy of lay judgments of integrity-related personality traits. Study 2 examined the structured, unstructured, and informal employment interviews to determine which format yielded more accurate assessments on covert and overt measures of integrity. As expected, more accurate covert integrity assessments were made when the interviewer used the informal interview format to obtain knowledge about the target subject. More specifically, we found that when conducting mock job interviews, the informal format produced significantly more accurate lay judgments of the job applicant's personality traits than either the unstructured or structured format, when applicant–interviewer agreement was the criterion. The study also found that the unstructured format produced significantly more accurate lay judgments of the job applicant's integrity-related traits than the structured format, supporting our original hypothesis.

The study also examined peer ratings of the applicants' integrity-related personality traits (a covert assessment) and correlated them with the interviewers' ratings across the three different interview formats. Impressive levels of peer–interviewer agreement were found across the structured, unstructured, and informal formats (.71, .80, and .72, respectively). However, a significant linear trend was not found across the interview formats. What is remarkable about these results is that the applicant–interviewer agreement correlations across the interview formats (.64, .74, and .78, respectively) were as strong as the levels of applicant–peer agreement across the three interview formats (.88, .80, and .69, respectively). In other words, in just a brief 10-min interview, interviewers were able to assess the applicant's integrity-related traits as accurately as well-acquainted peers did.

The results in general suggest that the informal interview, with its weak structure and free-flowing exchanges, allows for more accurate judgment of integrity-related traits by a lay judge when self-interviewer agreement is the criterion. As noted earlier, previous research on personality judgment between strangers supports the conclusion that weaker situations may result in more accurate personality judgments. Andersen (1984) found that observers of videotaped interviews made more accurate judgments about an individual's personality when the individual was encouraged to talk freely about his or her thoughts and feelings than when he or she was limited to discussing specific information. Blackman (1996) similarly found that participants made significantly more accurate personality judgments of university students who were interacting with a peer in an unstructured than in a structured conversation.

3.7.2. Overt integrity assessment and interview format

Study 2 also revealed that interviewers and peers alike can manifest significant degrees of applicant–interviewer agreement (.42 average across interview formats) and interviewer–peer agreement (.55 average across interview formats) about the target subject's specific integrity level as measured by the SPI overt integrity survey. These agreement levels are not as strong, however, as those manifested by judges using a covert integrity test. We believe that these lower levels of agreement on the SPI are due to the specific and hypothetical nature of the questions included in the overt integrity test.


The SPI results did not support the hypothesis that the informal interview, with its unconstrained format, would produce the most accurate integrity judgments. In fact, the structured, unstructured, and informal interview formats produced equally strong integrity assessments. This finding may be due to a procedural artifact. When the lay interviewers in each of the interview formats received instructions about the study, they were given a copy of the SPI integrity inventory and the Q-set to look over before being reunited with the applicants. It may be that the interviewers in the unstructured and informal conditions recalled the items from the SPI (as these items are already in question format) more readily than the 100 items from the Q-set and incorporated these specific questions into their respective interview formats. Because the interview questions in the structured format are very similar in specificity to the SPI items, this might account for why interviewers in all three interview formats produced equally strong overt integrity judgments. Perhaps if the interviewers had not seen a copy of the SPI or Q-set prior to the interview, the hypothesized linear trend in judgmental accuracy would have appeared.

4. General discussion

The results of Study 1 and Study 2 suggest that lay judges can assess a target individual's integrity level with a substantial degree of accuracy after only a 10-min interaction. What is quite remarkable is that these lay judges were able to assess the target's integrity level not only when using a covert integrity measure, but also when using an overt integrity measure that required the judge to make specific predictions about the target's behavior in various hypothetical situations. Additionally, peers and family members of the target subjects were able to make these accurate assessments, even though the construct of integrity is considered a "hard to judge" trait (Funder & Dobroth, 1987).

Study 2 examined the effect of altering interview context as a way to enhance the accuracy of integrity judgments. The results suggested that for the covert measure of integrity, the type of interview format used systematically affected the accuracy of the lay judge's subsequent judgments. More specifically, lay judges who utilized the informal interview obtained the highest degree of self-interviewer agreement, with the unstructured and structured interview formats producing somewhat lower levels of self-interviewer agreement with regard to the construct of integrity. We believe that the free-flowing structure of the informal interview puts the target subject at ease, and perhaps "off guard," and allows the target to emit more diagnostic cues relevant to the construct of integrity. With significantly more relevant cues available, the lay judge has a higher likelihood of making an accurate personality judgment.

It is important to keep in mind the limitations of these studies. Study 1 and Study 2 were intended to be basic exploratory studies, conducted with college students in a simulated business setting. It is possible that professional recruiters and interviewers are, in general, better judges of personality than college students. Also, the results of the present studies should be interpreted with caution because of the potential unreliability of self-report data. Finally, the limitations of using mock interviews for assessment should be noted: participants in a lab may act and respond differently than job candidates interviewing for an actual position in a company.

The results of these studies should allow lay judges and employers alike to take some comfort in knowing that it is possible in a 10-min interview to judge an individual's integrity level with a good deal of accuracy.


We hope that this research will serve as a springboard for ourselves and other researchers who would like to incorporate these premises and findings into an applied study.

References

Andersen, S. M. (1984). Self-knowledge and social inference: The diagnosticity of cognitive/affective and behavioral data. Journal of Personality and Social Psychology, 46, 294–307.
Bem, D. J., & Funder, D. C. (1978). Predicting more of the people more of the time: Assessing the personality of situations. Psychological Review, 85, 485–501.
Blackman, M. C. (1996). The effect of information on the accuracy and consensus of personality judgments (Doctoral dissertation, University of California, Riverside, 1996). Dissertation Abstracts International, 57, 4074.
Blackman, M. C. (2002a). Personality judgment and the utility of the unstructured employment interview. Basic and Applied Social Psychology, 24, 241–250.
Blackman, M. C. (2002b). The employment interview via the telephone: Are we sacrificing accurate personality judgment for cost efficiency? Journal of Research in Personality, 36(3), 208–223.
Blackman, M. C., & Funder, D. C. (1998). The effect of information on consensus and accuracy in personality judgment. Journal of Experimental Social Psychology, 34, 164–181.
Blackman, M. C., & Funder, D. C. (2002). Effective interview practices for accurately assessing counterproductive traits. International Journal of Selection and Assessment, 10, 109–116.
Block, J. (1978). The Q-sort method in personality assessment and psychiatric research. Palo Alto, CA: Consulting Psychologists Press.
Boye, M. W., & Slora, K. B. (1993). The severity and prevalence of deviant employee activity within supermarkets. Journal of Business and Psychology, 8, 245–253.
Boye, M. W., & Wasserman, A. R. (1996). Predicting counterproductivity among drug store applicants. Journal of Business and Psychology, 10, 337–349.
Brown, T. S., Jones, J. W., Terris, W., & Steffy, B. D. (1987). The impact of pre-employment integrity testing on employee turnover and inventory shrinkage losses. Journal of Business and Psychology, 2, 136–149.
Campion, M. A., Campion, J. E., & Hudson, P. J., Jr. (1994). Structured interviewing: A note on incremental validity and alternative question types. Journal of Applied Psychology, 79, 988–1002.
Campion, M. A., Palmer, D. K., & Campion, J. E. (1997). A review of structure in the selection interview. Personnel Psychology, 50, 655–702.
Caron, S. (2003). Personality characteristics related to counterproductive behaviors in the workplace. Master's thesis, California State University, Fullerton.
Chatman, J. A. (1991). Matching people and organizations: Selection and socialization in public accounting firms. Administrative Science Quarterly, 36, 459–484.
Colvin, C. R., & Funder, D. C. (1991). Predicting personality and behavior: A boundary on the acquaintanceship effect. Journal of Personality and Social Psychology, 60, 884–894.
Funder, D. C. (1995). On the accuracy of personality judgment: A realistic approach. Psychological Review, 102, 652–670.
Funder, D. C. (1999). Personality judgment: A realistic approach to person perception. St. Louis: Academic Press.
Funder, D. C. (2001). Accuracy in personality judgment. In B. W. Roberts & R. Hogan (Eds.), Personality psychology in the workplace (pp. 121–139). Washington, DC: American Psychological Association.
Funder, D. C., & Colvin, C. R. (1988). Friends and strangers: Acquaintanceship, agreement, and the accuracy of personality judgment. Journal of Personality and Social Psychology, 55, 149–170.
Funder, D. C., & Dobroth, K. M. (1987). Differences between traits: Properties associated with inter-judge agreement. Journal of Personality and Social Psychology, 52, 409–418.
Furr, R. M., & Funder, D. C. (1999). A comparison of Q-sort and Likert rating methods of personality assessment. Paper presented at the 79th Annual Convention of the Western Psychological Association, Irvine, CA.
Goldberg, L. R., Grenier, J. R., Guion, R. M., Sechrest, L. B., & Wing, H. (1991). Questionnaires used in the prediction of trustworthiness in pre-employment selection decisions: An APA task force report. Washington, DC: American Psychological Association.
Huffcutt, A. I., Conway, J. M., Roth, P. L., & Stone, N. J. (2001). Identification and meta-analytic assessment of psychological constructs measured in employment interviews. Journal of Applied Psychology, 86, 897–913.


Huffcutt, A. I., Roth, P. L., & McDaniel, M. A. (1996). A meta-analytic investigation of cognitive ability in employment interview evaluations: Moderating characteristics and implications for incremental validity. Journal of Applied Psychology, 81, 459–473.
Ickes, W., Snyder, M., & Garcia, S. (1997). Personality influences on the choice of situations. In R. Hogan, J. Johnson, & S. Briggs (Eds.), Handbook of personality psychology. San Diego, CA: Academic Press.
LoBello, S. G., & Sims, B. N. (1993). Fakability of a commercially produced pre-employment integrity test. Journal of Business and Psychology, 8, 265–273.
McNemar, Q. (1955). Psychological statistics (2nd ed.). New York: Wiley.
Murphy, K. R., & Lee, S. L. (1994). Personality variables related to integrity test scores: The role of conscientiousness. Journal of Business and Psychology, 8, 413–424.
Ones, D. S., Schmidt, F. L., & Viswesvaran, C. (1993). The nomological net for integrity tests. In F. L. Schmidt (Chair), The construct of conscientiousness in personnel selection. Symposium conducted at the annual meeting of the Society for Industrial and Organizational Psychology, San Francisco.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679–703.
Parsons, C. K., Cable, D., & Wilkerson, J. M. (1999). Assessment of applicant work values through interviews: The impact of focus and functional relevance. Journal of Occupational and Organizational Psychology, 72, 561–575.
Paulhus, D. L. (1991). Measurement and control of response bias. In J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of personality and social psychological attitudes (pp. 17–59). New York: Academic Press.
Posthuma, R. A., Morgeson, F. P., & Campion, M. A. (2002). Beyond employment interview validity: A comprehensive narrative review of recent research and trends over time. Personnel Psychology, 55, 1–81.
Ready, R. E., Clark, L., & Watson, D. (2000). Self- and peer-reported personality: Agreement, trait ratability, and the "self-based heuristic." Journal of Research in Personality, 34(2), 208–224.
Reid London House. (2004). Abbreviated Reid Report. Minneapolis, MN: NCS Pearson.
Rynes, S., & Gerhart, B. (1990). Interviewer assessments of applicant "fit": An exploratory investigation. Personnel Psychology, 43, 13–35.
Sackett, P., & Wanek, J. (1996). New developments in the use of measures of honesty, integrity, conscientiousness, dependability, trustworthiness and reliability for personnel selection. Personnel Psychology, 49, 787–829.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.
Schneider, D. J., Hastorf, A. H., & Ellsworth, P. C. (1979). Accuracy in person perception. In A. H. Hastorf, D. J. Schneider, & P. C. Ellsworth (Eds.), Person perception (2nd ed., pp. 204–222). Reading, MA: Addison-Wesley.
Van Vianen, A. E. M., & Van Schie, E. C. M. (1995). Assessment of male and female behaviour in the employment interview. Journal of Community and Applied Social Psychology, 5, 243–257.
Viswesvaran, C., & Ones, D. S. (1997b). Review of the Stanton Survey. Security Journal, 8, 167–169.
Werner, S. H., Jones, J. W., & Steffy, B. D. (1989). The relationship between intelligence, honesty, and theft admissions. Educational and Psychological Measurement, 49, 921–927.