A survey of neuropsychologists’ beliefs and practices with respect to the assessment of effort


Archives of Clinical Neuropsychology 22 (2007) 213–223

Michael J. Sharland a,b, Jeffrey D. Gfeller a,∗

a Department of Psychology, Saint Louis University, 221 North Grand, St. Louis, MO 63103, United States
b Memphis VA Medical Center, United States

Accepted 27 December 2006

Abstract

The current study investigated neuropsychologists’ beliefs and practices with respect to assessing effort and malingering by surveying a sample of NAN professional members and fellows (n = 712). The results from 188 (26.4%) returned surveys indicated that 57% of respondents frequently included measures of effort when conducting a neuropsychological evaluation. While a majority of respondents (52%) rarely or never provide a warning that effort indicators will be administered, 27% of respondents often or always provide such a warning. The five most frequently used measures of effort or response bias were the Test of Memory Malingering (TOMM), MMPI-2 F–K ratio, MMPI-2 FBS, Rey 15-item test, and the California Verbal Learning Test. However, the TOMM, Validity Indicator Profile, Word Memory Test, Victoria Symptom Validity Test, and the Computerized Assessment of Response Bias were rated as most accurate for detecting suboptimal effort. These results and other findings are presented and discussed.

© 2007 National Academy of Neuropsychology. Published by Elsevier Ltd. All rights reserved.

Keywords: Malingering; Effort; Survey; Neuropsychology

Frequently, neuropsychologists are called upon to assess cognitive functioning in cases that may involve litigation or compensation of some sort (Reynolds, 1998). For many reasons, the results obtained on tests may be unreliable or invalid (Binder & Rohling, 1996). One factor that may produce invalid results is insufficient effort, making it important that neuropsychologists assess for suboptimal effort and malingering. There are diverse methods and measures one can use to assess level of effort and feigned cognitive impairment (Larrabee, 2003; Millis & Volinsky, 2001; Mittenberg, Aguila-Puentes, Patton, Canyock, & Heilbronner, 2002; Nies & Sweet, 1994; Pankratz, 1988; Reynolds, 1998). Some measures have better sensitivity or specificity, or have been researched more thoroughly, than others (Greiffenstein, Baker, & Gola, 1996; Hartmann, 2002; Inman & Berry, 2002; Meyers & Volbrecht, 2003; Rose, Hall, Szalda-Petree, & Bach, 1998; Schretlen, Brandt, Krafft, & Van Gorp, 1991; Sweet & King, 2002; Thompson, 2002). Based on this literature, one might expect that a consensus has been reached among neuropsychologists regarding which measures are most effective, and which measures should be used regularly to assess suboptimal effort. Surprisingly, few studies have examined which measures neuropsychologists typically use, and which measures they consider effective in assessing for suboptimal effort.

Rabin, Barr, and Burton (2005) recently conducted an in-depth survey of neuropsychologists’ practice characteristics and test usage. Doctoral-level members of the National Academy of Neuropsychology (NAN), the International Neuropsychological Society (INS), and APA Division 40 were surveyed such that a non-overlapping list of participants

Corresponding author. Tel.: +1 314 977 2289; fax: +1 314 977 1006. E-mail address: [email protected] (J.D. Gfeller).

0887-6177/$ – see front matter © 2007 National Academy of Neuropsychology. Published by Elsevier Ltd. All rights reserved. doi:10.1016/j.acn.2006.12.004


was compiled, and approximately one-third of this sample pool were mailed surveys. While Rabin et al. surveyed neuropsychologists regarding the assessment instruments they most frequently used in general and for specific cognitive domains, they did not specifically survey neuropsychologists regarding measures used to assess for effort. Two measures of effort, however, were ranked in the top 40 memory assessment instruments: the Test of Memory Malingering (TOMM, 19th) and the Rey 15-item test (40th). No other specific measures of effort were listed in Rabin et al.’s findings.

Mittenberg, Patton, Canyock, and Condit (2002) surveyed 144 members of the American Board of Clinical Neuropsychology (ABCN) regarding their perceptions of base rates of suboptimal effort across different areas of neuropsychology and the methods for detecting suboptimal effort endorsed by neuropsychologists. Respondents indicated that in approximately 30% of civil cases, 20% of criminal cases, and 10% of medical cases conducted, examinees exhibit either symptom exaggeration or probable malingering. Furthermore, when questioned specifically about the methods used for determining symptom exaggeration/probable malingering, ABCN neuropsychologists typically examined multiple sources or methods to detect malingering. On average, respondents endorsed examining seven of nine indicators of suboptimal effort, including symptom validity tests. However, specific data on which measures were used most frequently were not reported.

Lally (2003) conducted a survey of 187 diplomates in forensic psychology regarding which measures they believed were acceptable to use in various areas of forensic practice. One of the forensic areas examined was malingering, but the author did not differentiate between neurocognitive and psychiatric malingering. Fifty-three diplomates completed the section dealing with malingering.
The respondents recommended using both the MMPI-2 and the Structured Interview of Reported Symptoms (SIRS) for detecting malingering. Other tests found to be acceptable in assessing malingering were the WAIS-III, Validity Indicator Profile (VIP), TOMM, Rey 15-item test, and Halstead Reitan Battery.

Recently, Slick, Tan, Strauss, and Hultsch (2004) published a survey of purported experts’ practices in detecting malingering. “Experts” were defined as persons who had published two or more articles regarding assessment of effort or malingering within a designated 5-year period. Thirty-nine participants were identified for the study, of which 24 completed and returned the survey. Respondents indicated that they most frequently used the TOMM to detect suboptimal performance. This measure was followed by the Rey 15-item test and the Warrington Recognition Memory Test (RMT). Other measures of effort identified as being used to detect suboptimal performance included the WMT, VIP, Computerized Assessment of Response Bias (CARB), Portland Digit Recognition Test (PDRT), Victoria Symptom Validity Test (VSVT), and Digit Memory Test. The majority of respondents estimated the prevalence of definite malingering as occurring in 5–30% of cases, with the modal value between 10% and 20% of cases. The estimated base rate of possible malingering varied between 5% and 30%, with the modal value between 5% and 10% of cases. Approximately half (54.2%) of respondents stated that they never warned examinees that measures sensitive to detecting suboptimal effort were utilized, while 37.5% of respondents reported that they always warned examinees. The respondents typically conveyed their opinion regarding suboptimal performance by stating that the test data were invalid (91%), that test results suggested or indicated exaggeration (83%), or that test results were inconsistent with the severity of injury (96%).
Of note, 46% of respondents often or always stated that test results suggest or indicate malingering when such indications were present.

The current study was conducted to assess information similar to that obtained by Slick et al. (2004) and Mittenberg, Patton, et al. (2002) in a larger and more representative sample of practicing neuropsychologists. To this end, the study surveyed a random sample of 712 professional members and fellows of NAN. The current study examined which tests and indicators were being utilized by neuropsychologists to detect suboptimal effort and how these indicators compared to those of the sample of neuropsychologists surveyed by Slick et al. Additionally, neuropsychologists were asked to rate a diverse list of 29 effort indicators regarding how well the indicators correctly classified examinees as giving adequate, or less than adequate, effort. Finally, neuropsychologists were surveyed regarding how they handled informing examinees about the inclusion of effort indicators in an assessment, as well as how they communicated results when examinees were suspected of giving inadequate effort or malingering.

1. Method

1.1. Participants

Surveys were sent to professional members and fellows of NAN. A random sample of approximately one-third yielded a sample size of 712 participants. In all, 712 surveys were mailed to both NAN professional members and


fellows. Two hundred and ten surveys were returned, or 29.5% of the total sample. Of the 210 returned surveys, 22 were missing data such that they could not be used in the final analyses. This left a final usable return rate of 188 surveys (26.4%).

1.2. Measures

The survey questionnaires contained two sections, and were based in part on the methodology of Slick et al. (2004) and Mittenberg, Patton, et al. (2002). The first section was a demographics questionnaire similar to those used by Rabin et al. (2005) and Sweet, Moberg, and Suchy (2000). The second section requested information regarding practices and methods used to assess for effort. The survey asked respondents to rate the frequency with which they use specific measures to assess effort, followed by ratings of how accurately specific effort indicators classified individuals.

The response types in the questionnaires varied. Most of the questions provided multiple choice responses; however, several open-ended questions were presented. For ratings regarding use of specific measures of effort, a Likert-type scale was used and participants were asked to circle one of the following answers: 1 = never, 2 = rarely, 3 = sometimes, 4 = often, 5 = always. In rating the classification accuracy of specific measures of effort, participants rated each measure on a scale of 1–10, where 1 = extremely poor classification accuracy and 10 = extremely good classification accuracy. If participants were unfamiliar with a measure or did not know its classification accuracy, they were asked to respond DK = do not know.

1.3. Procedure

A cover letter, a survey, and a return envelope with prepaid postage were mailed to 712 randomly selected clinical neuropsychologists who were NAN professional members or fellows. Six weeks after the initial mailing a follow-up postcard was sent to all participants, thanking those who had returned surveys and encouraging those who had not completed the survey to do so.

2. Results

The mean age of the respondents was 51 years, with an average of 17 years practicing fully licensed. Thirty percent of the sample was female, and 90% held Ph.D.s. The majority of respondents obtained doctoral degrees in the field of clinical psychology, and approximately 30% were board certified in neuropsychology. The participants were also generally evenly represented by geographic region. Demographic information from the current study, and demographic data from a recent comprehensive survey of the field of neuropsychology that sampled over 700 members of INS, NAN, and Division 40 of the APA (Rabin et al., 2005), are summarized in Table 1. Although the current study used a more circumscribed sample (i.e., NAN professional members or fellows), the resulting demographics are similar to those of Rabin et al. in many respects. These findings lend support to the contention that the current survey results may be generalized to neuropsychologists currently engaged in professional practice.

With respect to assessing effort, 56% of respondents indicated that they often or always include a measure of effort in a neuropsychological evaluation, while 89% indicated that they often or always specifically encourage examinees to give their best effort when performing tests. Additionally, 22% of respondents reported they often or always give examinees some type of warning, prior to testing, that tests or indicators sensitive to detecting inadequate effort or symptom exaggeration will be administered, while 52% of respondents rarely or never give this warning (see Table 2).

Respondents were also asked questions regarding their estimates of base rates for insufficient effort, malingering, and exaggeration of deficits, both in their own practice and in the field of neuropsychology in general.
The respondents’ median estimate was that 10% of the persons evaluated in their own practice during the previous year “probably gave insufficient effort,” while 5% “definitely gave insufficient effort.” In the field of neuropsychology in general, respondents’ median estimate was that 20% of examinees involved in civil litigation for monetary compensation deliberately exaggerate their deficits or feign cognitive impairment, and that approximately 5% of individuals exaggerate deficits or feign cognitive impairment when there is no ongoing litigation or possibility of monetary compensation. The range of estimates regarding base rates was considerable (see Table 3).

Respondents were also questioned about the methods they use to communicate exaggerated or malingered deficits in reports or professional communications when examinees obtain test results that they believe are indicative of suboptimal


Table 1
Comparison of demographics between the current study and Rabin et al. (2005)

                                 Current study (%)   Rabin et al. (%)
Gender
  Male                           70.2                59.7
  Female                         29.8                40.3
Degree attained
  Ph.D.                          89.9                87.4
  Psy.D.                         7.9                 9.4
  Ed.D.                          2.1                 3.2
Field of degree obtained
  Clinical psychology            66.1                62.1
  Counseling psychology          12.7                11.2
  Other                          11.0                9.5
  Neuropsychology                7.9                 10.7
  Neurosciences                  2.1                 1.5
Geographic region of practice
  South                          28.9                n/a
  West                           28.9                n/a
  Northeast                      23.0                n/a
  North central                  19.3                n/a
Board certification
  ABPP/CN                        18.7                16.3
  ABPN                           11.2                5.5
  Total                          29.9                21.8

Table 2
Rates of assessing for effort and warnings for effort (percentage of respondents)

How often do you include a measure to assess for level of effort in a neuropsychological evaluation?
  Never 4.8, Rarely 11.1, Sometimes 28.6, Often 30.7, Always 24.9
Prior to commencing testing do you specifically encourage examinees to give their best effort when performing tests?
  Never 0.5, Rarely 3.7, Sometimes 6.9, Often 19.0, Always 69.8
Prior to commencing testing, do you give examinees any type of warning regarding the fact that psychological tests may be sensitive to poor effort, exaggeration, or faking of deficits?
  Never 22.2, Rarely 30.2, Sometimes 25.9, Often 10.6, Always 11.1

Table 3
Estimated base rates of insufficient effort and malingering

Based on the examinees you have seen for evaluation in the previous year, what percentage do you believe:
  Probably gave insufficient effort:      median 10%, minimum 0%, maximum 90%
  Definitely gave insufficient effort:    median 5%,  minimum 0%, maximum 80%
  Probably were malingering:              median 3%,  minimum 0%, maximum 50%
  Definitely were malingering:            median 1%,  minimum 0%, maximum 30%
Please estimate the percentage of examinees in general, who deliberately exaggerate their deficits or feign cognitive impairment in cases involving civil litigation or compensation:
  median 20%, minimum 0%, maximum 90%
Please estimate the percentage of examinees in general, who deliberately exaggerate deficits or feign cognitive impairment where there is no litigation or possibility of compensation:
  median 5%, minimum 0%, maximum 90%


Table 4
Comparison of communication of suspect test results between Slick et al. (2004) and the current study

When examinees obtain test results indicative of exaggerated deficits or malingering, how do you express this opinion in a report or professional communication? How often do you say that:

                                                        Slick et al. (% of respondents)   Current study (% of respondents)
                                                        Never  Rarely  Often  Always      Never  Rarely  Often  Always
Test results are inconsistent with severity of injury   4.2    0.0     70.8   25.0        3.3    11.4    48.9   36.4
Test results suggest or indicate exaggeration           4.2    8.3     50.0   33.3        4.3    14.7    57.6   23.4
No firm conclusions can be drawn                        4.2    37.5    41.7   16.7        7.1    27.3    45.9   19.7
Test data are invalid                                   0.0    8.3     45.8   45.8        10.4   30.6    34.4   24.6
Test results suggest or indicate malingering            12.5   41.7    29.2   16.7        24.5   46.7    21.7   7.1

effort. This question was taken from Slick et al.’s (2004) survey. Eighty-five percent of the sample reported that they often or always state in reports that test results are inconsistent with the severity of injury; 81% of respondents indicated that they often or always say that test results suggest or indicate exaggeration. Sixty-six percent of respondents always or often state that no firm conclusions can be drawn from the test results, while 59% of respondents always or often say that the test data are invalid. Of particular note, only 29% of the sample reported that they often or always state that test results suggest or indicate malingering, while 24% of the respondents never state that in a report or professional communication. These results, and those obtained by Slick et al., are summarized in Table 4.

There are a number of ways to detect suboptimal effort in a neuropsychological evaluation. Mittenberg, Patton, et al. (2002) surveyed ABCN neuropsychologists regarding the methods they use to detect suboptimal effort. Some of the methods used were: comparing the severity of the injury with the test results, using empirical cut-offs on forced-choice tests, and examining implausible self-reported symptoms during the clinical interview. The respondents in the current study were asked the same questions regarding how frequently they use specific methods for detecting insufficient effort. The most frequently used methods for detecting poor effort or malingering were: comparing the severity of cognitive impairment with the severity of the condition when the two were inconsistent, and the use of discrepancies among records, self-report, and observed behaviors, with 88% and 87% of the sample often or always using these methods, respectively.
The least frequently used methods for detecting suboptimal effort or malingering were: the use of scores on empirically derived discriminant function analyses (10% of respondents often or always using them), and the use of scores below empirical cut-offs on embedded measures of effort (46% of respondents often or always using this method). Table 5 lists the ten methods of detecting poor effort or malingering and the frequencies with which they are used.

Table 5
Methods used to detect poor effort or malingering (percentage of respondents)

Method                                                            Never  Rarely  Sometimes  Often  Always
Severity of cognitive impairment inconsistent with the condition  0      2.7     9.1        34.8   53.5
Discrepancies among records, self-report, and observed behavior   0.5    1.6     10.8       40.3   46.8
Pattern of cognitive impairment inconsistent with the condition   0      2.7     10.7       39.0   47.6
Implausible self-reported symptoms in interview                   1.1    3.7     15.0       33.2   47.1
Implausible changes in test scores across repeated examinations   1.1    7.0     28.0       27.4   36.6
Scores below empirical cut-offs on measures specific to the
  assessment of effort/malingering                                8.6    11.2    16.6       41.7   21.9
Scores below empirical cut-offs on forced choice tests            3.8    10.2    24.7       34.4   26.9
Validity scales on objective personality tests                    5.9    13.4    25.7       29.9   25.1
Scores below empirical cut-offs on embedded measures of effort    12.4   13.4    28.0       30.1   16.1
Scores on empirically derived discriminant function analyses
  indicative of poor effort                                       37.2   28.1    23.8       8.6    2.2


Table 6
Comparison of effort indicators between Mittenberg, Patton, et al. (2002) and the current study

Mittenberg, Patton, et al. (indicators, most to least frequently used):
1. Severity of cognitive impairment inconsistent with condition
2. Pattern of cognitive test performance inconsistent with condition
3. Scores below empirical cutoffs on forced choice tests
4. Discrepancies among records, self-report, and observed behavior
5. Implausible self-reported symptoms in interview
6. Scores below empirical cutoffs on other malingering tests
7. Implausible changes in test scores across repeated examinations
8. Scores above validity scale cutoffs on objective personality tests
9. Scores below chance on forced choice tests

Current study (indicators, most to least frequently used):
1. Severity of cognitive impairment inconsistent with condition
2. Pattern of cognitive test performance inconsistent with condition
3. Discrepancies among records, self-report, and observed behavior
4. Implausible self-reported symptoms in interview
5. Implausible changes in test scores across repeated examinations
6. Scores below empirical cutoffs on forced choice tests
7. Scores below empirical cutoffs on measures specific to the assessment of effort/malingering (e.g., Validity Indicator Profile)
8. Validity scales on objective personality tests (e.g., MMPI-2)
9. Scores below empirical cutoffs on embedded measures of effort (e.g., Reliable Digit Span)
10. Scores on empirically derived discriminant function analyses indicative of poor effort

The rankings of methods used for assessing suboptimal effort in the current study and those of Mittenberg, Patton, et al. (2002) are summarized in Table 6. A notable similarity between the results of Mittenberg, Patton, et al. and the current study is that both groups relied most frequently on qualitative or subjective methods to identify inadequate effort or symptom exaggeration, namely inconsistencies between the severity of cognitive impairment and the condition, and inconsistencies between the condition and patterns of cognitive performance. However, when examined more closely, 57% of the Mittenberg, Patton, et al. sample reported using scores below empirical cutoffs on forced choice measures to support impressions of probable malingering, while 61% of participants in the current study stated that they often or always used this same indicator to assess for poor effort or malingering. Therefore, empirical cutoffs on forced choice measures appear to be used with similar frequency in the two samples surveyed, but other indicators appear to be used more commonly by the neuropsychologists in the current study.

To determine which specific measures of effort were used, a list of 29 empirically studied measures of effort was provided in the survey, and respondents were asked to rate the frequency with which they utilized each indicator or test to detect cognitive deception, suboptimal effort, or malingering. Participants were provided space to write in any methods they used that were not contained in the list. The most frequently used measure to detect poor effort was the Test of Memory Malingering, with 75.3% of the respondents indicating that they use the measure, and 63% of the sample stating that they often or always use it.
The next most frequently used measures were the MMPI-2 F–K ratio (76.5% use, 46% often or always), the MMPI-2 FBS scale (75.1% use, 43% often or always), the Rey 15-item test (74.4% use, 42% often or always), and the California Verbal Learning Test (CVLT; 63.1% use, 43% often or always). The least frequently used measures to detect poor effort or malingering were the WMS-R/WMS-III discriminant function (15.8% use, 3.3% often or always), WAIS-R/WAIS-III discriminant function (18.5% use, 3.8% often or always), Halstead Reitan Battery discriminant function (16.8% use, 4.3% often or always), the Warrington Recognition Memory Test (23.4% use, 5.4% often or always), and the Victoria Symptom Validity Test (18.1% use, 10% often or always). These results are summarized in Table 7.

Finally, participants rated the overall classification accuracy of the 29 measures of suboptimal effort, using a 10-point scale, with one being the least accurate and ten being the most accurate. If participants were not familiar enough with a specific measure to rate its classification accuracy, they were asked to indicate this. The measure rated as most accurate for classifying individuals as giving adequate or inadequate effort was the TOMM, with a mean rating of 7.5 out of 10. The other measures rated as highly accurate in classifying individuals included the VIP, the WMT, the VSVT, and the CARB. Of particular interest, the five measures rated as most accurate were all measures specifically designed to detect suboptimal effort. Furthermore, two of these measures (VSVT and CARB) were among the ten least used measures to detect suboptimal effort, while only one, the TOMM, was among the ten most frequently used measures. One might wonder why measures rated as highly accurate were not used more frequently. To explain this phenomenon, the extent to which individuals were familiar with each specific measure was examined.
These findings indicated that 79% of the participants were familiar enough with the TOMM


Table 7
Frequency of use of measures to detect malingering (percentage of respondents)

Measure                                                   Never  Rarely  Often  Always
Test of Memory Malingering                                24.7   12.4    37.1   25.8
MMPI-2 F–K ratio                                          23.5   30.1    34.4   12.0
MMPI-2 FBS scale                                          24.9   31.9    28.6   14.6
Rey 15-item test                                          25.5   32.6    29.3   12.5
California Verbal Learning Test                           36.8   20.3    29.1   13.7
WAIS-R/WAIS-III Reliable Digit Span                       37.5   24.5    25.0   13.0
Rey Complex Figure Test                                   35.3   28.8    26.6   9.2
Wisconsin Card Sorting Test                               38.9   32.4    21.1   7.6
Trail Making Test: TMT-A/TMT-B ratio                      46.2   26.1    20.1   7.6
Booklet Category Test: Subtest I and II errors            52.7   23.9    16.3   7.1
Rey Auditory Verbal Learning Test                         56.0   17.9    20.7   5.4
WMS-III Aud. Recog. Raw Score                             52.2   26.1    16.3   5.4
Word Memory Test                                          59.3   15.9    15.9   8.8
Validity Indicator Profile                                59.2   20.1    15.2   5.4
WAIS-R/WAIS-III Vocab-Digit Span Difference               57.9   23.0    14.8   4.4
WMS-R General Memory/Attention Concentration Difference   58.5   23.5    15.8   2.2
WMS-III Faces Raw Score                                   57.8   28.6    9.2    4.3
Dot Counting Test                                         61.4   24.5    11.4   2.7
Seashore Rhythm Test                                      66.3   17.9    10.9   4.9
Memory Assessment Scales                                  70.7   15.5    8.8    5.0
Portland Digit Recognition Test                           67.8   22.8    7.2    2.2
Booklet Category Test: Bolter Items                       69.2   21.1    6.5    3.2
Computerized Assessment of Response Bias                  74.3   13.1    9.3    3.3
Hiscock Digit Memory Test                                 74.5   17.4    4.9    3.3
Victoria Symptom Validity Test                            82.0   8.2     6.6    3.3
Warrington Recognition Memory Test                        76.5   18.0    3.8    1.6
HRB discriminant function                                 82.6   13.0    3.8    0.5
WAIS-R/WAIS-III discriminant function                     81.5   14.7    3.8    0.0
WMS-R/WMS-III discriminant function                       84.2   12.5    3.3    0.0

to rate its classification accuracy, while only 29% and 33% of respondents, respectively, were familiar enough with the VSVT and CARB to rate their accuracy (see Table 8).

3. Discussion

The current study surveyed a large sample of neuropsychologists regarding their beliefs and practices with respect to the assessment of effort. To accomplish this, 712 surveys were mailed to a random sample of approximately one-third of the professional members and fellows of NAN. Two hundred and ten (29%) surveys were returned, with 188 useable responses (26%) being obtained.

With respect to the assessment of effort, 56% of respondents stated that they often or always include a measure to assess for level of effort in a neuropsychological evaluation. Only 22% of respondents, however, often or always provide any type of warning prior to commencing testing regarding the fact that psychological tests may be sensitive to poor effort, exaggeration, or faking of deficits. In fact, a majority of respondents (52%) indicated they never or rarely provide any type of warning. This finding is consistent with the disparate opinions reported in Slick et al.’s (2004) survey of experts. The dilemma regarding whether or not to warn examinees has also been debated in the literature (Johnson & Lesniak-Karpiak, 1997; Youngjohn, Lees-Haley, & Binder, 1999). Regarding the issue of warning, neuropsychologists are encouraged to review NAN’s official statement regarding “Independent and Court-Ordered Forensic Neuropsychological Examinations” (Bush, 2005), which addresses this issue. The statement contains a sample Informed Consent document that instructs examinees to give their best effort during the testing. The document also explicitly indicates, “Part of the examination will address the accuracy of your responses, as well as the degree of effort you exert on the tests” (page 1005). Neuropsychologists may also wish to review Victor and


Table 8
Classification accuracy and familiarity with specific measures of effort

Measure                                          Mean   Standard deviation   DK responses (%)
Test of Memory Malingering                       7.5    1.8                  20.8
Validity Indicator Profile                       7.0    2.1                  54.0
Word Memory Test                                 7.0    2.3                  54.0
Victoria Symptom Validity Test                   6.9    2.1                  70.5
Computerized Assessment of Response Bias         6.6    2.0                  66.7
Portland Digit Recognition Test                  6.3    2.1                  51.4
Hiscock Digit Memory Test                        5.9    2.3                  66.1
MMPI-2 FBS Scale                                 5.8    2.3                  24.3
California Verbal Learning Test                  5.8    1.9                  34.8
Warrington Recognition Memory Test               5.7    2.3                  69.9
WAIS-R/WAIS-III Reliable Digit Span              5.6    2.0                  47.7
Rey Auditory Verbal Learning Test                5.3    1.9                  54.8
Booklet Category Test: Subtest I and II errors   5.2    2.3                  55.4
WMS-III Aud. Recog. Raw Score                    5.2    1.9                  56.8
MMPI-2 F–K ratio                                 5.1    2.1                  17.5
Dot Counting Test                                5.0    2.2                  48.9
Rey 15-item test                                 5.0    2.2                  18.6
Memory Assessment Scales                         4.9    2.2                  74.0
Booklet Category Test: Bolter Items              4.9    2.1                  79.7
Rey Complex Figure Test                          4.9    2.2                  45.8
WMS-III Faces Raw Score                          4.8    1.9                  59.1
WAIS-R/WAIS-III Vocab-Digit Span Difference      4.7    1.8                  56.3
Trail Making Test: TMT-A/TMT-B ratio             4.7    2.0                  55.1
WMS-R GMI/ACI Difference                         4.7    2.0                  59.7
Wisconsin Card Sorting Test                      4.5    2.0                  42.0
HRB discriminant function                        4.5    2.3                  76.7
WAIS-R/WAIS-III discriminant function            4.2    2.0                  81.8
WMS-R/WMS-III discriminant function              4.2    2.0                  81.3
Seashore Rhythm Test                             4.1    2.0                  65.5

Abeles’ (2004) article regarding the ethical considerations for both attorneys and clinicians concerning client coaching and issues of informed consent regarding the detection of inadequate effort. These authors recommend that examinees be instructed to give their best effort, but that clinicians should not specifically warn that malingering indicators will be administered.

The current results indicated that respondents’ estimations of base rates regarding inadequate effort varied depending on the wording of specific survey questions (e.g., gave insufficient effort, were malingering, feigned cognitive impairment). Respondents’ perceived base rates of probable and definite malingering were lower than their base-rate estimates of probable and definite insufficient effort. Additionally, respondents’ estimates of base rates of deliberate exaggeration or feigned cognitive impairment were approximately four times higher for cases that involved litigation and monetary compensation than for cases where there was no possibility of monetary compensation. Overall, the current base-rate estimates are generally consistent with those reported by Mittenberg, Patton, et al. (2002) and Slick et al. (2004), which indicate that a substantial minority of litigants are believed to give insufficient effort during neuropsychological testing.

When examinees obtain results that are indicative of insufficient effort, respondents in the current study stated that they reported these findings through reports or professional communications in a number of ways. The most frequent method for reporting suspect test results was to indicate that test results were inconsistent with the severity of the injury, while the least frequently used statement was that test results suggest or indicate malingering. Both respondents in the current study and those surveyed by Slick et al. (2004) most often report suspect test results as being inconsistent with the severity of the injury.
Participants in the Slick et al. survey were more likely to describe test data as invalid (92% often or always), as compared to respondents in the current study (59% often or always). Participants in the current study were similar to the experts in the Slick et al. survey in describing test results as suggesting or indicating exaggeration (81% and 83%, respectively). Finally, individuals in the current study were less likely to describe test
results as suggestive or indicative of malingering, compared to individuals surveyed by Slick et al. Specifically, 29% of individuals in the current study often or always state that test results are indicative of malingering, whereas nearly half (46%) of individuals surveyed by Slick et al. do so.

With respect to measures specifically designed to detect suboptimal effort, Slick et al. (2004) found that the TOMM and the Rey 15-item test were the most frequently used. These results were comparable in the current study, as the TOMM and the Rey 15-item test were the two most highly ranked performance-based tests of suboptimal effort. Although the Rey 15-item test was frequently utilized, respondents in the current survey appeared to be aware of the literature critical of the test’s lack of sensitivity and specificity (e.g., Spreen & Strauss, 1998), as the test ranked 17th when respondents provided classification accuracy ratings for the various measures. Respondents in the current study also frequently used effort indicators derived from traditional neuropsychological tests, as these “embedded” indicators comprised 6 of the top 10 most commonly used measures of effort. This finding mirrors the results of Slick et al., who reported that their experts routinely relied on effort indicators derived from conventional neuropsychological tests.

The above findings have particular relevance for states that continue to use the Frye standard for admissibility of evidence. Briefly, the Frye standard holds that scientific evidence is admissible in court as long as there is general acceptance of the test or method by the scientific community (Mossman, 2003).
States that continue to use the Frye standard include New York, Pennsylvania, Maryland, Florida, Alabama, Illinois, Michigan, Minnesota, North Dakota, Kansas, Arizona, California, Washington, and Hawaii (Cheng & Yoon, 2005). The results of the current study indicate that frequently used measures such as the TOMM, MMPI-2 F–K ratio, MMPI-2 FBS, and Rey 15-item test would all meet the Frye standard for admissibility, as approximately three-quarters of the sample surveyed stated that they used these measures to detect suboptimal effort. Because respondents presumably would not use a measure they considered lacking in clinical utility, one can assume that the majority of neuropsychologists surveyed have some degree of confidence in these measures to detect symptom exaggeration or suboptimal effort.

The TOMM was not only the most frequently used measure of effort; it was also rated as having the best classification accuracy in discriminating between insufficient and adequate effort. Of particular interest, the five measures rated highest with respect to classification accuracy were all measures specifically designed to detect suboptimal effort. Furthermore, two of these measures (the VSVT and the CARB) were among the 10 least frequently used measures of suboptimal effort, while only the TOMM was among the 10 most frequently used. As indicated previously, the extent to which individuals were familiar with the various measures helps explain this pattern. For example, 79% of the participants were familiar enough with the TOMM to rate its classification accuracy, whereas only 29% and 33% of respondents were familiar enough with the VSVT and the CARB, respectively, to rate their accuracy. The VSVT and CARB are also computer-administered tests, which may partially account for their less frequent utilization.

Some limitations of the current study merit brief discussion.
Most notably, the response rate obtained by the current study (29% returned, with 26% usable surveys) is lower than desired and below the rates obtained by other studies (Mittenberg, Patton, et al., 2002; Rabin et al., 2005; Slick et al., 2004). One possible reason for the lower return rate is that the assessment of suboptimal effort is not a universal concern among neuropsychologists. Another is that incentives were not provided to potential participants; Rabin and colleagues provided small incentives (i.e., book tabs with a brain logo and bookmarks). Although the usable return rate was lower than desirable, the demographic characteristics of the current sample were similar in a number of respects to those obtained in the Rabin et al. (2005) survey, which supports the contention that the current findings are generally representative of neuropsychologists engaged in professional practice.

Another limitation of the current study was that the questions asked were not always identical to those asked in previous studies (Mittenberg, Patton, et al., 2002; Rabin et al., 2005; Slick et al., 2004); therefore, comparisons between studies were not always exact. For example, Mittenberg, Patton, and colleagues surveyed board-certified neuropsychologists regarding base rates of symptom exaggeration and malingering with respect to their own caseloads and litigation. The current study did not limit base-rate information to litigation cases; therefore, base rates for suboptimal effort in participants’ own caseloads may be lower than those obtained by Mittenberg, Patton, et al. Moreover, Mittenberg, Patton, et al. separated base rates of malingering by work setting, which was not done in the current study. The goals of the current study were not identical to those of either Mittenberg, Patton, et al.
or Slick et al.; hence, some of the information obtained in the current study differed from theirs. To allow comparisons between studies, questions were asked using the same wording and response scale when possible. If a previous study provided a
four-item Likert-type response scale, then that format was utilized in the current study unless it was deemed to prevent the collection of important information. A final limitation of the survey is that respondents were not asked to indicate the number or percentage of forensic evaluations they had completed within the last year. Many of the respondents may perform relatively few neuropsychological evaluations within a legal context. If so, this might account for the fact that substantial percentages of the respondents lacked familiarity with many of the measures of effort, and for the lower estimates of cases involving malingering or insufficient effort relative to other estimates reported in the literature (Larrabee, 2003; Mittenberg, Patton, et al., 2002).

In spite of the limitations noted above, the results of the current survey provide neuropsychologists with useful information regarding the contemporary practices and opinions of their professional peers concerning the methods and measures used to assess effort, symptom exaggeration, or malingering. The results also provide information regarding the perceived base rates of malingering in litigation and non-litigation contexts, the diverse opinions regarding whether to “warn” examinees about the inclusion of measures of effort prior to testing, and how neuropsychologists communicate results when examinees are suspected of giving insufficient effort.

References

Binder, L. M. (2002). The Portland Digit Recognition Test: A review of validation data and clinical use. Journal of Forensic Neuropsychology, 2, 27–41.

Binder, L. M., & Rohling, M. L. (1996). Money matters: A meta-analytic review of the effects of financial incentives on recovery after closed head injury. American Journal of Psychiatry, 153, 7–10.

Bush, S. S. (2005). Independent and court-ordered forensic neuropsychological examinations: Official statement of the National Academy of Neuropsychology. Archives of Clinical Neuropsychology, 20, 997–1007.
Cheng, E. K., & Yoon, A. H. (2005). Does Frye or Daubert matter? A study of scientific admissibility standards. Virginia Law Review, 91, 471–513.

Greiffenstein, M. F., Baker, W. J., & Gola, T. (1996). Comparison of multiple scoring methods for Rey’s malingered amnesia measures. Archives of Clinical Neuropsychology, 11, 283–293.

Hartmann, D. (2002). The unexamined lie is a lie worth fibbing: Neuropsychological malingering and the Word Memory Test. Archives of Clinical Neuropsychology, 17, 709–714.

Inman, T. H., & Berry, D. T. (2002). Cross-validation of indicators of malingering: A comparison of nine neuropsychological tests, four tests of malingering, and behavioral observations. Archives of Clinical Neuropsychology, 17, 1–23.

Johnson, J. L., & Lesniak-Karpiak, K. (1997). The effect of warning on malingering on memory and motor tasks in college samples. Archives of Clinical Neuropsychology, 12, 231–238.

Lally, S. (2003). What tests are acceptable for use in forensic evaluations? A survey of experts. Professional Psychology: Research and Practice, 34, 491–498.

Larrabee, G. J. (2003). Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410–425.

Meyers, J. E., & Volbrecht, M. E. (2003). A validation of multiple malingering detection methods in a large clinical sample. Archives of Clinical Neuropsychology, 18, 261–276.

Millis, S. R., & Volinsky, C. T. (2001). Assessment of response bias in mild head injury: Beyond malingering tests. Journal of Clinical & Experimental Neuropsychology, 23, 809–828.

Mittenberg, W., Aguila-Puentes, G., Patton, C., Canyock, E., & Heilbronner, R. (2002). Neuropsychological profiling of symptom exaggeration and malingering. Journal of Forensic Neuropsychology, 3, 227–240.

Mittenberg, W., Patton, C., Canyock, E., & Condit, D. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical & Experimental Neuropsychology, 24, 1094–1102.

Mossman, D. (2003). Daubert, cognitive malingering, and test accuracy. Law and Human Behavior, 27, 229–249.

Nies, K. J., & Sweet, J. J. (1994). Neuropsychological assessment and malingering: A critical review of past and present strategies. Archives of Clinical Neuropsychology, 9, 501–552.

Pankratz, L. (1988). Malingering on intellectual and neuropsychological measures. In R. Rogers (Ed.), Clinical assessment of malingering and deception (pp. 169–194). New York: Guilford Press.

Rabin, L., Barr, W., & Burton, L. (2005). Assessment practices of clinical neuropsychologists in the United States and Canada: A survey of INS, NAN, and APA Division 40 members. Archives of Clinical Neuropsychology, 20, 33–65.

Reynolds, C. (1998). Detection of malingering during head injury litigation. New York: Plenum Press.

Rose, F. E., Hall, S., Szalda-Petree, A. D., & Bach, P. J. (1998). A comparison of four tests of malingering and the effects of coaching. Archives of Clinical Neuropsychology, 13, 349–363.

Schretlen, D., Brandt, J., Krafft, L., & Van Gorp, W. (1991). Some caveats in using the Rey 15-item memory test to detect malingered amnesia. Psychological Assessment, 3, 667–672.

Slick, D., Tan, J., Strauss, E., & Hultsch, D. (2004). Detecting malingering: A survey of experts’ practices. Archives of Clinical Neuropsychology, 19, 465–473.
Spreen, O., & Strauss, E. (1998). A compendium of neuropsychological tests: Administration, norms, and commentary (2nd ed.). London: Oxford University Press.

Sweet, J., & King, J. (2002). Category Test validity indicators: Overview and practice recommendations. Journal of Forensic Neuropsychology, 3(1–2), 241–274.

Sweet, J., Moberg, J., & Suchy, Y. (2000). Ten-year follow-up survey of clinical neuropsychologists: Part I. Practices and beliefs. The Clinical Neuropsychologist, 14, 18–37.

Thompson, G. B. (2002). The Victoria Symptom Validity Test: An enhanced test of symptom validity. Journal of Forensic Neuropsychology, 2(3–4), 43–67.

Victor, T. T., & Abeles, N. (2004). Coaching clients to take psychological and neuropsychological tests: A clash of ethical obligations. Professional Psychology: Research and Practice, 35, 373–379.

Youngjohn, J. R., Lees-Haley, P. R., & Binder, L. M. (1999). Comment: Warning malingerers produces more sophisticated malingering. Archives of Clinical Neuropsychology, 14, 511–515.