
Archives of Clinical Neuropsychology 22 (2007) 37–44

Convergent and divergent validity of the Gordon Diagnostic System in adults

Noelle E. Carlozzi a,b,c,∗, Michael David Horner b,c

a Department of Psychological and Brain Sciences, Indiana University, 1101 East Tenth Street, Bloomington, IN 47405, United States
b Ralph H. Johnson Department of Veterans Affairs Medical Center, 109 Bee Street, Charleston, SC 29401, United States
c Medical University of South Carolina, Charleston, SC, United States

Accepted 6 August 2006

Abstract

The present study examined the convergent and divergent validity of the Gordon Diagnostic System (GDS) as a measure of attention in adults by examining correlations between GDS scores and scores on other attentional and non-attentional measures in 77 veterans (4 women and 73 men) referred for neuropsychological evaluation. Scores on the GDS were not significantly correlated with scores on either attentional or non-attentional measures. Participants were then divided into two groups, those who scored lower (<1 S.D. below the published normative mean) and higher on the GDS, for the Vigilance and Distractibility tasks separately. Participants with lower GDS scores on the Vigilance task performed more poorly on the Trailmaking Test, Part B than those with higher GDS scores. There were no other group differences on tests of attentional or non-attentional functions. These results do not provide strong support for the convergent and divergent validity of the GDS as a measure of attention in adults.
© 2006 National Academy of Neuropsychology. Published by Elsevier Ltd. All rights reserved.

Keywords: Neuropsychology; Neuropsychological assessment; Attention; Sustained attention; Vigilance; Psychometrics

The Gordon Diagnostic System (GDS; Gordon, 1983, 1987) is a computerized measure of sustained and focused attention that is often used in the assessment of children and adults. There are two adult tasks: Vigilance, in which participants respond by pushing a button only when a specified sequence of numbers appears on the screen, but not to distractor items; and Distractibility, which is identical except that extraneous numbers appear elsewhere on the screen during the task.

While several studies have examined the reliability and validity of the GDS in pediatric samples (Aylward, Verhulst, & Bell, 1988; DuPaul, Anastopoulos, Shelton, Guevremont, & Metevia, 1992; El-Sayed et al., 1999; Gordon & Mettelman, 1988; Grant, Ilai, Nussbaum, & Bigler, 1990; Grodzinsky & Barkley, 1999; Mayes & Calhoun, 2002; Mayes, Calhoun, & Crowell, 2001; Rielly, Cunningham, Richards, Elbard, & Mahoney, 1999; Wherry et al., 1993), few have done so in adult samples (Burg, Burright, & Donovick, 1995; Horner, Teichner, Waite, & Harvey, 2000; Rasile, Burg, Burright, & Donovick, 1995). The three studies that have examined the GDS in adult samples have demonstrated some preliminary support for its concurrent validity.



∗ Corresponding author. Tel.: +1 812 855 0318; fax: +1 812 856 3659. E-mail addresses: [email protected] (N.E. Carlozzi), [email protected] (M.D. Horner).

0887-6177/$ – see front matter © 2006 National Academy of Neuropsychology. Published by Elsevier Ltd. All rights reserved. doi:10.1016/j.acn.2006.08.012


Rasile et al. (1995) examined correlations among scores on the GDS (Standard Delay Correct, a measure of response inhibition; Vigilance Correct; and Distractibility Correct) and other measures of attention, including subtests from the WAIS-R (Digit Span, Arithmetic, and Digit Symbol), Kagan's Matching Familiar Figures (a measure of sustained attention), and the Stroop Color-Word Interference Task. Results provided some support for the convergent validity of the GDS (specifically Standard Delay Correct and Distractibility Correct) with other measures of attention, although correlations in this study were weak. The lack of significant relationships between GDS Vigilance Correct and other measures of attentional function was attributed to the restricted range of performance (or ceiling effect) on this task (i.e., most participants earn perfect or near-perfect scores).

Burg et al. (1995) also examined the convergent validity of total correct and commission error scores from the GDS (Vigilance, Distractibility, and Standard Delay) with other measures of attentional function (the Arithmetic, Digit Span Backwards, and Digit Symbol subtests of the WAIS-R; the Stroop; the Paced Auditory Serial Addition Test; and the Recanti Time Estimation Task) in adults with traumatic brain injury and a non-injured control group. Results suggested moderate correlations of the GDS tasks with the other measures of attentional function, although patterns of correlations differed for brain-injured versus non-injured participants. Taken together, these results provide support for the convergent and discriminant validity of the GDS.

Horner et al. (2000) found limited support for the convergent and divergent validity of the GDS in a sample of 42 male substance abuse outpatients. Pearson correlations demonstrated significant relationships between Vigilance Correct and Mental Control (WMS-R), Stroop Word, Stroop Interference, and Trails A and B, and between Distractibility Correct and Trails A and B. Significant relationships were not found with other measures of attentional function, including Digit Span, Stroop Color, and the Symbol Digit Modalities Test. There were also unexpected correlations with measures of memory and executive function.

The primary purpose of this paper was to examine the relationship between performance on the GDS and other measures of attentional function. We hypothesized significant relationships between scores on GDS Vigilance and Distractibility and scores on other measures of attentional function, but not with scores on measures of other domains of function (general intellectual functioning, executive functions, memory, language, and visuospatial functions). Relationships between scores on the GDS and measures of depression and anxiety, as well as demographic variables, were also examined to ensure that significant findings could not be attributed solely to such confounding variables. Thus, the purpose of the present study was to examine relationships among scores on the GDS and on other attentional and non-attentional measures. The utility of the GDS in diagnosing ADHD or other disorders was not addressed.

1. Method

1.1. Participants

Eighty-three veterans referred for clinical neuropsychological assessment at a southeastern VA medical center were administered the GDS as part of their clinical evaluation.
Six participants were eliminated due to insufficient effort during testing: three for failing the Test of Memory Malingering (Tombaugh, 1996), two for failing the Portland Digit Recognition Test (Binder, 1993), and one for questionable motivation as tasks became more demanding (determined by clinical judgment at the time of the assessment). Otherwise, all consecutively referred patients who had been administered the GDS (4 women and 73 men) were included in this study. One of these participants was administered only the Vigilance task, while all others were administered both the Vigilance and Distractibility tasks from the GDS. All participants were referred, at least in part, for an evaluation of potential attention deficits. Primary referral sources were Primary Care, Neurology, and Mental Health Services.

Participants ranged in age from 21 to 67 years (mean = 40.6, S.D. = 12.1). The majority of participants were Caucasian (87.0%), with 6.5% African American and 5.2% of other racial groups (due to missing data, numbers do not sum to 100%). Participants' education ranged from 8 to 19 years (mean = 13.2 years, S.D. = 2.1).

Patients received clinical diagnoses on the basis of this evaluation, and not on the basis of previous assessments by other hospital personnel. While many participants in our study received cognitive diagnoses, including ADHD (29.9%), learning disorder (13%), or cognitive disorder (5.2%), 16.9% did not receive a clinical diagnosis and 2.6% were inconclusive. Further, many participants did not exhibit objective attention difficulties and instead exhibited psychiatric problems, including mood disorders (46.8%), substance use disorders (19.5%), anxiety disorders (19.5%), personality disorders (6.5%), and psychotic disorders (2.3%; percentages do not sum to 100 because some participants were given multiple diagnoses).


Many participants were also characterized by multiple diagnoses: 37.7% had two diagnoses, 11.7% had three, and 6.5% had four or more. Further, a number of participants in our sample had a history of traumatic brain injury (40.3%), 1.3% had experienced a cerebrovascular accident, and 3.9% had other neurological disorders.

1.2. Measures

All tests were administered as part of a standard, routine clinical evaluation. The evaluations conducted in this study were based on a hypothesis-testing model; therefore, participants received individualized cognitive test batteries that varied as a function of the referral question. Due to this individualized approach to testing, the analyses reported in this paper include different numbers of participants. The measures chosen for inclusion in this study were selected both to represent a range of areas of cognitive functioning and to ensure that there were a sufficient number of participants to allow for meaningful analysis. All measures were administered and scored according to the standard instructions in each test's manual.

1.2.1. Attentional function

The Vigilance and Distractibility tasks of the Gordon Diagnostic System (GDS; Gordon, 1983) each consist of 30 presentations of target stimuli. Participants receive scores for the number of correctly identified targets and for the number of commission errors.

Many participants also completed the Digit Span subtest of the WAIS-R (Wechsler, 1981), WAIS-III (Wechsler, 1997a), or WMS-III (Wechsler, 1997b). Digit Span is considered an index of focused attention (Lezak, Howieson, & Loring, 2004; Mirsky, Anthony, Duncan, Ahearn, & Kellam, 1991; van Zomeren & Brouwer, 1994). Participants are read strings of numbers and are instructed either to repeat them (digits forward) or to repeat them in reverse order (digits backward). Correct responses earn one point each. Scores were adjusted to account for the different starting and ending points of the WAIS-R, WAIS-III, and WMS-III.

The Mental Control subtest of the WMS-III (Wechsler, 1997b) is also considered a test of focused attention (Lezak et al., 2004; van Zomeren & Brouwer, 1994). It consists of a series of timed attention tasks, including saying the alphabet, counting backward, and reciting the days of the week and months of the year forward and backward. Scoring includes both accuracy and bonus points related to response time. Scores can range from 0 to 40.

A subset of participants was administered the Trailmaking Test (Reitan & Wolfson, 1985). Trails A is a timed task in which the participant connects numbers on a page, in order, as quickly as possible; it is not only considered sensitive to deficits in sustained attention and information processing speed, but also requires visual scanning and psychomotor speed (Mirsky et al., 1991; Orsini, van Gorp, & Boone, 1988; Spreen & Strauss, 1998; van Zomeren & Brouwer, 1994).

1.2.2. Executive function

In Trails B, participants alternate, in order, between numbers and letters. Part B requires executive functions and processing speed as well as focused, sustained, and divided attention. It is sensitive to impairment in a broad range of neuropsychological domains (Gaudino, Geisler, & Squires, 1995; Lezak et al., 2004; Mirsky et al., 1991; Orsini et al., 1988; Spreen & Strauss, 1998; van Zomeren & Brouwer, 1994).

The Wisconsin Card Sorting Test (WCST; Heaton, Chelune, Talley, Kay, & Curtiss, 1993) is a measure of problem-solving and cognitive flexibility (Lezak et al., 2004; Mirsky et al., 1991).
Participants attempt to match each response card to the correct stimulus card without having been told the correct sorting strategy. For the purposes of this study, the total number of errors was used as the measure of problem-solving.

1.2.3. Memory

The Logical Memory-II subtest of the WMS-III (Wechsler, 1997b) examines memory for structured verbal information.

1.2.4. Intelligence

The Wechsler Abbreviated Scale of Intelligence (WASI; Wechsler, 1999) consists of four subtests (Vocabulary, Block Design, Matrix Reasoning, and Similarities) and was used to estimate general intelligence.


1.2.5. Language

Verbal Fluency (Spreen & Strauss, 1998) is a word-generation task in which participants generate as many words as possible that begin with specified letters (F, A, S). Participants are allotted 1 min for each letter. The score is the total number of words generated.

1.2.6. Visuospatial function

The copy trial of the Rey–Osterreith Complex Figure Test (ROCFT; Meyers & Meyers, 1995) is a measure of visuoconstruction (Lezak et al., 2004) in which participants copy a complex geometric design. Higher scores reflect better visuoconstruction.

1.2.7. Mood

Scales 2 and 7 of the Minnesota Multiphasic Personality Inventory-II (Hathaway & McKinley, 1983) were also administered to a subset of participants. Scale 2 assesses symptomatic depression, such as general dissatisfaction, feelings of hopelessness, anhedonia, and psychomotor retardation; it was included as an approximate index of depressive symptomatology. Scale 7 assesses symptoms such as excessive doubts, obsessions, compulsions, anxiety, and unrealistic fears; it was included as an approximate index of anxiety. These scales were included in the present study to ensure that relationships between scores on the GDS and other measures were not attributable simply to depressive or anxiety symptoms.

2. Results

The first set of analyses tested the hypothesis that scores on the GDS (Vigilance Correct, Vigilance Commissions, Distractibility Correct, and Distractibility Commissions) would be significantly correlated with scores on other tasks of attentional function, but not with scores on measures of other cognitive domains, depression, anxiety, or demographic variables. More specifically, we hypothesized that Vigilance Correct and Distractibility Correct would be positively correlated with Digit Span and Mental Control, and negatively correlated with Trails A. We also hypothesized that Vigilance Commissions and Distractibility Commissions would be negatively correlated with Digit Span and Mental Control, but positively correlated with Trails A. We did not anticipate relationships between any of the GDS scores and Trails B, WCST, WASI FSIQ, Verbal Fluency, ROCFT, LM-II, or MMPI-II Scales 2 or 7.

Table 1 shows the performance of the sample on the neuropsychological tests. Correlations among scores on the GDS and other measures are shown in Table 2. Given the large number of comparisons, a Bonferroni correction was employed, lowering the acceptable alpha level to p = 0.004. Contrary to our hypotheses, there were no significant relationships between any of the GDS scores and other attentional measures. There were, however, some unanticipated relationships between scores on the GDS and non-attentional and mood measures. Specifically, there was a significant negative correlation between Distractibility Commissions and WASI FSIQ (r = −0.70, p = 0.002). Higher Vigilance Correct scores were associated with better performance on Trails B (r = −0.28, p = 0.02), and fewer Vigilance Commissions were associated with higher WASI FSIQ (r = −0.52, p = 0.03) and lower scores on MMPI-II Scale 2 (r = 0.28, p = 0.03); however, these findings did not survive the Bonferroni correction.

Inspection of Vigilance Correct scores indicated a ceiling effect, with most participants making few or no errors. Because such ceiling effects can obscure relationships, an additional set of analyses was undertaken. Participants were divided into two groups based on Vigilance Correct score: those with lower scores (at least 1 S.D. below the published normative mean; N = 24) and those with higher scores (the remaining participants, N = 53). Other cut points considered were a median split and 1.5 and 2 S.D. below the mean, but these were not utilized because they resulted in inadequate sample sizes. The two groups were then compared on cognitive, mood, and demographic variables using a series of independent-samples t-tests. (Analyses were not performed for WASI FSIQ due to inadequate sample sizes.) Given the large number of comparisons, Bonferroni corrections were again employed, lowering the acceptable alpha level to p = 0.004.

Another set of analyses was performed in which participants were divided into high and low scorers based on Distractibility Correct: those with lower scores (at least 1 S.D. below the published normative mean; N = 16) and those with higher scores (the remaining participants; N = 60). (Analyses were not performed for WASI FSIQ, WCST, and Verbal Fluency due to inadequate sample sizes.) Bonferroni corrections lowered the acceptable alpha level to p = 0.005.
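To make the analytic steps above concrete, the following sketch illustrates how Pearson correlations with pairwise deletion, the Bonferroni-adjusted alpha, and the low/high group comparison could be computed. It is an illustrative sketch only: the data frame, column names, and normative values are hypothetical placeholders, not the authors' actual data or analysis code.

```python
# Hypothetical sketch of the reported analyses (not the authors' code).
# Assumes a pandas DataFrame `df` with one row per participant and columns
# such as "vigilance_correct" and "trails_b"; tests not administered are NaN.
import pandas as pd
from scipy import stats

def pearson_pairwise(df: pd.DataFrame, x: str, y: str):
    """Pearson r, n, and p using only participants who completed both measures."""
    pair = df[[x, y]].dropna()
    r, p = stats.pearsonr(pair[x], pair[y])
    return r, len(pair), p

# Bonferroni-style adjustment: 0.05 divided by the number of comparisons
# per GDS score (roughly 13 here) gives approximately the alpha = 0.004 used above.
n_comparisons = 13
alpha = 0.05 / n_comparisons  # ~0.004

# Example call (column names are assumptions):
# r, n, p = pearson_pairwise(df, "vigilance_correct", "trails_b")
# significant = p < alpha

# Low/high split on Vigilance Correct: "low" = at least 1 S.D. below the
# published normative mean (values below are illustrative placeholders only).
norm_mean, norm_sd = 29.0, 1.5  # placeholder normative values, not from the paper
# low = df["vigilance_correct"] <= norm_mean - norm_sd
# t, p = stats.ttest_ind(df.loc[low, "trails_b"].dropna(),
#                        df.loc[~low, "trails_b"].dropna())
```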


Table 1
Means (and S.D.) of neuropsychological variables for the entire sample (N = 77)

Measures                        M (S.D.)

Attentional functions
  Vigilance Correct             28.3 (3.1)
  Vigilance Commissions         1.9 (7.9)
  Distractibility Correct       23.7 (8.2)
  Distractibility Commissions   3.4 (9.4)
  Digit Span                    16.2 (3.9)
  WMS-III Mental Control        25.11 (7.5)
  Trails A                      33.8 (14.8)

Non-attentional functions
  Trails B                      76.1 (31.1)
  WCST                          36.0 (26.0)
  WASI FSIQ                     102.0 (10.2)
  Verbal Fluency                35.9 (9.9)
  ROCFT                         31.31 (4.0)
  LM-II                         21.8 (8.2)

Affective
  MMPI-II Scale 2               68.9 (14.1)
  MMPI-II Scale 7               59.7 (15.8)

Note: WMS-III: Wechsler Memory Scale-III; WCST: Wisconsin Card Sorting Test; WASI FSIQ: Wechsler Abbreviated Scale of Intelligence Full Scale IQ; COWAT: Controlled Oral Word Association Test; ROCFT: Rey–Osterreith Figure Copy Test; LM-II: Logical Memory II; MMPI-II: Minnesota Multiphasic Personality Inventory.

Table 2
Pearson correlations of scores on the Gordon Diagnostic System with scores on measures of attention, other cognitive functions, mood, and demographic factors

Measures                     Vigilance Correct        Vigilance Commissions    Distractibility Correct    Distractibility Commissions

Demographics
  Age                        0.10 (77), p = 0.37      0.03 (77), p = 0.80      −0.17 (77), p = 0.14       0.04 (77), p = 0.72
  Education                  0.09 (77), p = 0.46      −0.11 (77), p = 0.35     0.18 (76), p = 0.11        −0.11 (76), p = 0.36

Intercorrelations
  Vigilance Correct          –                        −0.74 (77), p = 0.001    0.08 (76), p = 0.49        −0.67 (76), p = 0.001
  Vigilance Commissions      –                        –                        −0.12 (76), p = 0.30       0.95 (76), p = 0.001
  Distractibility Correct    –                        –                        –                          −0.27 (76), p = 0.017

Attentional functions
  Digit Span                 0.09 (57), p = 0.51      −0.12 (57), p = 0.36     −0.20 (56), p = 0.15       −0.11 (56), p = 0.41
  WMS-III Mental Control     0.06 (47), p = 0.68      −0.22 (47), p = 0.14     0.07 (46), p = 0.63        −0.16 (46), p = 0.29
  Trails A                   −0.17 (68), p = 0.17     0.15 (68), p = 0.23      −0.18 (67), p = 0.15       0.22 (67), p = 0.07

Non-attentional functions
  Trails B                   −0.28 (68), p = 0.02     0.21 (68), p = 0.08      −0.21 (67), p = 0.10       0.23 (67), p = 0.07
  WCST                       −0.04 (28), p = 0.85     −0.02 (28), p = 0.93     −0.29 (28), p = 0.14       0.01 (28), p = 0.97
  WASI FSIQ                  0.03 (17), p = 0.91      −0.52 (17), p = 0.03     0.22 (17), p = 0.41        −0.70 (17), p = 0.002
  Verbal Fluency             0.06 (31), p = 0.75      −0.02 (31), p = 0.91     −0.06 (31), p = 0.76       −0.02 (31), p = 0.92
  ROCFT                      0.18 (43), p = 0.24      −0.20 (43), p = 0.19     0.03 (43), p = 0.84        −0.17 (43), p = 0.28
  LM-II                      0.24 (38), p = 0.15      −0.26 (38), p = 0.12     0.17 (37), p = 0.31        −0.19 (37), p = 0.27

Affective
  MMPI-II Scale 2            −0.07 (59), p = 0.60     0.24 (59), p = 0.07      −0.05 (58), p = 0.72       0.07 (58), p = 0.62
  MMPI-II Scale 7            −0.16 (59), p = 0.23     0.28 (59), p = 0.03      −0.25 (58), p = 0.06       0.23 (58), p = 0.08

Note: Numbers in parentheses represent the sample size for each analysis. WMS-III: Wechsler Memory Scale-III; WCST: Wisconsin Card Sorting Test; WASI FSIQ: Wechsler Abbreviated Scale of Intelligence Full Scale IQ; COWAT: Controlled Oral Word Association Test; ROCFT: Rey–Osterreith Figure Copy Test; LM-II: Logical Memory II; MMPI-II: Minnesota Multiphasic Personality Inventory.


Table 3
Means and standard deviations of scores on neuropsychological tests for high and low scorers on the GDS adult Vigilance and adult Distractibility tasks

                          Vigilance Correct                          Distractibility Correct
                          High scorers        Low scorers            High scorers        Low scorers
Measure                   N    M (S.D.)       N    M (S.D.)          N    M (S.D.)       N    M (S.D.)

Digit Span                44   14.1 (3.7)     13   14.1 (5.3)        43   13.6 (3.9)     13   15.7 (4.3)
WMS-III Mental Control    33   26.2 (7.0)     14   22.5 (8.2)        35   24.8 (8.2)     11   26.1 (5.0)
Trails A                  44   31.2 (11.3)    24   38.6 (19.1)       53   33.1 (13.7)    14   36.4 (19.4)
Trails B                  44   68.3 (24.0)    24   90.4 (37.5)*      53   74.5 (32.0)    14   82.9 (28.6)
WCST                      16   32.8 (25.4)    12   40.3 (27.2)       –    –              –    –
Verbal Fluency            15   37.5 (9.4)     16   34.4 (10.4)       –    –              –    –
ROCFT                     28   31.6 (4.2)     15   30.8 (3.8)        32   30.9 (4.3)     11   32.5 (3.0)
LM-II                     28   22.4 (8.5)     10   20.2 (7.3)        –    –              –    –
MMPI-II Scale 2           43   68.0 (13.9)    16   71.2 (14.9)       46   70.4 (13.0)    12   65.1 (16.9)
MMPI-II Scale 7           43   67.2 (15.2)    16   74.3 (11.0)       46   68.7 (14.2)    12   71.7 (15.8)

Note: WMS-III: Wechsler Memory Scale-III; WCST: Wisconsin Card Sorting Test; WASI FSIQ: Wechsler Abbreviated Scale of Intelligence Full Scale IQ; COWAT: Controlled Oral Word Association Test; ROCFT: Rey–Osterreith Figure Copy Test; LM-II: Logical Memory II; MMPI-II: Minnesota Multiphasic Personality Inventory. Dashes indicate comparisons not performed due to inadequate sample sizes.
* p < 0.004.

Contrary to hypotheses, high scorers on Vigilance Correct outperformed low scorers on Trails B (see Table 3). No other significant group differences were found (Table 3).

3. Discussion

The present study examined the convergent and divergent validity of the GDS. Contrary to hypotheses, scores on the GDS were not significantly related to scores on other attentional measures. Higher Vigilance Correct scores were associated only with better performance on Trails B. Correlational analyses indicated that fewer Distractibility Commissions were associated with higher FSIQ, but this relationship might have been due to the restricted range of GDS scores and was not seen in the analyses comparing low and high scorers on the GDS.

The relative lack of relationships between GDS scores and scores on other attentional measures was unexpected. It is possible that the GDS is primarily sensitive to a dimension of attention that is not examined by the other measures. That is, the GDS assesses sustained and focused attention, possibly requiring processes different from those tapped by the other attentional measures. Another possibility is that the lack of relationships might simply be an artifact of a highly specialized sample, since all participants were referred, at least in part, for evaluation of possible attention deficits. Alternatively, it is possible that the GDS, due to its psychometric properties, might not be adequately sensitive to attentional impairment in adults. It is also possible that the small sample sizes did not provide enough power to detect relationships.

The significant relationship between vigilance and executive function (as measured by Trails B) warrants discussion. In general, performance on Trails B is sensitive to many different types of neuropsychological dysfunction, and poor performance might reflect deficits in visual scanning, attention, psychomotor speed, set shifting, or maintaining a complex response set (Lezak et al., 2004). Therefore, one possible explanation for the relationship between Trails B and Vigilance Correct is that both might require sustained attention for completion. Another, more likely explanation is that the analysis examining Trails B and Vigilance Correct had greater power to detect differences, as it had the largest sample size in this series of analyses; the other analyses may not have had enough power to detect differences. The overall sample size was relatively small; additional relationships might have emerged had a larger sample been examined. It is also possible that Trails B and Vigilance both tap into some general cognitive capacity, although this explanation is unlikely in the absence of other significant relationships with GDS scores. Finally, it is possible that this is simply a spurious finding due to the large number of tests that were conducted, and it therefore needs replication.
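As a rough illustration of the power point made above, the sketch below approximates the power to detect a moderate correlation at the Bonferroni-corrected alpha across the range of sample sizes in Table 2, using the standard Fisher z approximation. The assumed true correlation (rho = 0.30) is a hypothetical value for illustration, not a quantity reported in the paper.

```python
# Rough power check using the Fisher z approximation for a two-sided test
# of a Pearson correlation. Illustrative only; not an analysis from the paper.
import numpy as np
from scipy import stats

def correlation_power(rho: float, n: int, alpha: float) -> float:
    """Approximate power to detect a true correlation rho at sample size n."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    z_effect = np.arctanh(rho) * np.sqrt(n - 3)
    return (1 - stats.norm.cdf(z_crit - z_effect)) + stats.norm.cdf(-z_crit - z_effect)

alpha = 0.004                 # Bonferroni-corrected alpha used in the paper
for n in (17, 28, 38, 68):    # range of sample sizes appearing in Table 2
    print(n, round(correlation_power(rho=0.30, n=n, alpha=alpha), 2))
# With an assumed true correlation of 0.30, power is well below conventional
# levels at n = 17 and still modest (~0.35) at n = 68, consistent with the
# caution above about underpowered comparisons.
```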


Many participants were referred, in part, for assessment of ADHD or other attentional dysfunction. This specialized referral question, and the fact that many of the participants in our sample were experiencing psychiatric problems, makes it difficult to generalize findings to other samples. In addition, participants were generally between the ages of 30 and 50 and had roughly a high school education, indicating a relatively homogeneous sample; this further limits generalizability to other age groups or education levels. Other limitations are that some of the sample sizes for the group analyses were small and potentially underpowered, and that the sample sizes for each analysis differed due to the individually tailored batteries. These variable sample sizes make it difficult to interpret the demographic and clinical characteristics of the sub-samples, since they may not accurately reflect the characteristics of the overall clinical sample.

The present findings do not provide strong support for the convergent and divergent validity of the GDS as an attention measure in adults. Future research could examine whether the GDS is, in fact, sensitive to specific aspects of attention that are not captured by other attention tests, or whether it does not adequately discriminate attention difficulties in adults. Data from the present sample suggest that performance on the GDS should be interpreted with caution when assessing attention in adults.

References

Aylward, G. P., Verhulst, S. J., & Bell, S. (1988). The relationship between the GDS and DSM-III diagnoses: Introduction to the Accuracy Index (AI). ADHD/Hyperactivity Newsletter, 11, 2–4.
Binder, L. (1993). Portland Digit Recognition Test manual (2nd ed.). Portland, Oregon: L. Binder.
Burg, J. S., Burright, R. G., & Donovick, P. J. (1995). Performance data for traumatic brain-injured subjects on the Gordon Diagnostic System (GDS) tests of attention. Brain Injury, 9(4), 395–403.
DuPaul, G. J., Anastopoulos, A. D., Shelton, T. L., Guevremont, D. C., & Metevia, L. (1992). Multimethod assessment of attention-deficit hyperactivity disorder: The diagnostic utility of clinic-based tests. Journal of Clinical Child Psychology, 21(4), 394–402.
El-Sayed, E., van't Hooft, I., Fried, I., Larsson, J.-O., Malmberg, K., & Rydelius, P.-A. (1999). Measurements of attention deficits and impulsivity: A Swedish study of the Gordon Diagnostic System. Acta Paediatrica, 88, 1262–1268.
Gaudino, E. A., Geisler, M. W., & Squires, N. K. (1995). Construct validity in the Trail Making Test: What makes Part B harder? Journal of Clinical and Experimental Neuropsychology, 17, 529–535.
Gordon, M. (1983). The Gordon Diagnostic System. DeWitt, NY: Gordon Systems.
Gordon, M. (1987). How is a computerized attention test used in the diagnosis of attention deficit disorder? The Young Hyperactive Child, 19(1/2), 53–64.
Gordon, M., & Mettelman, B. B. (1988). The assessment of attention. I. Standardization and reliability of a behavior-based measure. Journal of Clinical Psychology, 44(5), 682–690.
Grant, M. L., Ilai, D., Nussbaum, N. L., & Bigler, E. D. (1990). The relationship between continuous performance tasks and neuropsychological tests in children with attention-deficit hyperactivity disorder. Perceptual and Motor Skills, 70, 435–445.
Grodzinsky, G. M., & Barkley, R. A. (1999). Predictive power of frontal lobe tests in the diagnosis of attention deficit hyperactivity disorder. The Clinical Neuropsychologist, 13(1), 12–21.
Hathaway, S. R., & McKinley, J. C. (1983). The Minnesota Multiphasic Personality Inventory manual. New York: Psychological Corporation.
Heaton, R. K., Chelune, G. J., Talley, J. L., Kay, G. G., & Curtiss, G. (1993). Wisconsin Card Sorting Test manual: Revised and expanded. Odessa, FL: Psychological Assessment Resources.
Horner, M. D., Teichner, G., Waite, R. B., & Harvey, R. T. (2000). Convergent and divergent validity of the Gordon Diagnostic System. Journal of the International Neuropsychological Society, 6, 147.
Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment (4th ed.). New York: Oxford University Press.
Mayes, S. D., & Calhoun, S. L. (2002). The Gordon Diagnostic System and the WISC-III Freedom from Distractibility index: Validity in identifying clinic-referred children with and without ADHD. Psychological Reports, 91, 575–587.
Mayes, S. D., Calhoun, S. L., & Crowell, E. W. (2001). Clinical validity and interpretation of the Gordon Diagnostic System in ADHD assessments. Child Neuropsychology, 7(1), 32–41.
Meyers, J. E., & Meyers, K. R. (1995). Rey Complex Figure Test and Recognition Trial. Odessa, FL: Psychological Assessment Resources.
Mirsky, A. F., Anthony, B. J., Duncan, C. C., Ahearn, M. B., & Kellam, S. G. (1991). Analysis of the elements of attention: A neuropsychological approach. Neuropsychology Review, 2, 109–145.
Orsini, D. L., van Gorp, W. G., & Boone, K. B. (1988). The neuropsychology casebook. New York: Springer-Verlag.
Rasile, D. A., Burg, J. S., Burright, R. G., & Donovick, P. J. (1995). The relationship between performance on the Gordon Diagnostic System and other measures of attention. International Journal of Psychiatry, 30(1), 35–45.
Reitan, R. M., & Wolfson, D. (1985). The Halstead–Reitan neuropsychological test battery: Theory and clinical interpretation. Tucson, AZ: Neuropsychology Press.
Rielly, N. E., Cunningham, C. E., Richards, J. E., Elbard, H., & Mahoney, W. J. (1999). Detecting attention deficit hyperactivity disorder in a communications clinic: Diagnostic utility of the Gordon Diagnostic System. Journal of Clinical and Experimental Neuropsychology, 21(5), 685–700.
Spreen, O., & Strauss, E. A. (1998). A compendium of neuropsychological tests: Administration, norms, and commentary (2nd ed.). London: Oxford University Press.
Tombaugh, T. N. (1996). Test of Memory Malingering. Los Angeles: Western Psychological Services.


van Zomeren, A. H., & Brouwer, W. H. (1994). Clinical neuropsychology of attention. New York: Oxford University Press.
Wechsler, D. (1981). Wechsler Adult Intelligence Scale—Revised. New York, NY: The Psychological Corporation.
Wechsler, D. (1997a). Wechsler Adult Intelligence Scale (3rd ed.). San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1997b). Wechsler Memory Scale (3rd ed.). San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1999). The Wechsler Abbreviated Scale of Intelligence (WASI). San Antonio, TX: The Psychological Corporation.
Wherry, J. N., Paal, N., Jolly, J. B., Adam, B., Holloway, C., Everett, B., & Vaught, L. (1993). Concurrent and discriminant validity of the Gordon Diagnostic System: A preliminary study. Psychology in the Schools, 30, 29–36.