Person. individ. Diff. Vol. 9, No. 1, pp. 35-49, 1988
Printed in Great Britain. All rights reserved

0191-8869/88 $3.00 + 0.00
Copyright © 1988 Pergamon Journals Ltd

ABILITIES INVOLVED IN PERFORMANCE ON COMPETING TASKS

GERARD FOGARTY and LAZAR STANKOV
Department of Psychology, The University of Sydney, Sydney, NSW 2006, Australia

(Received 21 August 1986)
Summary: It has been suggested that situations requiring the division of attention between competing activities can tap abilities which are central to cognitive functioning. This paper attempted to determine whether there are identifiable characteristics in the single tests that will help to predict changes in general factor loadings when they are presented as components of competing tasks. The framework for the study was provided by the theory of fluid (Gf) and crystallized (Gc) intelligence. A battery of single and competing tasks was presented to 126 subjects. The competing tasks represented a variety of within and across factor combinations from different levels of the Gf/Gc hierarchy. Modality of presentation was also varied in some combinations. The results indicate that single and competing tasks measure the same broad abilities of the Gf/Gc theory and that general factor loadings can decrease as well as increase in the competing task situation. There is also evidence that these tendencies depend, to some extent, on the degree to which the tasks require the same cognitive factors or use the same sensory modalities. Overall, it is assumed that competing tasks do make greater demands on general ability but that, unless the requirements of the single tests themselves are relatively small, performance breakdown, with an accompanying decrease in general factor loadings, is the likely outcome.
One of the more praiseworthy trends in the recent history of psychology has been an increased willingness among cognitive and differential psychologists to seek inspiration in theories and techniques developed outside their own fields. In particular, workers in these two traditionally separate fields now realise that they have mutual interests and that there is much to be gained by studying the literature and techniques of the other field. One offshoot of this research into the relationship between cognitive and psychometric variables has been a renewal of interest in the nature of intelligence.

Psychometricians have never been able to agree on explanations of the tendency for all cognitive tasks to correlate positively. In fact, two distinct interpretations of this tendency, and of the related presence of a general factor among cognitive tests, exist in the literature today. Historically, the first view to emerge was that abilities are unitary traits reflecting certain basic aspects of cognitive functioning. Spearman's interpretation of the nature of 'g', Thurstone's belief that primary mental abilities represented sophisticated versions of mental faculties and, among contemporary psychologists, Jensen's (1982) and Eysenck's (1982) statements about psychometric abilities, belong to this category. An alternative view can be traced to G. Thomson (see Thomson, 1948), who showed that the concept of a general factor can be derived from the assumption that there is a large number of separate abilities rather than a single unitary construct. E. Thorndike (1932) pointed out that stimulus-response bonds may qualify as these small and numerous abilities. Humphreys (1979) is a contemporary representative of this point of view. The theory of fluid and crystallized intelligence is an extension of this conceptualization. Broad abilities of this theory (Gf, Gc, Gv, Ga, etc.) should be thought of as organizations or clusters of these elementary abilities (see Horn, 1985).
These clusters have different genetic underpinnings, different courses of development, different implications for adaptation, etc. For these reasons, these different clusters should be considered separately: any experimental manipulation should be expected to affect some but not all of them. They should not be thought of as unitary constructs either. Being chiefly concerned with the study of elementary information processing units, cognitive psychology belongs naturally to the second view of the organization of abilities. Nevertheless, it
has yielded one or two broad concepts which can be related directly to the notion of general intelligence in the 'unitary construct' sense of this word. The most important of these is the concept of attention. Some developments in attentional theory promise to be of value in our attempts to understand and measure intelligence. Cognitive psychologists no longer talk about attention as though it were simply a filter responsible for selecting some messages for processing and rejecting others. These days, following Kahneman (1973), attentional processes are seen as drawing upon a pool (or pools) of 'attentional resources'. Questions about the span, breadth and intensity of attention are now answered by reference to the distribution of these attentional resources. Hunt (1980), in searching for common ground between the cognitive and psychometric movements, draws a parallel between the concept of attentional resources and Spearman's g. He argues that it is the only information processing concept that could be described as central to cognitive activity and hence the only information processing term which could be regarded as the counterpart of g. The implications of such a comparison need to be explored. The introduction of the term "attentional resources" does suggest the possibility of new measurement operations for abilities. For example, the suggestion is now that, in order to measure the general factor, one need not be certain that the tests measure the eduction of relations and correlates, as in the psychometric tradition, but simply ensure that considerable demand is placed upon attentional resources. A common technique for varying the demand for attentional resources involves the simultaneous presentation of two cognitive tasks. These 'competing tasks' represent an intriguing new domain in differential psychology.
Their appeal lies in the fact that they enable one to take two standard tasks and, by virtue of a simple experimental manipulation, create a situation which can best be described as involving high information load. This view, of course, stems from a conception of intelligence as a unitary construct. In fact, the supposition that competing tasks are an important way to study individual differences in cognitive performance can also derive from the view that broad abilities are samples from a pool of smaller abilities. For example, it may be argued that this experimental manipulation introduces an element of novelty. Although we encounter a vast range of competing tasks in the course of our lives, it is usually the case that these tasks are initially both confusing and disorienting. Very often the components are simple tasks that we have learned to perform without any great expenditure of effort. When presented in competition with another task, however, the normal execution strategies are thrown into disarray. We have to develop new strategies, or 'aids', that enable us to combine the tasks. Many theorists (e.g. Sternberg, 1981) regard this process of strategy formation, of restructuring of the groupings of smaller abilities, as a hallmark of intelligent behavior. It is apparent that competing tasks are important for the measurement of the clusters of abilities that we have come to call intelligence, irrespective of whether, as psychometricians, we accept the 'unitary construct' or the 'sampling of elementary abilities' position. Similarly, they are important whether or not, as cognitive psychologists, we want to postulate a general construct like attentional resources or subscribe to a position which does not have a use for such a construct. The suggestion that measures taken under conditions of divided attention provide a good estimate of general intelligence is certainly not new.
The literature on individual differences contains studies, some dating back to the last century, which have included split attention tasks. Furthermore, some prominent theorists (Spearman, 1927; Thurstone, 1944) have indicated their belief that such tasks tap processes fundamental to cognitive functioning. Given this background, it is surprising that measures of divided attention have not become an integral part of established psychometric batteries. Only in recent years has any serious attempt been made to study the relationship between measures of divided attention and traditional measures of intelligence test batteries. There are a number of possible reasons for the apparent lack of interest in these tasks. Firstly, tests requiring divided attention can be difficult to construct and administer. They can also be very confusing for the testees and, as a consequence, are potentially subject to a greater degree of measurement error than normal tests. Secondly, theorists are uncertain as to the nature of the abilities required in the divided attention situation. Some feel that a separate 'time-sharing' factor is involved whilst others argue that no such factor exists. Continuing debate over this hypothetical time-sharing ability points to one of the major problems confronting those interested in the practical application of these measures: for a variety of reasons,
competing tasks do not appear to behave in a predictable fashion. This lack of predictability makes them a difficult subject for research. Lansman, Poltrock and Hunt (1983) go so far as to state that, with unpractised subjects, the ability to perform two tasks simultaneously is specific to the particular combination of tasks employed.

Aims of this study
In spite of the above difficulties, there are good reasons for undertaking a thorough investigation of the psychometric properties of competing tasks. Evidence from both the cognitive and psychometric fields converges to indicate that they provide a means of experimentally affecting the amount of shared variance (Fogarty and Stankov, 1982; Stankov, 1983a). This tendency was recently demonstrated in a series of studies which found that components of competing tasks can have higher loadings on the general factor of a particular battery of tests than their single counterparts (Fogarty and Stankov, 1982; Stankov, 1983b). These same studies, however, have also given some indication that not all components of competing tasks can be expected to demonstrate this tendency. In their 1982 study, Fogarty and Stankov noted that in two of the six competing tasks used there was no increase in correlations. The reasons for this non-compliance were not clear, although it was observed that in these two instances the component tasks were highly correlated as single tests. A more recent study (Stankov and Fogarty, 1986) involving competing fluid intelligence tasks as a means of studying practice effects confirms that the increase in correlation will not always be noted in the first application of the competing task. What is lacking in this area is a study which includes a representative sample of tests capable of yielding adequate measures of the major broad ability clusters along with a number of quite diverse competing tasks. Most previous studies involving competing performance have included a limited sample of tasks chosen in a relatively non-systematic manner from the cognitive domain, perhaps on the assumption that one competing task will serve as well as another (see Peterson, 1969).
The present study seeks to reverse this tendency by including a selection of tasks representing a variety of within and across factor combinations from different levels of an established theory of intelligence: the theory of fluid (Gf) and crystallized (Gc) intelligence (see Cattell, 1971; Horn, 1980, 1985; Horn and Stankov, 1982). In order to accomplish this aim, the present study employs not only combinations of auditory tests, as in the past, but also combinations involving both visual and auditory presentation. This will allow us to consider the importance of modality of presentation which, according to Norman (1976), is an important moderator of performance on competing tasks. Our aims in this study can best be summarized in the following three points:

1. To establish if components of the competing tasks measure the same broad abilities of the Gf/Gc theory as those measured by the corresponding single tests;
2. To explore conditions under which competing tasks, in comparison to the single tests, will have changed loadings on the higher-order general factor of the present battery of tests;
3. To gain information about the possible role of sensory modalities in performance on competing tasks.

METHOD

Subjects
A total of 126 subjects was involved in the study. Roughly one-third of these were Psychology I students who participated to gain extra marks in their course. The remainder of the subjects were drawn from the adult population around Sydney. Subjects who reported vision or hearing defects were excluded. Throughout the duration of the study every effort was made to ensure that the subject pool was quite varied with respect to age and education level. The average age of the subject pool was 26 years with a standard deviation of 9.9 years. The average education level was just below university standard.

Description of test battery
The battery of tests used in this study consisted of 20 single and 14 competing tasks. The 20
single tests included sixteen which were used as components of competing tasks plus four additional markers for the broad factors: general fluid intelligence (Gf), general crystallised intelligence (Gc), general visual function (Gv), and general auditory function (Ga). The construction of these various markers posed considerable problems since the tests had to be suitable for simultaneous presentation with another test. This problem was overcome by selecting tests from those described in the literature on individual differences and then modifying them to suit the requirements of this experiment. Essentially the same procedure was adopted in the Fogarty and Stankov (1982) study, where the presentation requirements again imposed limitations on the format of the tests.

Single tests

Unless stated otherwise, the following conditions apply to these single tests:
1. The letter 'A' attached to the abbreviation of a test name indicates that it was presented in auditory form. The suffix 'V' indicates visual presentation.
2. The test consisted of 15 items.
3. The test was administered by computer.
4. There were no time limits; the computer did not proceed to the next item until the subject responded to the current item.
5. The stimuli for the various items were presented at the rate of one per second (sequential presentation).

A brief description of each test follows.
1. Number Series (NSA). Subjects are presented with a series of six numbers. Their task is to work out the rule governing the formation of the series and to type in the next number in the series. Source: this study. Original version: Thurstone (1938). Example:
"1 2 4 8 16 32" (Ans = '64')
2. Number Series (NSV). Parallel form of the above test presented visually.
3. Letter Reordering (LR). The letters R, S, T are presented to the subject. They may appear in any order. The subject has to note the order in which they appear. The letters are then repeated, but usually in a different order, e.g. S,T,R. The subject has to give the order on the second presentation using the first presentation as a basis for comparison. The answer, in the example, would be '231'. Source: this study. Original version: Stankov and Horn (1980).
4. Tonal Counting (TC). Sequences of five, six, or seven notes are presented. Each sequence is formed from repetitions of three clearly identifiable notes: a low note, a medium note, and a high note. The subject has to report how many times the low note occurs, followed by the number of medium notes and, finally, the number of high notes. Source: this study. Original version: Stankov (1983b).
5. Sets (STA). Two sets of three letters are presented, e.g. D,R,O - A,R,O. Two of the letters are the same in each set. Subjects must name both the letter that is missing and the letter that replaces it. In this example, the answer is 'DA'. Source: this study. Original version: Crawford and Stankov (1983).
6. Sets (STV). Parallel form presented visually.
7. Matrices (MATR). The subject has to choose from among five options the design which completes a matrix. Presented in paper and pencil form. Source: Cattell's (1949) Culture Fair Test of Intelligence, Scale A, Level 3.
8. Spelling (SPA). Five-, six-, and seven-letter words are spelled out; the subject must indicate whether the word is spelled correctly by typing 'Y' or 'N'. They are not given the word, so they must decide what the word is and whether or not it is spelled correctly. The words were selected from various spelling texts and, in many cases, involved violations of certain rules. Some irregular words were also included. Care was taken to avoid particularly obscure words. Source: this study.
Example: "L I L I E S" (Ans = 'Y')

9. Spelling (SPV). Parallel form presented visually.
10. Similarities (SMA). Subjects have to choose, from among four words, two that have similar meanings. Source: this study. Original version: Ekstrom, French, Harman and Dermen (1976). Example:

"TRY GET ATTEMPT PLAY" (Ans = '13')
11. Similarities (SMV). Parallel form presented visually.
12. Scrambled Words (SW). Subjects have to rearrange four letters to form a word. An attempt was made to vary the difficulty level of the items by choosing words from different sections of a word frequency list (Kucera and Francis, 1967). Source: this study. Original: Ekstrom et al. (1976). Example: "E T R E" (Ans = 'tree')
13. Hidden Words (HWA). A string of letters is presented. The string is either six letters (first 10 items) or eight letters (last 5 items) in length. Each string contains one four-letter word which the subject must identify and report. The word itself is never scrambled although it can appear in any part of the string. Source: this study. Original: Ekstrom et al. (1976). Example: "S C R I S E" (Ans = 'rise')
14. Hidden Words (HWV). Parallel form presented visually.
15. Esoteric Word Analogies (ANAL). Subjects are asked to select, from among six options, the term that completes a verbal analogy. There were 30 items in this test. Subjects were asked to do as many as possible in a five-minute period. Presented in paper and pencil form. Source: J. L. Horn, unpublished test. Example: "Fire is to Hot as Ice is to (1) Pole (2) White (3) Cold (4) Cream (5) Born (6) NA" (Ans = '3')
16. Memory Span (MS). The task is to reproduce digit strings which increase in length until the subject makes two successive errors. The score is the length of the longest correctly reproduced string. (Taken from the Wechsler Adult Intelligence Scale and administered in the usual fashion.)
17. Hidden Figures (HF). Two figures were presented at the top left and the top right of the screen respectively. A large, more complex figure appeared in the lower half of the screen. The display lasted for 5 sec, after which time the subjects made one of the following responses: '0' - neither figure present in the larger pattern; '1' - figure 1 present; '2' - figure 2 present; or '12' - both figures present. Source: this study.
Original version: Ekstrom et al. (1976).
18. Card Rotations (CR). Subjects must compare a form with a model and decide whether the form can be rotated so that it matches the model. Subjects were asked to solve as many as possible in a 3-min period. Paper and pencil test. Source: Ekstrom et al. (1976).
19. Tonal Memory (TM). A sequence consisting of three, four or five tones is presented. The sequence is then repeated with one of the tones changed. The subject must identify the position of the tone that changed. Source: this study. Original version: Seashore, Lewis and Saetveit (1960).
20. Chord Decomposition (CD). A three-note chord is followed by three individual notes. The subject has to decide whether the notes are the same as those played in the chord (S), or whether one has moved up (U) or down (D). Source: this study. Original version: Wing (1962).

Competing tasks

There are a number of techniques of presentation that will lead to competition between tasks. In many experiments some effort is made to present the elements which make up the two tasks simultaneously, so that the subject is faced with the additional task of coping with competition at an input as well as at a processing level. This form of presentation was used wherever possible in this study. That is, the competing tasks involved not just the simultaneous presentation of two items but, wherever possible, the simultaneous presentation of the elements comprising the items. Apart from these construction details, there are some other important features which should be
noted here. The most obvious way to study the effect of competing task performance is to take the single tests and combine them in various ways. Direct comparisons could then be made between the two conditions. Unfortunately, this is not really possible in an individual differences study where it is important to achieve a reasonable spread of scores in both single and competing task conditions. If it were possible to construct single tests that exhibited the desired spread of scores, it is almost certain that these tests would prove to be much too difficult when combined with one another. In an attempt to overcome this problem, all tests were simplified somewhat before inclusion as a component of a competing task. This was usually achieved by retaining at least 70% of the items in the single test and by selecting easier items for the remaining items needed to complete the longer competing task. The solution still enables direct comparisons to be made should the need become evident. This can be done by ignoring the five additional items selected for the competing tasks.

Unless otherwise stated, the following conditions apply to these competing tasks:

a. In the auditory/auditory combinations one voice is male and the other female.
b. All tests consisted of 20 items.
c. All tests were presented by computer.

1. Number Series/Letter Reordering (NS/LR). Both tasks presented auditorily with the Number Series going to the left ear and Letter Reordering to the right. Example:

NS: "2 4 6 8 10 12" (Ans = '14')
LR: "R S T T S R" (Ans = '321')

2. Number Series/Letter Reordering. Number Series presented visually, Letter Reordering presented auditorily.
3. Spelling/Tonal Counting (SP/TC). Both tasks presented auditorily with Spelling going to the right ear and Tonal Counting to the left. The Tonal Counting task consisted mostly of sequences of five or six tones, although some sequences contained seven tones. The words to be spelled always matched the length of the tonal task.
4. Spelling/Tonal Counting. Spelling presented visually, Tonal Counting presented auditorily.
5. Similarities/Scrambled Words (SM/SW). Both tasks presented auditorily with Similarities to the left ear and Scrambled Words to the right. Example:

SM: "HIGH LOFTY STILL FLYING" (Ans = '12')
SW: "H T A E" (Ans = 'HATE')

6. Similarities/Scrambled Words. Similarities presented visually, Scrambled Words auditorily.
7. Hidden Figures/Sets (HF/STA). Hidden Figures presented visually, Sets presented auditorily. The Hidden Figures display was timed to appear for the exact duration of the auditory task.
8. Tonal Memory/Sets (TM/STA). Both tasks presented auditorily with Tonal Memory to the left ear and Sets to the right. The Tonal Memory task was limited to three notes per sequence in order to align it with the Sets task.
9. Tonal Memory/Sets. Sets presented visually, Tonal Memory presented auditorily.
10. Hidden Figures/Hidden Words (HF/HW). Hidden Figures presented visually, Hidden Words presented auditorily. Once again, the display for the Hidden Figures task was timed to coincide with the auditory task.
11. Chord Decomposition/Hidden Words (CD/HW). Both tasks presented auditorily with Chord Decomposition to the left ear and Hidden Words to the right. A strict matching of the elements comprising the items was not possible, but the individual items began and ended simultaneously.
12. Chord Decomposition/Hidden Words. Hidden Words presented visually, Chord Decomposition presented auditorily.
13. Tonal Memory/Hidden Figures (TM/HF). Hidden Figures presented visually, Tonal Memory presented auditorily. The items were aligned so that they began and ended simultaneously.
14. Tonal Memory/Chord Decomposition (TM/CD). Both tasks presented auditorily with Tonal Memory to the left ear and Chord Decomposition to the right.
Table 1. Tests used to form various combinations

Broad factor      Gf        Gc        Gv        Ga
Gf (A/A)        NSA/LR    TC/SPA              STA/TM
   (A/V)        NSV/LR    TC/SPV    STA/HF    STV/TM
Gc (A/A)                  SMA/SW              HWA/CD
   (A/V)                  SMV/SW    HWA/HF    HWV/CD
Gv (A/V)                                      HF/TM
Ga (A/A)                                      TM/CD
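The counts claimed for this design (14 tasks, nine factorial combinations, five factor pairs appearing in both within- and across-modality form) can be cross-checked with a short script. The dictionary encoding and the factor assignments below are our own reading of Table 1 and the marker descriptions above, not the authors' notation.

```python
from collections import defaultdict

# Hypothetical encoding of Table 1: each competing task mapped to the broad
# factors of its two components and its modality combination
# (within-modality "A/A" or across-modality "A/V").
TASKS = {
    "NSA/LR": ("Gf", "Gf", "A/A"), "NSV/LR": ("Gf", "Gf", "A/V"),
    "TC/SPA": ("Gf", "Gc", "A/A"), "TC/SPV": ("Gf", "Gc", "A/V"),
    "SMA/SW": ("Gc", "Gc", "A/A"), "SMV/SW": ("Gc", "Gc", "A/V"),
    "STA/HF": ("Gf", "Gv", "A/V"),
    "STA/TM": ("Gf", "Ga", "A/A"), "STV/TM": ("Gf", "Ga", "A/V"),
    "HWA/HF": ("Gc", "Gv", "A/V"),
    "HWA/CD": ("Gc", "Ga", "A/A"), "HWV/CD": ("Gc", "Ga", "A/V"),
    "HF/TM":  ("Gv", "Ga", "A/V"),
    "TM/CD":  ("Ga", "Ga", "A/A"),
}

# 14 tasks covering nine factorial combinations
combos = {(f1, f2) for f1, f2, _ in TASKS.values()}
print(len(TASKS), len(combos))  # 14 9

# Five factor pairs appear in both within- and across-modality form
modes = defaultdict(set)
for f1, f2, mod in TASKS.values():
    modes[(f1, f2)].add(mod)
print(sum(1 for m in modes.values() if len(m) == 2))  # 5
```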
Comment on design

One of the major purposes of the present study was to collect performance data on a wide range of competing tasks. The battery described above includes 14 competing tasks which cover a total of nine factorial combinations. This provides adequate opportunity for studying the resulting factor structure as well as allowing for some assessment of possible moderating effects due to modality of presentation. In all, there are five instances where a competing task is presented in both within-modality and across-modality forms. Table 1 shows how each of the 14 competing tasks fits into the overall design of the study. It can be seen that the design illustrated in Table 1 is incomplete: there are no visual/visual splits. There are two reasons for this: firstly, it is technically very difficult to present two visual tasks simultaneously. Secondly, to complete the design set out in Table 1 would have meant the expansion of a test battery that was already very large. Time considerations rendered this impossible. Since the total battery had to be pruned, it was convenient to omit the problematic visual/visual combinations.

Order of presentation of tests

The order of presentation was systematically changed during the study. Five different orders were used, with the positions of the tests shuffled after every 28 subjects. This ensured that performance on any given test was not unduly influenced by its position in the battery. Competing tasks were intermingled with the single tests. Due to the complex and novel nature of these tasks, however, care was taken to ensure that subjects had always completed the two relevant single tests before undertaking each competing task. This feature leaves wide open the possibility of practice effects, but this drawback must be set against the obvious necessity for having subjects understand the nature of the two tests that they are trying to combine.
Data collected

For each computer-administered test, all items were scored as '1' if correct, and '0' otherwise. Total scores were kept for the paper-and-pencil tests. Details of scoring for the competing tasks must be prefaced by some remarks about their administration. In the case of competing tasks, subjects were instructed to attend to both of the tasks. The computer requested answers for each task separately; that is, subjects were asked to answer one task before answering the other. Inappropriate responses were usually detected and rejected. Subjects never knew which task they were going to be asked to answer first. The ordering of responses was randomised with the proviso that the two components each received an equal number of first-prompts over the whole test. Thus, in the Letter Reordering and Number Series combination, which consisted of 20 items, there was a total of 10 occasions on which the subject was required to respond to Letter Reordering before responding to the Number Series item. These responses are henceforth to be known as 'Primary'. On the other 10 occasions, the answer to the Number Series task was requested first; thus, there is a set of Primary scores for it as well. Other responses are to be known as 'Secondary'. It can be seen, therefore, that for each competing task there were four sets of scores: two Primary and two Secondary. Information relating to age, sex, years of musical training, and educational level was also collected.

Equipment used

All tests, with the exception of tests 7, 15, 16 and 18, were individually administered and were presented by an Apple II Europlus microcomputer. A Nakamichi LX-3 cassette tape recorder was
connected to the Apple via an interface built by Departmental technical staff. This interface enabled the Apple to control the presentation of auditory tests. It also made possible the simultaneous presentation of auditory and visual material. The auditory stimuli were delivered through Sony stereo headsets.
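The randomised first-prompt scheme described above under Data collected (each component of a 20-item competing task answered first on exactly 10 items, in an unpredictable order) can be sketched as follows; the function name and seed are illustrative assumptions, not taken from the study.

```python
import random

def first_prompt_order(n_items=20, seed=7):
    """Randomise which component of a competing task is answered first,
    giving each component an equal number of first-prompts (10 of 20).
    A hypothetical sketch of the scheme described in the text."""
    order = ["component_1"] * (n_items // 2) + ["component_2"] * (n_items // 2)
    random.Random(seed).shuffle(order)  # shuffle so subjects cannot predict
    return order

order = first_prompt_order()
print(order.count("component_1"), order.count("component_2"))  # 10 10
```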
Procedure
Total testing time for the whole battery was about 5 h although, since the computer-administered tests were all self-paced, the time taken varied from subject to subject. Testing was broken up into two sessions with approximately the same number of tests in each session. A familiarisation program introduced the subjects to the main features of the keyboard as well as giving them the opportunity of practising typing, making corrections, inputting responses, and so on. All correct responses were signalled by a quiet but cheerful combination of two tones.
RESULTS

Preliminary analyses and descriptive statistics for all variables
There are two broad groupings of data: single test scores and competing task scores. For the single tests, total scores were used as the measure of performance. In the case of the competing tasks, primary and secondary scores were combined for the purpose of the present analyses. This means that each competing task yields two scores, one for each of the components. The initial stage of data analysis involved the calculation of reliability estimates for all new scales used in this study. This section of the analysis provided useful information about the extent of measurement error in the data. The basic aim was to ensure that all subsequent analyses were conducted on scales which have satisfactory reliability estimates. As a result of these reliability analyses, some items were deleted and the reliability of the scale was then reassessed. It was hoped that the tests constructed for this study would allow for the expression of individual differences. An inspection of Table 2 indicates that even though means and standard deviations vary considerably from test to test, most tests appear neither too easy nor too difficult for the sample used. This Table also displays the number of items in each test and reliability estimates (Cronbach's coefficient alpha). It can be seen that the preliminary analyses have been successful in ensuring that reliabilities of single and competing tasks are of about the same order of magnitude. Average (root-mean-square) reliabilities for single and competing tests are 0.695 and 0.715 respectively.

Factorial structure among single and competing tasks
The next stage of analysis involved an investigation of the structure underlying the matrix of intercorrelations obtained with the present battery of tests. This was the most crucial stage in the analysis since the purpose of the study was to explore the consequences of combining tests from various levels of the Gf/Gc theory and to establish if single and competing tasks define the same psychometric factors. The tests for the present battery were selected to measure fluid intelligence (Gf), crystallized intelligence (Gc), broad visualization (Gv), and broad auditory function (Ga). In order to establish whether the hypothesized factorial structure was obtained, the correlation matrix between all 48 variables was subjected to a series of exploratory and confirmatory factor analyses. The confirmatory approach is particularly appropriate in the present study since each variable is intended to fill a well-defined cell in a matrix representing a variety of within and across factor combinations. Deviations from the expected factorial pattern could indicate that the expected design has not been achieved. McDonald's (1980) COSAN program was used for this purpose. The proposed structure assumes that all tests are factorially simple and that components of competing tasks measure the same basic dimensions as their corresponding single tests. Accordingly, variables 1-16 have projected loadings on Gf, variables 17-34 on Gc, variables 35-39 on Gv, and variables 40-48 on Ga.
Table 2. Means, standard deviations and reliabilities

Test                                   Mean     SD   Items  Reliability
1. Number Series (NSA)                 8.59   2.54     13      0.68
2. Number Series (NSV)                 7.46   3.01     13      0.74
3. Number Series (NSA/LR)              8.37   3.42     19      0.68
4. Number Series (NSV/LR)              9.02   3.76     18      0.77
5. Letter Reordering (LR)             11.53   2.83     15      0.73
6. Letter Reordering (LR/NSA)          7.16   3.19     20      0.60
7. Letter Reordering (LR/NSV)         10.71   4.33     19      0.82
8. Tonal Counting (TC)                 5.21   3.68     15      0.82
9. Tonal Counting (TC/SPA)             3.19   3.13     18      0.80
10. Tonal Counting (TC/SPV)            2.99   2.89     18      0.77
11. Sets (STA)                        11.18   2.28     14      0.65
12. Sets (STV)                        11.18   2.59     14      0.74
13. Sets (STA/HF)                     12.68   3.67     19      0.78
14. Sets (STA/TM)                     10.71   3.15     18      0.70
15. Sets (STV/TM)                     11.34   3.68     19      0.75
16. Matrices                           5.57   1.76     13      0.51
17. Spelling (SPA)                     5.33   1.68      8      0.57
18. Spelling (SPV)                     7.85   2.37     13      0.51
19. Spelling (SPA/TC)                  9.32   2.69     17      0.63
20. Spelling (SPV/TC)                  1.09   3.09     18      0.65
21. Similarities (SMA)                 9.65   2.02     17      0.73
22. Similarities (SMV)                10.75   2.96     15      0.58
23. Similarities (SMA/SW)              7.49   3.07     19      0.67
24. Similarities (SMV/SW)              8.25   3.36     19      0.74
25. Scrambled Words (SW)               9.19   2.94     15      0.72
26. Scrambled Words (SW/SMA)           5.24   3.25     18      0.81
27. Scrambled Words (SW/SMV)          10.22   4.18     19      0.74
28. Hidden Words (HWA)                 6.38   2.24     15      0.71
29. Hidden Words (HWV)                 5.47   2.05     14      0.80
30. Hidden Words (HWA/HF)             10.07   4.07     19      0.78
31. Hidden Words (HWA/CD)             10.51   4.08     19      0.70
32. Hidden Words (HWV/CD)             13.09   3.11     19
33. Verbal Analogies                  15.74   4.68     30
34. Memory Span                       13.09   1.99     18
35. Hidden Figures (HF)                6.63   2.89     13
36. Hidden Figures (HF/STA)           11.37   3.68     19      0.68
37. Hidden Figures (HF/HW)            12.12   3.92     18      0.76
38. Hidden Figures (HF/TM)            14.19   3.75     18      0.81
39. Card Rotations                    52.11  14.36     80      0.84
40. Tonal Memory (TM)                 11.92   2.37     15      0.74
41. Tonal Memory (TM/STA)              9.64   3.27     17      0.70
42. Tonal Memory (TM/STV)             10.28   3.62     19      0.69
43. Tonal Memory (TM/HF)              12.64   3.55     17      0.80
44. Tonal Memory (TM/CD)              10.71   3.8      19      0.77
45. Chord Decomposition (CD)           8.02   2.85     15      0.63
46. Chord Decomposition (CD/HWA)       7.29   2.84     17      0.55
47. Chord Decomposition (CD/HWV)       8.24   2.85     18      0.60
48. Chord Decomposition (CD/TM)        3.63   1.88     15      0.43
and variables 40-48 on Ga’. Notice that this factor structure represents an independent clusters solution, i.e. a version of simple structure that does not allow for overlap among the factors. Because of that, it can be expected that the fit of the model will not equal that achieved by a less restrictive overlapping pattern. The obtained maximum likelihood solution is shown in Table 3. This solution generated a Chi-square goodness-of-fit test of 1869.15. With df = 1074, this Chi-square is significant at the 0.05 level and, strictly speaking, the solution is not acceptable. To obtain an acceptable solution with the present data, it would be necessary to extract some eleven factors as indicated by the exploratory maximum likelihood analysis. However, although the four-factors exploratory maximum likelihood solution (COFA) produces a too high goodness-of-fit test (Chi-square = 1302.00, with df = 942) the distribution of residuals was satisfactory (the mean of the off-diagonal residuals was 0.05) and, most important, the pattern of loadings from the PROMAX solution was exceedingly similar to the confirmatory factor matrix of Table 3. The exploratory factor analytic procedures embodied in the Little Jiffy, Mark IV (LJIV) package of ‘The only departure from the original plan of the study concerns the switch of Memory Span from Gf to Gc. This change was made on the basis of exploratory analyses (which are not reported in this paper) and does not affect the overall design.
Table 3. Confirmatory solution with four factors

Test                                     Gf     Gc     Gv     Ga
1. Number Series (NSA)                  0.72
2. Number Series (NSV)                  0.78
3. Number Series (NSA with LR)          0.65
4. Number Series (NSV with LR)          0.75
5. Letter Reordering (LR)               0.74
6. Letter Reordering (LR with NSA)      0.40
7. Letter Reordering (LR with NSV)      0.57
8. Tonal Counting (TC)                  0.68
9. Tonal Counting (TC with SPA)         0.54
10. Tonal Counting (TC with SPV)        0.55
11. Sets (STA)                          0.66
12. Sets (STV)                          0.56
13. Sets (STA with HF)                  0.67
14. Sets (STA with TM)                  0.68
15. Sets (STV with TM)                  0.56
16. Matrices                            0.58
17. Spelling (SPA)                             0.65
18. Spelling (SPV)                             0.46
19. Spelling (SPA with TC)                     0.39
20. Spelling (SPV with TC)                     0.45
21. Similarities (SMA)                         0.63
22. Similarities (SMV)                         0.66
23. Similarities (SMA with SW)                 0.50
24. Similarities (SMV with SW)                 0.65
25. Scrambled Words (SW)                       0.74
26. Scrambled Words (SW with SMA)              0.60
27. Scrambled Words (SW with SMV)              0.78
28. Hidden Words (HWA)                         0.70
29. Hidden Words (HWV)                         0.64
30. Hidden Words (HWA with HF)                 0.73
31. Hidden Words (HWA with CD)                 0.66
32. Hidden Words (HWV with CD)                 0.68
33. Verbal Analogies                           0.67
34. Memory Span                                0.55
35. Hidden Figures (HF)                               0.73
36. Hidden Figures (HF with STA)                      0.58
37. Hidden Figures (HF with HWA)                      0.52
38. Hidden Figures (HF with TM)                       0.59
39. Card Rotations                                    0.58
40. Tonal Memory (TM)                                        0.60
41. Tonal Memory (TM with STA)                               0.72
42. Tonal Memory (TM with STV)                               0.74
43. Tonal Memory (TM with HF)                                0.82
44. Tonal Memory (TM with CD)                                0.35
45. Chord Decomposition (CD)                                 0.60
46. Chord Decomposition (CD with HWA)                        0.70
47. Chord Decomposition (CD with HWV)                        0.82
48. Chord Decomposition (CD with TM)                         0.37

Factor intercorrelations:
        Gf     Gc     Gv     Ga
Gf     1.00
Gc     0.56   1.00
Gv     0.65   0.57   1.00
Ga     0.67   0.49   0.50   1.00
Kaiser and Rice (1974) also generated a solution akin to that of Table 3. For these reasons we do not present the exploratory factor analytic solutions here. There can be little doubt that the four factors in Table 3 represent broad fluid intelligence (Gf), broad crystallized intelligence (Gc), broad visualization (Gv), and broad auditory function (Ga) respectively. Tests which were selected as representative of these broad functions appear to have behaved in a typical way in the present battery. There are no exceptions. We can therefore conclude that single and competing tests measure the same factors².
²In these analyses primary and secondary scores were combined to yield a single measure of competing task performance for each variable. It is possible, of course, to separate these measures and, by so doing, extend the number of variables in the present battery to 76. Although the details are not reported, all the indications are that primary and secondary scores in the present study are measuring the same thing. These scores were highly correlated and produced identical patterns of factor loadings in all solutions obtained. In the absence of any strong grounds for separating the measures, it was clearly desirable to continue the analyses with the combined scores only.
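As a check on the fit statistics quoted above, the degrees of freedom of both solutions can be recovered from the model specification. The following sketch is our own reconstruction (the parameter counting assumes standard maximum likelihood factor analysis and is not taken from the paper):

```python
import numpy as np

# Hypothesized independent-clusters pattern: each of the 48 variables
# loads on exactly one factor (variable ranges as given in the text,
# with Memory Span already moved to Gc).
ranges = {"Gf": range(1, 17), "Gc": range(17, 35),
          "Gv": range(35, 40), "Ga": range(40, 49)}
pattern = np.zeros((48, 4), dtype=int)
for col, rng in enumerate(ranges.values()):
    pattern[[v - 1 for v in rng], col] = 1  # 1 marks a free loading
assert (pattern.sum(axis=1) == 1).all()     # independent clusters

p, m = 48, 4
moments = p * (p + 1) // 2                  # 1176 distinct (co)variances
# Confirmatory model: 48 loadings + 48 uniquenesses + 6 factor correlations
df_confirmatory = moments - (48 + 48 + 6)
# Exploratory 4-factor ML model: p*m loadings + p uniquenesses,
# less m(m-1)/2 rotational constraints
df_exploratory = moments - (p * m + p - m * (m - 1) // 2)
print(df_confirmatory, df_exploratory)      # 1074 942
```

Both values agree with the reported Chi-square tests (df = 1074 and df = 942).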
Changes in loadings on the general factor of present battery
The next stage of data analysis involved an examination of the actual changes in the second-order loadings (or in the loadings on the general factor of this battery) as one moves from single to competing task performance. For this purpose we calculated the loadings on the second-order factor derived from the Schmid-Leiman (1957) transformation of the previously mentioned exploratory four-factor maximum likelihood solution (COFA). Procedures described by Stankov (1979) were used to obtain second-order loadings from the Little Jiffy solution (LJIV). The main point of interest here was to see whether there was overall support for the suggestion that measures taken under competing conditions have higher loadings on the general factor of this battery than the corresponding single test measures. On the basis of our previous results, this was not expected to happen for those pairs of tasks which had high correlations under the single presentation condition. The first two columns of Table 4 list the loadings of all variables on the second-order factor. Considerable similarities between the two sets of factor loadings indicate that factor analytic procedures based on quite different rationales lead to essentially the same overall conclusions with the present data. The third column of Table 4 represents Little Jiffy's loadings corrected for attenuation. This correction was achieved by dividing each test's factor loading by the square root of the product of the test's reliability and the index of reliability from Little Jiffy's output. Although the correction for attenuation increases the overall level of the factor loadings, the overall pattern remains, as expected, similar to that of the first two columns. An examination of Table 4 reveals some interesting patterns underlying the changes in the general factor loadings of this battery.
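The correction for attenuation just described amounts to a one-line formula. A minimal sketch, with illustrative numbers only (not values from Table 4):

```python
from math import sqrt

def correct_for_attenuation(loading, test_reliability, factor_reliability_index):
    """Disattenuate a factor loading: divide by the square root of the
    product of the test's reliability and the factor's index of
    reliability, as described in the text."""
    return loading / sqrt(test_reliability * factor_reliability_index)

# Illustrative values only:
print(round(correct_for_attenuation(0.55, 0.72, 0.90), 2))  # 0.68
```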
For example, we may observe that although average (root-mean-square) loadings for single and competing tasks differ little, interesting trends emerge when we compare loadings for individual tests. The first point to note is that these loadings can decrease as well as increase in the competing task situation. The finding can be summed up, in general terms, as follows: general factor loadings tend to increase when single test loadings are low and, conversely, they decrease when the single test loadings are high. In other words, the magnitude and direction of changes are affected by the general factor saturations of the single tests involved. Increases in general factor loadings are likely to occur in combinations involving tests that define
Table 4. Loadings on second-order factor*

Single tests                 COFA   LJIV   Corrected
Number Series (NSA)          0.63   0.58     0.74
Number Series (NSV)          0.73   0.63     0.76
Letter Reordering (LRA)      0.64   0.60     0.73
Spelling (SPA)               0.39   0.42     0.61
Spelling (SPV)               0.26   0.27     0.38
Tonal Counting (TC)          0.62   0.57     0.65
Similarities (SMA)           0.48   0.51     0.66
Similarities (SMV)           0.53   0.53     0.65
Scrambled Words (SWA)        0.61   0.60     0.73
Sets (STA)                   0.58   0.55     0.71
Sets (STV)                   0.52   0.46     0.56
Hidden Figures (HF)          0.40   0.34     0.43
Tonal Memory (TM)            0.43   0.38     0.46
Hidden Words (HWA)           0.52   0.51     0.62
Hidden Words (HWV)           0.60   0.54     0.67
Chord Decomposition (CD)     0.22   0.20     0.26
Average (root-mean-square)   0.52   0.50     0.61

Competing tasks              COFA   LJIV   Corrected
NSA with LRA                 0.56   0.54     0.68
NSV with LRA                 0.70   0.61     0.72
LRA with NSA                 0.36   0.34     0.46
LRA with NSV                 0.47   0.47     0.54
SPA with TC                  0.19   0.25     0.36
SPV with TC                  0.25   0.28     0.37
TC with SPA                  0.55   0.47     0.55
TC with SPV                  0.53   0.48     0.58
SMA with SWA                 0.44   0.44     0.60
SMV with SWA                 0.56   0.55     0.70
SWA with SMA                 0.43   0.43     0.52
SWA with SMV                 0.55   0.56     0.65
STA with HF                  0.61   0.58     0.68
STA with TM                  0.58   0.57     0.71
STV with TM                  0.50   0.45     0.54
HF with STA                  0.51   0.46     0.55
HF with HWA                  0.52   0.47     0.54
HF with TM                   0.57   0.51     0.58
TM with STA                  0.48   0.43     0.54
TM with STV                  0.52   0.48     0.60
TM with HF                   0.60   0.53     0.62
TM with CD                   0.48   0.41     0.49
HWA with CD                  0.41   0.54     0.63
HWV with CD                  0.60   0.43     0.50
HWA with HF                  0.53   0.54     0.68
CD with HWA                  0.37   0.37     0.52
CD with HWV                  0.30   0.27     0.37
CD with TM                   0.25   0.26     0.41
Average (root-mean-square)   0.49   0.47     0.57

*COFA = maximum likelihood analysis followed by VARIMAX and PROMAX (package developed by Roderick McDonald). LJIV = Little Jiffy, Mark IV of Kaiser and Rice (1974). Corrected = loadings of the LJIV solution corrected for attenuation.
Table 5. Changes in general factor loadings across factorial combinations

                                     Other factor in combination
                      Gf                  Gc                  Gv                 Ga
Factor           LJIV  Corrected     LJIV  Corrected     LJIV  Corrected    LJIV  Corrected
1. Gf (A/A)     -0.15  -0.16 (av)   -0.10  -0.10                            0.02   0.00
2. Gf (A/V)     -0.07  -0.12 (av)   -0.09  -0.07         0.03  -0.03       -0.01  -0.02
3. Gc (A/A)     -0.17  -0.25        -0.12  -0.14 (av)                       0.03   0.01
4. Gc (A/V)      0.01  -0.01        -0.01  -0.07 (av)    0.03   0.06       -0.11  -0.17
5. Gv            0.12   0.12         0.13   0.11                            0.17   0.15
6. Ga (A/A)      0.05   0.08         0.17   0.26         0.15   0.16        0.05   0.09 (av)
7. Ga (A/V)      0.10   0.14         0.07   0.11
perceptual factors, such as Gv and Ga, and decreases are likely with measures of the factors that involve higher mental processes, such as Gf and Gc. Another interesting feature of the present data is that they indicate the necessity to consider the combined demands of the tasks. A Gf marker such as the Sets test, which has a high general factor loading as a single test (0.58 in the COFA solution), can show an increase under competing task conditions provided that it is paired with a marker for a perceptual factor. Table 5 shows how the general factor loadings of markers for various factors change according to the nature of the other variable in the combination. To simplify the presentation we consider changes involving the "LJIV" and "Corrected" columns of Table 4 only. To assist in the interpretation of these figures, consider the entries in the first row and the LJIV columns. There were two Gf markers used to create the Gf/Gf pairings: the Number Series (NS) and Letter Reordering (LR) tests. In the within-modality presentation the Number Series test showed a change of -0.04 and the Letter Reordering test a change of -0.26. These two figures were averaged to yield an overall change of -0.15 for this cell. In the cross-modality presentation, Number Series showed a change of -0.02 and Letter Reordering a change of -0.13. The average change is -0.07. All diagonal entries represent averages obtained in this fashion. Interpretation of the off-diagonal entries is more straightforward. In the first row and second LJIV column of Table 5, the -0.10 refers to the change observed in Tonal Counting (a Gf marker test) when paired with Spelling (a Gc marker test). Immediately below is the corresponding figure from the cross-modality presentation. The remainder of the Table can be interpreted in similar fashion.
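The averaging used for the diagonal cells of Table 5 can be sketched as follows, using the Gf/Gf changes quoted in the text:

```python
def cell_change(changes):
    """Average the loading changes of the marker tests that contribute
    to one diagonal cell of Table 5, rounded to two decimals."""
    return round(sum(changes) / len(changes), 2)

# Within-modality Gf/Gf pairing: Number Series -0.04, Letter Reordering -0.26
print(cell_change([-0.04, -0.26]))  # -0.15
# Cross-modality Gf/Gf pairing: Number Series -0.02, Letter Reordering -0.13
print(cell_change([-0.02, -0.13]))  # -0.07
```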
Although it could be argued that the changes indicated in Table 5 for LJIV are not particularly pronounced, they are reinforced by the changes in loadings obtained through the application of the correction for unreliability. These changes seem to highlight the importance of competition for central processes as a major determinant of the final general factor loadings. Almost without exception, instances of decreased general factor loadings have occurred with combinations of factors involving higher mental processes. There is only one instance of a substantial decrease (-0.11 or -0.17) which does not involve a Gf/Gf, Gf/Gc, or Gc/Gc combination. This particular instance involved the Hidden Words (Gc marker) test when paired with the Chord Decomposition (Ga marker) test in a between-modality form of presentation. The pattern of changes associated with the Gv and Ga markers is noticeably different. In most cases these markers, and also the variables with which they are paired, showed an increase in loadings on the general factor of this battery.
Possible role of modality of presentation
The decidedly lopsided appearance of Table 5 makes it look as though the general factor saturations of the single tests are the only factor of importance when considering potential changes in the size of factor loadings under competing task conditions. On closer inspection, however, it can be seen that modality effects also help to determine the final general factor loadings. This is most obvious in the Gf/Gf, Gc/Gc, and Gf/Gc combinations, where the decrease in general factor loadings is less in the cross-modality presentation. The present result should be noted and its nature should be further explored in future work.
SUMMARY AND CONCLUSIONS
The main outcomes of the present study point to the fact that competing presentations of well-established psychometric tests of cognitive abilities measure the same broad clusters of abilities as single tests. Thus, the use of competing tasks does not imply that an unknown ability or an unknown cluster of abilities is being tapped. Competing tasks can be employed in the same way as single tests. One of the aims of this study has been to assess the potential of competing tasks as measures of general intelligence. In a previous report (Fogarty and Stankov, 1982) it was argued that components of competing tasks, on the average, have higher loadings on the general factor of a given battery of tests than single tests. In the present study, single and competing tasks have about the same average loading on the general factor of this test battery. As a result of the present study, it cannot be argued that measures taken under competing task conditions will always show higher general factor loadings than single test measures. Instead, we now have at least two qualifying conditions for such an expectation. Firstly, we now know that if the combination involves two tests which typically have high general factor loadings, then competing task loadings on the general factor may decrease. On the other hand, if neither of the single tests has a high loading on the general factor then either (or both) components may show an increase under competing conditions. This is an extension of our previous claim that if single test correlations were high, it may be difficult to increase correlations under competing conditions. Secondly, the present data indicate that although the complexity of the component tasks, defined in terms of general factor loadings, appears to play the major role in determining the competing task loadings, there is some evidence that task similarity can also play a part.
Where there is a high degree of similarity between tasks, as in the case of within-modality presentation, decreases in general factor loadings are more pronounced. A drawback of this version of the competing task hypothesis is that its predictive value is somewhat limited. A difficulty centers around the inability of the hypothesis to offer specific guidelines as to what might be regarded as 'high' and 'low' combined general factor loadings. If one were to take two known measures of general intelligence and combine them in the manner used in the present study, then one could predict a decrease in the general factor loadings. At the other end of the scale it is very likely that in a combination involving tests with low general factor loadings an increase will be observed. To this extent, the revised hypothesis is capable of yielding concrete predictions and is therefore testable. Nevertheless, there will be many task pairings which fall somewhere between these extremes and it is here that the deficiencies become obvious.

Competing tasks in recent studies of cognitive abilities
In order to evaluate the usefulness of competing tasks in the study of human cognitive abilities it is necessary to place the present study in its proper context. Our recommendation to study and use these tasks in future work on psychometric abilities derives from research preceding the present study (Fogarty and Stankov, 1982; Stankov, 1983a), from the possibility of the existence of a time-sharing factor, from a plausible account of the decrease in general factor loadings in the present study and, most importantly, from a series of recently completed studies of our own and of others. One of our recently completed studies (Experiment 1 in Stankov and Fogarty, 1987) used the competing task (Number Series and Letter Reordering) which was also employed in the present study. An interesting feature of that work was that it involved extended practice. It was possible to observe that although there was a sharp initial decrease in the correlation between the components, similar to what happened in the present study, the correlation increased with each practice session. By the last practice session, competing tasks had the same correlation as the single tests. Table 6 displays the trend.

Table 6. Correlational changes across practice sessions (from Stankov and Fogarty, 1987)

                        Sessions
Condition       1      2      3      4      5
Single        0.54   0.52   0.57   0.45   0.53
Competing     0.22   0.29   0.24   0.45   0.51
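The narrowing gap between the two conditions in Table 6 can be made explicit with a short sketch (values transcribed from the Table):

```python
sessions = [1, 2, 3, 4, 5]
single = [0.54, 0.52, 0.57, 0.45, 0.53]
competing = [0.22, 0.29, 0.24, 0.45, 0.51]

# Difference between single- and competing-condition correlations,
# session by session: the gap closes with practice.
gaps = [round(s - c, 2) for s, c in zip(single, competing)]
print(gaps)  # [0.32, 0.23, 0.33, 0.0, 0.02]
```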
In another set of three experiments we used the same competing tasks as those reported in the present paper and we also gave the WAIS-R to all our subjects (see Stankov, 1988). The tasks were: 1. Similarities and Scrambled Words; 2. Hidden Figures and Tonal Memory; and 3. the Counting test presented under single and competing conditions. Our interest in these three additional studies was to establish whether competing tasks have higher correlations with abilities tapped by the WAIS-R than single tests. This proved to be the case. Furthermore, when we formed composite scores from the WAIS-R corresponding to fluid intelligence (Gf), crystallized intelligence (Gc), and the short-term acquisition and retrieval function (SAR), some quite interesting correlations emerged. In all cases competing tasks had higher correlations with the three intelligence components than single tests, but particularly pronounced were the correlations with Gf and SAR. These two broad factors are presumed to be more affected by the competing task manipulation than Gc (Stankov, 1983b; Stankov, 1986; Stankov, Myors and Oliphant, 1987). This was the first time that we had used a well-known measure of intelligence as a criterion with which to correlate single and competing scores. The results clearly support the view that competing tasks can represent a better measure of the abilities tapped by intelligence tests than single tests. Finally, Vernon and Jensen (1984) reported that competing tasks involving a version of S. Sternberg's Memory Scan task (state whether a digit was present in the previously given set of 1, 2, 4, or 8 digits) and a Same-Different task (say whether two words are the same or different physically, e.g. DOG-DOG or DOG-LOG) have the highest loading on the first principal component of a battery of information processing tasks. Also, these competing versions show the highest correlation with the general factor of the Armed Services Vocational Aptitude Battery.
Overall, this additional evidence points to the importance of competing tasks in the study of individual differences in cognitive abilities. Provided one recognizes that there are situations wherein it is unrealistic to expect an increased general factor loading under competing conditions (some of these conditions have been identified in the present paper), we are prepared to encourage their use in any applied work related to job situations where decisions have to be made under time-pressured conditions. Selection of pilots, business executives, and students in schools for the intellectually gifted are some of the areas which can benefit from the use of these tasks.

REFERENCES

Cattell R. B. (1949) The Culture Free Ability Test. Institute of Personality and Ability Testing, Champaign, Ill.
Cattell R. B. (1971) Abilities: Their Structure, Growth, and Action. Houghton-Mifflin, Boston.
Crawford J. and Stankov L. (1983) Fluid and crystallized intelligence and primacy/recency components of short-term memory. Intelligence 7, 227-252.
Ekstrom R. B., French J. E., Harman H. H. and Dermen D. (1976) Manual for Kit of Factor-Referenced Cognitive Tests. Educational Testing Service, Princeton, N.J.
Eysenck H. J. (Ed.) (1982) A Model for Intelligence. Springer-Verlag, New York.
Fogarty G. J. (1984) Abilities involved in performance on competing tasks. Ph.D. thesis, The University of Sydney.
Fogarty G. and Stankov L. (1982) Competing tasks as an index of intelligence. Person. individ. Diff. 3, 407-422.
Horn J. L. (1967) Intelligence: Why it grows, why it declines. Trans-Action, 23-31.
Horn J. L. (1980) Concepts of intellect in relation to learning and adult development. Intelligence 4, 285-317.
Horn J. L. (1985) Remodeling old models of intelligence. In Handbook of Intelligence (Edited by Wolman), Wiley, New York.
Horn J. L. and Stankov L. (1982) Auditory and visual factors of intelligence. Intelligence 6, 165-185.
Humphries L. G. (1979) The concept of general intelligence. Intelligence 3, 105-120.
Hunt E. (1980) Intelligence as an information processing concept. Br. J. Psychol. 71, 449-474.
Hunt E. and Lansman M. (1982) Individual differences in attention. In Advances in the Psychology of Human Intelligence (Edited by Sternberg R. J.), Erlbaum Associates, Hillsdale, N.J.
Jensen A. R. (1982) Reaction time and psychometric g. In A Model for Intelligence (Edited by Eysenck H. J.), Springer-Verlag, New York.
Kaiser H. F. and Rice T. (1974) Little Jiffy, Mark IV. Educ. Psychol. Measurmt 34, 111-117.
Kahneman D. (1973) Attention and Effort. Prentice-Hall, Englewood Cliffs, N.J.
Kucera H. and Francis W. N. (1967) Computational Analysis of Present-Day American English. Brown University Press, Providence, R.I.
Lansman M., Poltrock S. and Hunt E. (1983) Individual differences in the ability to focus and divide attention. Intelligence 7, 299-312.
Norman D. A. (1976) Memory and Attention: An Introduction to Human Information Processing. John Wiley, New York.
McDonald R. P. (1980) A simple comprehensive model for the analysis of covariance structures: Some remarks on application. Br. J. Math. Statist. Psychol. 33, 161-183.
Monty R. A. (1973) Keeping track of sequential events: Implications for the design of displays. Ergonomics 4, 443-454.
Peterson R. L. (1969) Concurrent verbal activity. Psychol. Rev. 76, 376-386.
Salthouse T. A. (1982) Adult Cognition: An Experimental Psychology of Human Aging. Springer-Verlag, New York.
Schmid J. and Leiman J. M. (1957) The development of hierarchical factor solutions. Psychometrika 22, 53-61.
Seashore C. E., Lewis D. and Saetveit J. C. (1960) Manual of Instruction and Interpretation for the Seashore Measures of Musical Talent (2nd revision). The Psychological Corporation, New York.
Spearman C. (1927) The Abilities of Man. Macmillan, London.
Stankov L. (1979) Hierarchical factoring based on image analysis and orthoblique rotations. Multivar. Behav. Res. 14, 339-353.
Stankov L. (1983a) Attention and intelligence. J. Educ. Psychol. 75, 471-490.
Stankov L. (1983b) The role of competition in human abilities revealed through auditory tests. Multivar. Behav. Res. Monogr. No. 83-1, pp. 63 + VII.
Stankov L. (1986) Age-related changes in auditory abilities and in a competing task. Multivar. Behav. Res. 21, 65-76.
Stankov L. (1988) Single tests, competing tasks and their relationship to the broad factors of intelligence. Person. individ. Diff. 9, 25-33.
Stankov L. and Fogarty G. (1987) Practice and competing tasks. Unpublished manuscript.
Stankov L. and Horn J. L. (1980) Human abilities revealed through auditory tests. J. Educ. Psychol. 72, 21-44.
Stankov L., Myors B. and Oliphant G. (1987) Competing tasks, working memory and intelligence. Intelligence. Submitted for publication.
Sternberg R. J. (1981) Intelligence and nonentrenchment. J. Educ. Psychol. 73, 1-16.
Thomson G. A. (1948) The Factorial Analysis of Human Ability, 3rd edn. Houghton-Mifflin, Boston.
Thorndike E. L. (1932) Intelligence in Animals and Man. University of Chicago Press.
Thurstone L. L. (1938) Primary Mental Abilities. University of Chicago Press.
Thurstone L. L. (1944) A Factorial Study of Perception. University of Chicago Press.
Vernon P. A. and Jensen A. R. (1984) Individual and group differences in intelligence and speed of information processing. Person. individ. Diff. 5, 411-423.
Wing H. D. (1962) A Revision of the Wing Musical Aptitude Test. J. Res. Musical Educ. 10, 743-791.