
INTELLIGENCE 14-1, 43-59 (1990)

A Confirmatory Factor Analytic Study of Time-Sharing Performance and Cognitive Abilities

JEFFREY B. BROOKINGS
Wittenberg University

Previous failures to identify a general time-sharing (TS) ability from factor analyses of multiple task performance have been attributed in part to the uninformed use of exploratory procedures, which may obscure, rather than confirm, the presence of a TS factor. In the present study, 81 male subjects performed four information-processing tasks and six dual task combinations, and completed a battery of psychometric ability tests selected to identify three factors from the Cattell-Horn model (gf, gc, and gs). Confirmatory maximum likelihood factor analyses of the information-processing task variables were not supportive of a general TS factor. However, similar to Jennings and Chiles' (1977) results, the analyses did identify a TS factor specific to combinations that included a visual monitoring task. The TS factor was unrelated to the psychometric ability factors. Future investigations of TS performance should focus on the identification of process-specific TS abilities and possible relationships among them.

Investigations of individual differences in multiple task performance as reflective of a general time-sharing (TS) ability date back to the early part of this century and McQueen's (1917) study of the "power" to carry on two mental processes at the same time. Since then, there have been numerous attempts to confirm the existence of a TS ability because, as Wickens (1984a, pp. 309-310) noted: "The practical implications of research and theory on attention and time sharing are as numerous as the cases in which a human operator is called upon to perform two activities concurrently, and his or her limitations in doing so represent a bottleneck in performance."

The research reported in this paper was conducted while the author was a National Research Council Senior Research Associate at the Workload and Ergonomics Branch of the Harry G. Armstrong Aerospace Medical Research Laboratory, Wright-Patterson Air Force Base, Ohio. Additional support was provided by a contract through the Southeastern Center for Electrical Engineering Education (Contract No. F33615-85-D-0514). I thank Merry Roe for assisting with the data collection, and F. Thomas Eggemeier, William Perez, Gary B. Reid, Clark A. Shingledecker, Donald A. Topmiller, and Glenn F. Wilson for their interest and support. Finally, I am especially grateful to Phillip L. Ackerman and Christopher Hertzog for their very helpful comments on earlier versions of this article.

Correspondence and requests for reprints should be sent to Jeffrey B. Brookings, Psychology Department, Wittenberg University, P.O. Box 720, Springfield, OH 45501.


This implies that if individual differences in TS performance can be attributed in part to a general TS ability, then it should be possible to develop a measure of the ability for personnel selection and training (Lane, 1982). Methodological approaches to analysis of multiple task performance have included: a) prediction of total task performance from component task performance (Fleishman, 1965; Freedle, Zavala, & Fleishman, 1968); b) prediction of success in flight training from dual task performance (Damos, 1978; North & Gopher, 1976); c) multiple regression analyses of relationships among dual task combinations, controlling for single task performance (Forester, 1983); and d) correlational and factor analyses of relationships among single and dual task variables (Hawkins, Rodriguez, & Reicher, 1979; Jennings & Chiles, 1977; Sverko, 1977; Sverko, Jerneic, & Kulenovic, 1983; Wickens, Mountford, & Schreiner, 1981).

Factor Analytic Studies of TS Performance

Factor analysis is a particularly useful technique for determining the existence of a TS ability because the notion of a general ability implies individual differences in dual task performance--across a variety of dual task combinations--that transcend single task ability differences. In such situations, which involve analyses of relationships among multiple dependent variables, the partialling of single task scores from dual task scores is accomplished most efficiently via factor analysis (Ackerman & Wickens, 1982).

To date, the factor analytic evidence for a general TS ability is not overwhelming. However, Ackerman, Schneider, and Wickens (1982) conducted an extensive review of the literature and argued that the factor analytic studies suffered from conceptual, methodological, and statistical flaws sufficient to preclude a reasonable assessment of TS ability. Ackerman et al.'s (1982) criticisms of TS studies included some directed at basic methodological errors (e.g., inclusion of variables with zero reliabilities in the factor analyses). However, the most serious flaws (e.g., incorrect decisions about the number of factors to extract, inappropriate rotational strategies) were traced to the absence of conceptual models to guide task selection and provide expectations regarding the likely form of the factor solution.

To underscore their arguments, Ackerman et al. (1982) reanalyzed TS data collected earlier by Wickens et al. (1981), who had concluded--based on exploratory factor analyses--that their data were not supportive of the TS construct. Using statistical techniques which allowed them to place restrictions on the form of the rotated solution (i.e., orthogonal Procrustes rotation of the factors to a target matrix), Ackerman et al. (1982) found that the data were consistent with a general TS ability model in which each dual task composite loaded on a TS factor, in addition to the respective single task factors.

Ackerman et al. (1982) cautioned that their findings could not be interpreted as strong evidence for the existence of a TS ability because their analyses were of an ad hoc nature.


Also, the Procrustes procedure has been shown to be inherently biased in favor of the investigator's target solution (see Horn, 1967), and there are no statistical tests for evaluating the fit of the rotated factors to the target matrix. More important, however, Ackerman et al. (1982) demonstrated that uninformed use of exploratory procedures (e.g., simple structure rotations) may in fact hide a TS factor, if it exists, rather than identify it.

Confirmatory factor analysis (CFA) is a useful technique for determining the existence of a TS ability because in CFA the researcher is able to posit and evaluate a variety of complex factor models. In addition, several fit indices are provided by CFA programs, so that TS models can be evaluated relative to simpler models. To date, the only published study of TS ability using CFA is Fogarty's (1987) analysis of "competing tasks"; the rationale for task selection was the theory of fluid (gf) and crystallized (gc) intelligence (Horn & Cattell, 1966). Fogarty concluded that the evidence did not favor the inclusion of a TS ability in the gf/gc framework. However, Fogarty's subjects completed what amounted to a lengthy test battery without practice, and it has been argued (e.g., Ackerman et al., 1982; Damos, Bittner, Kennedy, & Harbeson, 1981) that for a TS factor to emerge, subjects must have extensive practice on the tasks and task combinations. Fogarty's study, therefore, does not provide conclusive evidence on the existence of a TS ability.

The Present Investigation

This study was designed to identify the latent dimensions underlying individual differences in performance of information-processing tasks and dual task combinations, with particular attention to the influence of a TS ability. CFA was used to assess the fit of a model including a TS factor, relative to a model which attributed the variance in dual task performance to differences in the respective single task abilities. The information-processing tasks were selected to provide a broad sampling of the attentional resource "pools" posited by Wickens' (1984b) multiple resource model of attention. Use of a variety of tasks was intended to assess the possibility of "process-specific" TS abilities; that is, TS abilities specific to particular tasks and task combinations.

To facilitate interpretation of the performance dimensions, a set of cognitive ability factors was derived from a battery of psychometric tests, and relationships between the two sets of factors were assessed. Ideally, the information-processing tasks and psychometric tasks would be selected using a common theoretical framework. Unfortunately, cross-matching of factors from the two domains has been attempted only recently (see Carroll, 1980), and the nature of the relationships is not clear. Therefore, psychometric tests were selected, based on the Cattell-Horn model (Horn & Cattell, 1966), to identify several broad ability factors, which were then related to the information-processing dimensions.


METHOD

Subjects
The subjects were 81 right-handed males, primarily college undergraduates, who were paid for their participation in the study. All subjects were native English speakers, had vision correctable to 20/20, and were between the ages of 18 and 51 (M = 24).

Apparatus
The information-processing tasks and dual task combinations were presented on Commodore 64 microprocessor systems with dual floppy disks and a color CRT monitor. Responses were recorded by four-key response pads and, for the tracking task, a rotary controller box.

Psychometric Ability Tests
In small group sessions, subjects completed a battery of paper-and-pencil measures including eight psychometric ability tests selected to identify three factors from the Cattell-Horn (Horn & Cattell, 1966) model: (1) Fluid intelligence (gf); (2) Crystallized intelligence (gc); and (3) General speediness (gs). Fluid intelligence is involved in a variety of novel, complex tasks, which require reasoning and the retention of stimuli in immediate awareness. Crystallized intelligence, on the other hand, is involved most extensively in verbal-conceptual tasks which draw upon previously acquired information and skills; general speediness is reflected in simple cognitive tasks emphasizing quickness of performance. The tests are listed below in the order in which they were given; the hypothesized "target" factor for each test is indicated in Table 1.

1. Finding A's Test (ETS Kit of Factor-Referenced Cognitive Tests; Ekstrom, French, Harman, & Dermen, 1976).
2. Culture Fair Intelligence Test, Scale 2 (Cattell & Cattell, 1973).
3. Aiming (Comprehensive Ability Battery; Hakstian & Cattell, 1976).
4. Map Planning (ETS Kit).
5. Vocabulary (ETS Kit).
6. Number Comparisons (ETS Kit).
7. Choosing a Path (ETS Kit).
8. Opposites (ETS Kit).

Information-Processing Tasks
Subjects performed four information-processing tasks and six dual task combinations formed by pairing each single task with the other three. The four tasks were selected from the Criterion Task Set (CTS; Shingledecker, 1984) performance assessment battery, which is primarily based upon Wickens' (1984b) multiple resource model of attention. Where necessary, the tasks were modified in accordance with the requirements of the experiment. All single and dual task trial blocks lasted 3 minutes. The four tasks were presented visually and subjects made manual responses. To stabilize performance levels quickly, and provide the reliable measures needed for the factor analyses, low-difficulty versions of the tasks were used.

TABLE 1
Descriptive Statistics for the Psychometric Tests

Test                    Factor      M       SD    Part 1/Part 2 Reliability (a)
1. Culture Fair          gf       117.9    17.0          --
2. Map Planning          gf        25.1     6.6          .84
3. Choosing a Path       gf        12.3     6.7          .84
4. Vocabulary            gc        20.2     5.5          .73
5. Opposites             gc        19.2     4.2          .48
6. Finding A's           gs        59.5    13.6          .86
7. Number Comparison     gs        48.0    10.0          .88
8. Aiming                gs        41.7    10.7          .88

Note. Scores for the Culture Fair Scale are normalized standard score IQs (M = 100, SD = 16), based on norms provided in the test manual (Cattell & Cattell, 1973). Scores for all other tests are raw scores.
(a) These values are corrected Part 1/Part 2 correlations, using the Spearman-Brown formula.

Memory Search (MS). For this task, based on S. Sternberg's (1969) paradigm, a set of four letters--the memory set--was presented to subjects to memorize. Next, a series of letters ("test items") was presented one at a time. Subjects pressed the "yes" key on the keypad if the test item was from the memory set and the "no" key if the item was not one of the memory set items. Subjects were instructed to respond as quickly and accurately as possible. The CTS version of the task is self-paced; as soon as the subject responded to a test item, it was erased and the next item presented. Several response measures were recorded (e.g., reaction time for correct responses, number of errors, number and percentage correct, etc.). In the CTS/multiple resource framework, MS is a central processing task which imposes demands primarily on resources related to short-term memory retrieval.

Grammatical Reasoning (GR). GR is a variant of Baddeley's (1968) sentence verification task. Subjects were presented with sentences of varying complexity; a pair of symbols (@, *) was presented simultaneously with each sentence. Subjects pressed the "yes" key on the keypad if the sentence described the order of the symbols displayed below it correctly, and the "no" key if the sentence did not describe the order of the symbols correctly. For this experiment, the sentences were sampled randomly from a population consisting of all possible combinations (16) of these four binary conditions: (1) active versus passive sentence wording; (2) keyword "follows" versus "precedes"; (3) order of the two symbols in the sentence; and (4) order of symbols in the symbol pair.


cedes"; (3) order of the two symbols in the sentence; and, (4) order of symbols in the symbol pair. Two examples are given below: 1. @ follows • @* 2. @ is preceded by • *@

(active; false) (passive; true)

To shorten the sentences and thereby reduce the visual angle of the task display, the version of the GR task used in this experiment contained no negatively worded sentences. Response measures included reaction time for correct responses, number of errors, number and percentage correct, and so forth. GR is a central processing task designed to impose demands on resources required for logical reasoning.
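To make the stimulus construction concrete, the sketch below enumerates the 16 sentence/display combinations implied by the four binary conditions. This is my own illustration, not the CTS implementation; in particular, the exact passive wordings ("is followed by"/"is preceded by") and the truth-evaluation rule are assumptions based on the two examples shown above.

```python
from itertools import product

SYMBOLS = ("@", "*")

def gr_items():
    """Enumerate the 16 grammatical reasoning items from the four binary conditions:
    (1) active/passive wording, (2) keyword follows/precedes,
    (3) symbol order in the sentence, (4) symbol order in the displayed pair."""
    items = []
    for passive, precedes, swap_sentence, swap_pair in product((False, True), repeat=4):
        a, b = SYMBOLS if not swap_sentence else SYMBOLS[::-1]
        if passive:
            verb = "is preceded by" if precedes else "is followed by"
        else:
            verb = "precedes" if precedes else "follows"
        pair = SYMBOLS if not swap_pair else SYMBOLS[::-1]
        a_shown_first = (pair[0] == a)
        # "a precedes b" and "a is followed by b" are true when a appears first;
        # "a follows b" and "a is preceded by b" are true when a appears second.
        if verb in ("precedes", "is followed by"):
            truth = a_shown_first
        else:
            truth = not a_shown_first
        items.append((f"{a} {verb} {b}", "".join(pair), truth))
    return items

# Check against the two items shown in the text:
# ("@ follows *", "@*") -> False (active; false); ("@ is preceded by *", "*@") -> True (passive; true)
```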

Probability Monitoring (PM). The PM task, adapted from a task developed by Chiles, Alluisi, and Adams (1968), required subjects to monitor a display which resembled an electro-mechanical dial. The dial consisted of six vertical hashmarks with the center of the dial marked by a seventh vertical hashmark offset above the others. Under "normal" conditions a pointer, located immediately below the hashmarks, moved randomly from one mark to another on the simulated dial. Pointer movement was updated at the rate of five moves per second. Periodically, the pointer movement became nonrandom, such that the pointer stayed on one half of the dial 95% of the time. Subjects pressed a key on their keypad as soon as they were confident that nonrandom pointer movement--called a "signal"--was occurring. A correct response returned the pointer to random movement. If undetected, a signal lasted 12 s. Ten signals occurred during a 3-min trial block, and were equally likely to appear on either half of the dial. For each block of trials, the number of correct responses, missed signals, and false responses (i.e., responses to nonexistent signals) were recorded, along with mean correct response time. This task is assumed to place demands on resources related to visual perceptual information processing.

Unstable Tracking (UT). In this task, which resembles Jex, McDonnell, and Phatak's (1966) critically unstable tracking task, a fixed target was centered on the screen. A cursor moved horizontally from the center of the screen; subjects were instructed to keep the cursor centered over the target by rotating a control knob. The system was unstable in that subject responses introduced error which was then magnified by the system. If the subject lost control and the cursor left the screen, it was reset to the center of the display and the subject continued tracking. Root mean square error and total number of control losses were recorded for each trial block. UT imposes demands upon resources related to execution of manual responses.
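As a rough illustration of the PM pointer statistics just described (the update rate, dial size, and 95% bias are taken from the text; treating each update as an independent draw over the dial is my simplifying assumption), a minimal sketch:

```python
import random

N_MARKS = 6        # six hashmarks; indices 0-2 form the left half, 3-5 the right half
UPDATE_HZ = 5      # the pointer position is updated five times per second

def next_position(signal_half=None):
    """One pointer update. Under normal conditions the pointer lands anywhere on the
    dial; during a signal it lands on the signalled half 95% of the time."""
    if signal_half is None:
        return random.randrange(N_MARKS)
    biased = random.random() < 0.95
    on_left = (signal_half == "left") == biased
    return random.randrange(0, 3) if on_left else random.randrange(3, 6)

# An undetected signal lasts 12 s, i.e., 12 * UPDATE_HZ = 60 biased updates.
```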


Procedure
In an initial 2-hr session (Session 1), subjects performed two 3-min blocks of practice trials on each of the four single tasks, followed by two blocks of trials on each of the six dual task combinations. On the following day, subjects first performed a 30-s block of warmup trials on each of the four single tasks. Next, they performed a single block of 3-min trials on each of the single and dual tasks (Session 2), followed by another block on each of the single and dual tasks (Session 3). Five-min rest breaks were given every 30 min. Subjects performed the tasks in the following order: MS, PM, UT, GR, MS-UT, PM-MS, GR-UT, PM-GR, GR-MS, and PM-UT. For the dual tasks, one task was displayed slightly above the middle point of the screen, with the other task slightly below the center point. The maximum visual angle was approximately 4° vertical for the PM-GR combination and 4.25° horizontal for GR. Two days later, subjects completed the psychometric tests in small group sessions of 2-hr duration, including two 5-min rest breaks.

Confirmatory Factor Analyses
The confirmatory factor analyses were performed with LISREL VI (Jöreskog & Sörbom, 1984). CFA with LISREL involves the specification and estimation of one or more factor models, each consisting of latent variables (i.e., factors) proposed to account for correlations among measured variables. Models are specified by fixing or constraining elements in matrices analogous to the factor pattern matrix, factor correlation matrix, and communalities from a common factor analysis (see Marsh & Hocevar, 1985, for details on the specification of factor models with LISREL). Elements in these matrices which are not fixed or constrained are "free" and estimated by LISREL. LISREL computes standard errors and critical ratios (i.e., t values) for each estimated parameter. Several indices of overall model fit are provided, and many more can be calculated from LISREL output (see Marsh, Balla, & McDonald, 1988). For this study, the evaluation of model adequacy emphasized substantive and practical criteria (e.g., parameter estimates within permissible ranges) and the following indices:

1. The χ²/d.f. ratio provides information on the relative efficiency of alternative models in accounting for the data. Values of 2.0 or less are interpreted to represent adequate fit.
2. The root mean square residual (RMSR; Jöreskog & Sörbom, 1984) is a measure of average residual correlation. Smaller values (e.g., .10 or less) are reflective of better fit.
3. The Tucker-Lewis Index (TLI) scales the chi-square from 0 to 1, with 0 representing the fit of a null model (Bentler & Bonett, 1980), which assumes that the variables are uncorrelated, and 1 representing the fit of a perfectly fitting model. Larger values indicate better fit. Of the more than 30 fit indices examined by Marsh et al. (1988), only the TLI was relatively independent of sample size.
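For readers who want the arithmetic behind these indices, here is a minimal sketch of how they can be computed from a model's chi-square, its degrees of freedom, the corresponding null-model values, and the residual correlations. The function names are mine, not LISREL output, and the RMSR is written as a root mean square of the residual correlations; the nested chi-square difference test used later to compare models is included as well.

```python
import numpy as np
from scipy.stats import chi2

def fit_indices(chisq, df, chisq_null, df_null, residual_corrs):
    """Chi-square/df ratio, RMSR, and Tucker-Lewis Index as described above."""
    ratio = chisq / df                                          # <= 2.0 read as adequate fit
    rmsr = float(np.sqrt(np.mean(np.square(residual_corrs))))   # smaller (<= .10) is better
    null_ratio = chisq_null / df_null
    tli = (null_ratio - ratio) / (null_ratio - 1.0)             # 0 = null model, 1 = perfect fit
    return ratio, rmsr, tli

def chisq_difference(chisq_restricted, df_restricted, chisq_full, df_full):
    """Nested model comparison: difference chi-square, difference df, and p value."""
    d_chi, d_df = chisq_restricted - chisq_full, df_restricted - df_full
    return d_chi, d_df, chi2.sf(d_chi, d_df)
```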

Factor Models
In addition to the null model, the performance variable factor models included: (1) a one-factor, "general" performance ability model; (2) a three-factor oblique model with latent dimensions corresponding to visual perceptual input (PM), central processing (MS, GR), and motor output (UT) resources; (3) a model with correlated factors representing the four single task abilities; and (4) a TS model, which added an orthogonal TS factor to the four-factor model. For this model, loadings were estimated for each dual task variable on the TS factor, in addition to the relevant single task factors. Loadings of the four single tasks on the TS factor were fixed at zero. A schematic of the TS model's loading pattern is sketched below.

For the psychometric test analyses, the "target" model hypothesized three factors (gf, gc, and gs). In addition, because CFA of this model revealed moderate to large interfactor correlations (see Results and Discussion), a model hypothesizing a second-order general ability factor was assessed also.
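The sketch below is a schematic of the TS model's loading pattern only (an illustration of the specification, not LISREL syntax): it lists the freely estimated loadings for each measured variable; everything not listed is fixed at zero, the four single task factors are free to intercorrelate, and the TS factor is constrained to be uncorrelated with them.

```python
# Freely estimated loadings in the TS model. Each dual task component (paired task
# in parentheses) loads on its own single task factor plus the orthogonal TS factor;
# the four single tasks load only on their own factors (TS loadings fixed at zero).
FREE_LOADINGS = {
    "MS": ("MS",), "GR": ("GR",), "PM": ("PM",), "UT": ("UT",),
    "MS(UT)": ("MS", "TS"), "UT(MS)": ("UT", "TS"),
    "MS(PM)": ("MS", "TS"), "PM(MS)": ("PM", "TS"),
    "MS(GR)": ("MS", "TS"), "GR(MS)": ("GR", "TS"),
    "UT(PM)": ("UT", "TS"), "PM(UT)": ("PM", "TS"),
    "PM(GR)": ("PM", "TS"), "GR(PM)": ("GR", "TS"),
    "UT(GR)": ("UT", "TS"), "GR(UT)": ("GR", "TS"),
}
# Factor covariance structure: MS, GR, PM, and UT correlated; TS orthogonal to all four.
```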

RESULTS AND DISCUSSION

Reliability Analyses
Psychometric Tests. Part 1/Part 2 reliabilities for the psychometric tests (see Table 1) were generally adequate, with the exception of Opposites. Despite its relatively low reliability, this test was included in subsequent multivariate analyses because it was necessary for identification of gc.

Information-Processing Tasks. For GR and MS, the response measure was the number correct, which is sensitive to both speed and accuracy (Wickens et al., 1981). For PM and UT, the response measures were correct response time and root mean square error, respectively. Session 2/Session 3 reliabilities for the single tasks and dual task component tasks (see Table 2) were quite good, with the exception of PM. The poor reliabilities for PM can be attributed in part to intersession changes in component task priorities and PM response strategies (see discussion of practice effects). Reliabilities for some of the tasks were higher in the dual task combinations than when the tasks were performed alone. Wickens et al. (1981) reported similar results.
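For reference, the two reliability estimates used in this section reduce to simple formulas; a minimal sketch follows (the half-test correlation value in the example is hypothetical, not taken from Table 1):

```python
import numpy as np

def spearman_brown(r_half):
    """Corrected Part 1/Part 2 (split-half) reliability via the Spearman-Brown formula."""
    return 2.0 * r_half / (1.0 + r_half)

def intersession_reliability(session2, session3):
    """Session 2/Session 3 reliability is simply the correlation between the two sessions."""
    return float(np.corrcoef(session2, session3)[0, 1])

print(spearman_brown(0.72))   # a Part 1/Part 2 correlation of .72 corrects to about .84
```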

Practice Effects
There were no significant Session 2/Session 3 changes (see Table 2) in single task performance of MS, GR, and UT, but PM response times increased significantly.


TABLE 2
Session 2/Session 3 Reliability Coefficients and Repeated Measures t Tests for the Information-Processing Single Tasks and Dual Task Components

                          Session 2          Session 3
Variable        r         M       SD         M       SD          t

Single Tasks
MS             .80     210.8     34.8      208.8     32.6       0.85
GR             .95      72.4     25.3       73.5     27.5      -1.16
PM             .68       3.6      1.5        4.0      1.5      -3.23*
UT             .75      10.1      6.9        9.1      6.0       1.85

Dual Task Components
MS (GR)        .90      83.3     26.9       86.7     31.3      -2.27*
MS (PM)        .80     154.5     38.6      160.3     40.7      -2.09*
MS (UT)        .86     181.6     37.8      184.6     41.9      -1.25
GR (MS)        .95      37.9     15.0       37.7     15.3       0.33
GR (PM)        .94      51.2     22.0       53.6     22.6      -2.80*
GR (UT)        .95      65.2     26.4       65.1     27.2       0.14
PM (MS)        .49       4.0      1.5        4.0      1.5      -0.06
PM (GR)        .30       3.9      1.2        4.1      1.4      -1.44
PM (UT)        .40       3.5      1.1        3.5      1.3       0.43
UT (MS)        .88      16.8      9.3       14.3      9.0       4.98*
UT (GR)        .84      14.5      8.0       13.3      8.0       2.37*
UT (PM)        .81      12.3      6.9       11.9      7.6       0.80

Note. For each dual task component, the paired task is enclosed in parentheses. The response measure for all MS and GR variables is number correct; higher scores indicate better performance. The response measures for PM and UT are correct response time and root mean square error, respectively; for these variables, higher scores reflect poorer performance.
*p < .05

In the dual task conditions, 5 of the 12 dual task components improved significantly from Session 2 to Session 3. There were no significant practice effects for tasks paired with UT; however, UT performance improved significantly in two of the three dual task combinations. Within sessions, single to dual performance decrements were smallest when UT was the paired task, perhaps because UT draws on a different "pool" of resources than do the other tasks (Wickens, 1984b).

Performance of MS and GR--when paired with PM--improved significantly from Session 2 to Session 3, but there were no changes in PM response time. One possibility is an asymmetric tradeoff strategy; that is, subjects allocated additional attentional resources to MS and GR, which are self-paced tasks, at the expense of PM. This is certainly plausible. Subjects knew that a PM signal lasted up to 12 s; therefore, they could devote full attention to the self-paced task for several seconds, and still have ample time to detect and respond to the PM signal.


A related interpretation is that subjects did improve PM performance, but by reducing the number of "false alarms" (i.e., responses to nonexistent signals), rather than by increasing response speed. As noted in the task descriptions, the PM instructions directed subjects to respond quickly, but not until they were confident that a signal was occurring. Also, PM false alarms decreased slightly from Session 2 to Session 3, and intrasession correlations between PM response time and number of false alarms were all negative, ranging from -.204 to -.281. There is some evidence, then, that subjects were emphasizing accuracy over speed in PM performance.

Within-Battery Analyses

Psychometric Tests. CFA of the psychometric data showed that the three-factor oblique model provided an excellent fit to the data (χ²/d.f. = 1.34, RMSR = .06, TLI = .93). The interfactor correlations were quite large (ranging from .510 to .792), reflecting the "positive manifold" indicative of a second-order factor. A model with a second-order general ability factor provided a good representation of correlations among the first-order factors.¹ Estimated loadings of the tests on their respective first-order factors and of the first-order factors on the second-order general ability factor are displayed in Table 3. Consistent with recent studies of the Cattell-Horn model (e.g., Gustafsson, 1984), gf had the highest loading on the general factor. However, the loading of gs on the general factor was somewhat larger than is typically found (see Undheim & Gustafsson, 1987). Evidently, the general factor defined by this sample of tests was skewed somewhat toward gs.

¹For second-order factor models which posit three first-order factors and a single second-order factor, the number of first-order factor correlations fixed at zero--three--is the same as the number of parameters used to identify the second-order factor; therefore, the first- and second-order models fit the data equally well (Marsh & Richards, 1987).

Information-Processing Tasks. Eight of the 12 dual task components were skewed significantly (p < .05). As a first step, then, the dual task component scores were transformed into normalized standard scores (stanines). Then, because there is no consensus regarding the most appropriate way to represent dual task performance in TS studies, three sets of analyses were performed. In the first set of analyses, the dual task component scores were multiplied to form dual task product composites. Ackerman (personal communication, December 18, 1985) recommends this procedure to penalize asymmetric tradeoff in dual task conditions. However, Kenny and Judd (1984) have argued that product latent variables are not normally distributed. They caution against the use of estimation procedures, such as maximum likelihood, which assume multivariate normality. Consequently, a second set of analyses was performed on additive composites formed by summing the normalized dual task component scores. A third set of analyses was performed on the normalized dual task component scores to provide insight into possible asymmetric task tradeoff and to facilitate interpretation of the composite variable results.


TABLE 3
Standardized Maximum Likelihood Parameter Estimates for the Psychometric Test Second-Order Model

                              First-Order Factor Loadings
Test                          gf        gc        gs
Culture Fair                 .680*       0         0
Map Planning                 .780        0         0
Choosing a Path              .652        0         0
Vocabulary                    0         .770*      0
Opposites                     0         .643       0
Finding A's                   0          0        .519
Number Comparison             0          0        .647*
Aiming                        0          0        .478
Second-Order Loadings        .923*      .647      .787

Note. All estimated loadings were statistically significant (p < .05). Zero values represent fixed parameters; asterisks (*) indicate parameters fixed at 1.0 in the original solution.

Prior to the analyses, the PM and UT response measures, mean correct response time and root mean square error, respectively, were reflected so that for all analyses, higher scores indicated better performance. Finally, to assess the influence of the normalizing transformation on the results, the product composite, additive composite, and component score analyses were performed also on non-normalized dual task scores. With only minor exceptions, the results were similar to those described below for the normalized scores.²

For the product composite analyses, a model with correlated factors corresponding to the four single tasks, with the dual task composites loading on their respective single task factors, provided a good fit to the data (χ²/d.f. = 1.84, RMSR = .04, TLI = .94). The Ackerman et al. (1982) TS model, which added an orthogonal TS factor to the four correlated factors, resulted in a statistically significant improvement in fit over the four-factor model (difference χ²(6) = 21.45, p < .01; χ²/d.f. = 1.22, RMSR = .04, TLI = .98).³

²Zero-order correlation matrices and fit indices for all analyses are available upon request from the author.
³The initial analysis of the product composite TS model produced a uniqueness estimate of -.034 (i.e., a Heywood case) for MS-PM. Following the recommendations of Dillon, Kumar, and Mulani (1987) for situations in which an improper estimate can be attributed to sampling fluctuations, rather than model misspecification, the TS model was reestimated with the negative uniqueness fixed at zero.
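To make the three dual task scoring schemes described above concrete, here is a minimal sketch; the data are simulated placeholders, and because the paper does not spell out its stanine procedure, a standard percentile-based assignment is assumed.

```python
import numpy as np

def to_stanines(x):
    """Normalized standard scores (stanines, 1-9) assigned from percentile ranks."""
    pct = (np.argsort(np.argsort(x)) + 0.5) / len(x) * 100
    return np.searchsorted([4, 11, 23, 40, 60, 77, 89, 96], pct, side="right") + 1

rng = np.random.default_rng(0)
ms_pm = rng.normal(150, 40, 81)     # hypothetical MS number-correct scores, paired with PM
pm_ms = -rng.normal(4.0, 1.5, 81)   # hypothetical PM response times, reflected (higher = better)

s1, s2 = to_stanines(ms_pm), to_stanines(pm_ms)
product_composite = s1 * s2         # penalizes asymmetric tradeoff between the paired tasks
additive_composite = s1 + s2        # simple sum of the normalized component scores
# The component score analyses use s1 and s2 themselves as separate indicators.
```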


Estimated loadings of the single tasks on their target factors were statistically significant (p < .05) and, with the exception of MS-GR's estimated loading on MS, all of the dual task composites had significant loadings on their respective single task factors. MS-PM and MS-GR had significant loadings on the TS factor, three of the composites had positive but small loadings, and UT-PM had a nonsignificant negative loading. There were significant interfactor correlations between MS and GR (r = .725), and between GR and PM (r = .295).

Patterns of fit indices for the component score analyses were similar to those reported for the product composite scores. The fit of the TS model (χ²/d.f. = 1.60, RMSR = .07, TLI = .94) was statistically superior (difference χ²(12) = 62.21, p < .01) to that of the four-factor model (χ²/d.f. = 2.04, RMSR = .08, TLI = .89), which in turn provided a better fit than did the one- and three-factor models. However, the parameter estimates shown in Table 4 provide a somewhat different picture of the TS factor.

TABLE 4
Maximum Likelihood Parameter Estimates for the TS Model: Component Score Analyses

Factor Loadings
Variable     Memory     Grammatical   Probability   Unstable     Time
             Search     Reasoning     Monitoring    Tracking     Sharing
MS           .918*        0             0             0            0
GR            0          .977*          0             0            0
PM            0           0            .871*          0            0
UT            0           0             0            .831*         0
MS (UT)      .867*        0             0             0           .123
UT (MS)       0           0             0            .946*        .087
MS (PM)      .783*        0             0             0           .534*
PM (MS)       0           0            .643*          0          -.363*
MS (GR)      .658*        0             0             0           .189
GR (MS)       0          .879*          0             0           .036
UT (PM)       0           0             0            .919*        .269*
PM (UT)       0           0            .697*          0          -.241*
PM (GR)       0           0            .564*          0          -.471*
GR (PM)       0          .902*          0             0           .287*
UT (GR)       0           0             0            .968*        .061
GR (UT)       0          .960*          0             0           .014

Factor Correlations
        MS       GR       PM       UT       TS
MS      --
GR     .742*     --
PM     .178     .215      --
UT     .237*    .126    -.039      --
TS      0        0        0        0        --

Note. Zero values represent fixed parameters.
*p < .05


TABLE 5
Estimated Correlations Between the Ability and Information-Processing Factors: Component Score Analyses

                      First-Order Factors            Second-Order
Task Factors       gf         gc         gs            Factor
MS                .451*      .370*      .721*           .587*
GR                .620*      .644*      .718*           .767*
PM                .224*      .155       .441*           .306*
UT                .124       .025       .153            .133
TS                .093       .002       .044            .078

*p < .05

The loadings for MS, GR, and UT on the TS factor were positive and statistically significant when these tasks were paired with PM, but the TS factor loadings for PM--when paired with each of the other three tasks--were negative and statistically significant. As noted earlier, this can be attributed in part to asymmetric dual task tradeoff, speed-accuracy tradeoffs in PM responding, or both. Similarly, Jennings and Chiles (1977) reported that the only variables having appreciable loadings on their TS factor were visual monitoring tasks performed in multiple task combinations.

Attempts to fit the TS model for the dual task additive composite scores failed to produce a converged solution. Specifically, standard errors associated with MS-GR's estimated uniqueness and loading on the TS factor were unacceptably large, indicating that the model was misspecified for these data.

Interbattery Analyses

With the matrix of correlations among the eight psychometric tests and 16 component analysis variables as input, a final LISREL analysis assessed relationships between the ability and performance factors. This was accomplished by specifying a model in which parameter estimates for the two sets of variables were fixed at the values estimated for the second-order ability and component variable TS models. Then, correlations between the two sets of factors were estimated. The purpose of this procedure was to preclude the ability variables influencing the performance variable factor structure, and vice versa (see Lansman, Donaldson, Hunt, & Yantis, 1982).

The estimated interbattery factor correlations, presented in Table 5, show that MS and GR were significantly correlated with all three first-order ability factors and the second-order general ability factor.⁴ PM was significantly correlated with gf, gs, and the second-order factor.

⁴Because of model identification problems, correlations of the information-processing factors with the first- and second-order ability factors were assessed in two separate analyses.


As noted earlier, the second-order factor was skewed somewhat toward gs, and consequently, was correlated most highly with factors identified by tasks emphasizing speed of responding.⁵ Finally, both UT and TS were uncorrelated with the ability factors.

CONCLUSIONS

The present study did not provide supportive evidence for a general TS ability. In the CFA of dual task product composite scores, only two of the composites had significant loadings on the TS factor. However, CFA of the dual task component scores did identify a TS factor specific to task combinations which included PM as one of the component tasks. Because the TS factor was orthogonal to the single task ability factors, the dual task performance differences cannot be attributed to the single task abilities. Rather, there were subject differences in PM response strategies, which in turn reflected: (a) the priorities assigned to PM and the paired task; and/or (b) the relative importance of speed and accuracy in PM responding. Consistent with the former, Jennings and Chiles (1977, p. 546) hypothesized that their visual monitoring TS factor represented a "higher-order" process, reflecting differences in the "... ability of the subject to shift attention from a higher priority task to scanning and detecting signals on the lower priority [i.e., visual monitoring] tasks."

At least three factors contributed to the failure to identify a general TS ability. First, because all of the tasks were presented visually, and responded to manually, subjects may have had difficulty time sharing them (see McCleod, 1977) and resorted instead to an alternating strategy (Damos, Smist, & Bittner, 1983). In particular, the "speededness" of MS, GR, and, to a lesser extent, PM, may have encouraged a switching strategy which precluded "true" time sharing. If so, the most efficient dual task performers were not time sharing per se, but quickly mastered the component tasks and simply alternated rapidly between them. Consistent with this interpretation, first-order gs was correlated significantly with the MS, GR, and PM factors.

Second, even though subjects spent a total of 4 hours in task training and performance, it may be that their performance did not reach the "differential stability" (i.e., high, stable, intersession correlations) that Damos et al. (1981) argued is necessary for identification of a TS ability. Five of the 12 dual task components showed significant intersession improvement. This suggests that subjects were still in the early stages of task acquisition and, therefore, their performance was not differentially stable. Also, Ackerman's (1986, 1987) model of skill learning predicts that general abilities should be highly associated with complex task performance in the early stages of acquisition.

⁵This interpretation was suggested by Christopher Hertzog in his review of an earlier version of this article.


In the present study, the factor defined by the GR task (the most complex task of the four used in this experiment) and dual tasks including GR as a component task, generally had the highest estimated correlations with the ability factors. This is additional evidence that subjects were in the early stages of task acquisition. Finally, TS ability might be more important when resources in the same "pool" must be time shared (Wickens, 1984b). Because none of the four tasks used in this study were drawn from the same resource pool and identity task pairs were not included (e.g., MS-MS), the influence of a general TS ability may have been minimized.

These results do not rule out a second-order general TS ability defined by correlated process-specific TS abilities, analogous to R.J. Sternberg's (1977) metacomponent/performance component distinction. To investigate such a model, however, would require sufficient marker variables to identify--at minimum--three first-order factors, and a sample size large enough to generate stable parameter estimates (see Anderson & Gerbing, 1984; Boomsma, 1985). In addition, the results from this study show that a variety of factors, including the kinds of tasks selected, and the way in which dual task performance is scored, determine whether a TS factor is likely to emerge. For now, the most profitable strategy is continued investigation of process-specific TS abilities (see Braune & Wickens, 1986), such as the visual monitoring TS factor found in this study and the Jennings and Chiles (1977) investigation. The evidence from such studies would provide insight into whether a general TS ability exists, and if so, the form it is likely to have.

REFERENCES

Ackerman, P.L. (1986). Individual differences in information processing: An investigation of intellectual abilities and task performance during practice. Intelligence, 10, 101-139.
Ackerman, P.L. (1987). Individual differences in skill learning: An integration of psychometric and information-processing perspectives. Psychological Bulletin, 102, 3-27.
Ackerman, P.L., Schneider, W., & Wickens, C.D. (1982). Individual differences and time-sharing ability: A critical review and analysis (Rep. No. 8102). Champaign: University of Illinois, Human Attention Research Laboratory.
Ackerman, P.L., & Wickens, C.D. (1982, October). Methodology and the use of dual and complex task paradigms in human factors research. Proceedings of the Human Factors Society Twenty-Sixth Annual Meeting, 354-358.
Anderson, J.C., & Gerbing, D.W. (1984). The effect of sampling error on convergence, improper solutions, and goodness-of-fit indices for maximum likelihood confirmatory factor analysis. Psychometrika, 49, 155-173.
Baddeley, A.D. (1968). A 3-minute reasoning test based on grammatical transformation. Psychonomic Science, 10, 341-347.
Bentler, P.M., & Bonett, D.G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88, 588-606.


Boomsma, A. (1985). Nonconvergence, improper solutions, and starting values in LISREL maximum likelihood estimation. Psychometrika, 50, 229-242.
Braune, R., & Wickens, C.D. (1986). Time-sharing revisited: Test of a componential model for the assessment of individual differences. Ergonomics, 29, 1399-1414.
Carroll, J.B. (1980). Individual differences in psychometric and experimental cognitive tasks (Rep. No. 163). Chapel Hill: University of North Carolina, The L.L. Thurstone Psychometric Laboratory.
Cattell, R.B., & Cattell, A.K.S. (1973). Manual for the culture fair intelligence test, scales 2 and 3. Champaign, IL: Institute for Personality and Ability Testing.
Chiles, W.D., Alluisi, E.A., & Adams, O.S. (1968). Work schedules and performance during confinement. Human Factors, 10, 143-196.
Damos, D.L. (1978). Residual attention as a predictor of flight performance. Human Factors, 20, 435-440.
Damos, D.L., Bittner, A.C., Kennedy, R.S., & Harbeson, M.M. (1981). Effects of extended practice on dual-task tracking performance. Human Factors, 23, 627-631.
Damos, D.L., Smist, T.E., & Bittner, A.C. (1983). Individual differences in multiple-task performance as a function of response strategy. Human Factors, 25, 215-226.
Dillon, W.R., Kumar, A., & Mulani, N. (1987). Offending estimates in covariance structure analysis: Comments on the causes and solutions to Heywood cases. Psychological Bulletin, 101, 126-135.
Ekstrom, R.B., French, J.W., Harman, H.H., & Dermen, D. (1976). Manual for kit of factor-referenced cognitive tests. Princeton, NJ: Educational Testing Service.
Fleishman, E.A. (1965). The prediction of total task performance from prior practice on task components. Human Factors, 7, 18-27.
Fogarty, G. (1987). Timesharing in relation to broad ability domains. Intelligence, 11, 207-231.
Forester, J.A. (1983). An investigation of time-sharing as a general ability. Unpublished doctoral dissertation, University of New Mexico, Albuquerque.
Freedle, D.O., Zavala, A., & Fleishman, E.A. (1968). Studies of component-total task relations: Order of components, total task practice, and total task predictability. Human Factors, 10, 283-296.
Gustafsson, J.E. (1984). A unifying model for the structure of intellectual abilities. Intelligence, 8, 179-203.
Hakstian, A.R., & Cattell, R.B. (1976). Manual for the comprehensive ability battery. Champaign, IL: Institute for Personality and Ability Testing.
Hawkins, H.L., Rodriguez, E., & Reicher, G.M. (1979). Is time-sharing a general ability? (Rep. No. 3). Eugene: University of Oregon, Center for Cognitive and Perceptual Research.
Horn, J.L. (1967). On subjectivity in factor analysis. Educational and Psychological Measurement, 27, 811-820.
Horn, J.L., & Cattell, R.B. (1966). Refinement and test of the theory of fluid and crystallized intelligence. Journal of Educational Psychology, 57, 253-270.
Jennings, A.E., & Chiles, W.D. (1977). An investigation of time-sharing ability as a factor in complex performance. Human Factors, 19, 535-547.
Jex, H.R., McDonnell, J.D., & Phatak, A.V. (1966). Critical tracking task for manual control research. IEEE Transactions on Human Factors Engineering, 7, 138-145.
Jöreskog, K.G., & Sörbom, D. (1984). LISREL VI: Analysis of linear structural relationships by the method of maximum likelihood. Mooresville, IN: Scientific Software, Inc.
Kenny, D.A., & Judd, C.M. (1984). Estimating the nonlinear and interactive effects of latent variables. Psychological Bulletin, 96, 201-210.
Lane, D.M. (1982). Limited capacity, attention allocation, and productivity. In W.C. Howell & E.A. Fleishman (Eds.), Human performance and productivity: Vol. 2. Information processing and decision making. Hillsdale, NJ: Erlbaum.


Lansman, M., Donaldson, G., Hunt, E., & Yantis, S. (1982). Ability factors and cognitive processes. Intelligence, 6, 347-386.
Marsh, H.W., Balla, J.R., & McDonald, R.P. (1988). Goodness-of-fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin, 103, 391-410.
Marsh, H.W., & Hocevar, D. (1985). Application of confirmatory factor analysis to the study of self-concept: First- and higher-order factor models and their invariance across groups. Psychological Bulletin, 97, 562-582.
Marsh, H.W., & Richards, G.E. (1987). The multidimensionality of the Rotter I-E Scale and its higher order structure: An application of confirmatory factor analysis. Multivariate Behavioral Research, 22, 39-69.
McCleod, P. (1977). A dual task response modality effect: Support for multiprocessor models of attention. Quarterly Journal of Experimental Psychology, 29, 651-677.
McQueen, E.N. (1917). The distribution of attention. British Journal of Psychology II (Monograph Supplements 5).
North, R.A., & Gopher, D. (1976). Measures of attention as predictors of flight performance. Human Factors, 18, 1-14.
Shingledecker, C.A. (1984). A task battery for applied human performance assessment research (Rep. No. 84-071). Wright-Patterson AFB, OH: Harry G. Armstrong Aerospace Medical Research Laboratory.
Sternberg, R.J. (1977). Intelligence, information processing, and analogical reasoning: The componential analysis of human abilities. Hillsdale, NJ: Erlbaum.
Sternberg, S. (1969). Memory scanning: Mental processes revealed by reaction time experiments. American Scientist, 57, 421-457.
Sverko, B. (1977). Individual differences in time-sharing performance (Rep. No. ARL-774/AFOSR-77-4). Savoy: University of Illinois, Institute of Aviation, Aviation Research Laboratory.
Sverko, B., Jerneic, Z., & Kulenovic, A. (1983). A contribution to the investigation of time-sharing ability. Ergonomics, 26, 151-160.
Undheim, J.O., & Gustafsson, J.E. (1987). The hierarchical organization of cognitive abilities: Restoring general intelligence through the use of linear structural relations (LISREL). Multivariate Behavioral Research, 22, 149-171.
Wickens, C.D. (1984a). Engineering psychology and human performance. Columbus, OH: Charles E. Merrill.
Wickens, C.D. (1984b). Processing resources in attention. In R. Parasuraman & R. Davies (Eds.), Varieties of attention. New York: Academic.
Wickens, C.D., Mountford, S.J., & Schreiner, W. (1981). Multiple resources, task-hemispheric integrity, and individual differences in time-sharing. Human Factors, 23, 211-229.