Book reviews
because he believed that it ascribed a god-like rationality to human beings: in particular, with respect to the assumed metric level of evaluation and probability judgement. The traditional theory assumes that both are on at least an interval scale, and this is also assumed by the new models. I have argued elsewhere (Ranyard, 1990) that the upper limit of human uncertainty and value judgement is probably an ordinal metric of some kind. Whether or not this is true, it is important that the validity of the metric-level assumptions of the new utility models be examined. The discussants do not raise this issue in the final chapter, but they do raise a number of equally important ones. In particular, they suggest that the new models need to pay more attention to the nature of the mental representations and cognitive processes underlying decisions. Perhaps the durability of Prospect Theory has more to do with the important insights it has contributed on these matters than with its precise value and probability weighting functions. This volume brings together many important recent developments in utility theory, although, regrettably, it does not include an account of the new Prospect Theory. Nevertheless, it will be of considerable interest to psychological researchers who want to assess the contribution of these models to our understanding of how individuals make decisions. A more complete assessment of them will have to wait until ordinal or qualitative alternatives to utility theory have developed sufficiently to allow meaningful comparisons.
Rob Ranyard, Bolton Institute of Higher Education, Deane Road, Bolton BL3 5AB, UK
References
Ranyard, R., 1990. The role of rational models in the decision process. In: K.J. Gilhooly, M.T.G. Keane, R.H. Logie and G. Erdos (Eds.), Lines of thinking, Vol. 1. Chichester: Wiley.
Simon, H.A., 1983. Reason in human affairs. Stanford, CA: Stanford University Press.
John B. Carroll, Human Cognitive Abilities: A survey of factor-analytic studies. Cambridge University Press, 1993. Pp. 819. ISBN 0-521-38712-4, paperback.
As a starting psychologist, one of my dearest dreams was to construct, out of the many hundreds of correlation matrices that the psychometric approach to cognition has produced, one complete matrix of the intercorrelations between all known measures of human cognitive ability. Call this dream R. Meta-synthetic methods (by analogy to meta-analytic ones) would be needed to piece together R from so many
only partly overlapping sources. No useful data should be left unused. Because the factor-analytic approach (for reasons of shortage of testing time) typically proceeds by concentrating on one or two ability domains at a time (e.g. spatial ability, reasoning), it is certain that even a century of research has left most cells of R empty. In my dream those empty cells were to be filled by a concerted international effort (think of all the young adult subjects idling in China!). One problem with this proposal is how to get the steering committee for my programme to agree on the rules for the inclusion or exclusion of data. When, for instance, do we consider one measure of word knowledge sufficiently equivalent to another (e.g. a different version of a test) to collapse the corresponding data? Do we restrict ourselves to young adults? And so on and so forth. Such agreement is of the utmost importance. One of the aims of constructing R is to have one shared databank of agreed-upon good quality, so that no theoretical or technical dispute should ever again be confounded by differences in the database, or by the non-availability of crucial data. Even if agreement on rules for good quality could be reached, the logistics of constructing R would certainly pose enormous problems; and what will a certain US senator say when we ask for funds to compute correlations we hope will be zero? Thus ended a dream. John B. Carroll (1916) seems to hold an ideal much like mine. Instead of trying to reconstruct R by piecing together known parts of R (and inserting new parts where needed), Carroll has tried to construct F, the matrix of loadings of "all" factors on "all" tests, by piecing together the results of factor analyses of the 461 datasets of ability measures in the cognitive domain that the factor-analytic approach has produced thus far. Carroll's aim was "to have them all", at least insofar as they passed certain tests.
If one succeeded in completely reconstructing F, obviously R could be computed too. Also, starting from R we could recover F. There is a difference, however. While the direct route to R unavoidably entails interpreting tests, at least far enough to be able to say that they are sufficiently the same in item type, the route via F necessitates judging the sameness both of tests and of factors. To reduce the indeterminacy inherent in this additional interpretative step, Carroll reanalysed all 461 datasets using strictly the same methodology and rules for making the well-known three decisions involved in exploratory factor analysis (EFA): how to estimate the communalities, how to decide on the number of factors, and by what criterion to rotate the axes. The complex but clear, utterly reasonable and impartial protocol he follows is explained in Ch. 3. This chapter alone makes the work worth buying. In itself it is an (almost) directly useful and principled account of how to conduct an exploratory factor analysis in any field. This introduction is followed by practical demonstrations in the form of extensively annotated re-analyses of three theoretically important datasets. The point of these demonstrations is to show how to proceed with EFA, and also to hammer home that how one proceeds can make important differences to the interpretation of the results. Of course, exactly those "important differences" are what many hold against exploratory factor analysis. They prefer confirmatory factor analysis (CFA).
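As an aside, the three EFA decisions just listed can be sketched in a few lines of Python. This is an illustration only, not Carroll's protocol: it uses one common choice for each decision (squared multiple correlations as starting communalities, Kaiser's eigenvalue rule for the number of factors), and the toy correlation matrix is invented.

```python
import numpy as np

def principal_axis_factoring(R, n_iter=200):
    """Iterative principal-axis EFA on a correlation matrix R.

    Illustrates the three classic EFA decisions:
    (1) communalities -- squared multiple correlations (SMC) as
        starting values, refined iteratively;
    (2) number of factors -- Kaiser's eigenvalue > 1 rule on R;
    (3) rotation -- skipped here; with one factor retained there
        is nothing to rotate.
    """
    R = np.asarray(R, dtype=float)
    # Decision 2: Kaiser criterion on the unreduced correlation matrix.
    n_factors = int(np.sum(np.linalg.eigvalsh(R) > 1.0))
    # Decision 1: initial communalities from squared multiple correlations.
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)          # reduced correlation matrix
        eigvals, eigvecs = np.linalg.eigh(R_reduced)
        idx = np.argsort(eigvals)[::-1][:n_factors]  # largest eigenpairs
        loadings = eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))
        h2 = np.sum(loadings**2, axis=1)         # refined communalities
    # Cosmetic: fix signs so each factor's loadings sum positive.
    return loadings * np.sign(loadings.sum(axis=0)), n_factors

# A made-up population correlation matrix generated by one common
# factor with loadings (.8, .7, .6, .5).
true_loadings = np.array([0.8, 0.7, 0.6, 0.5])
R = np.outer(true_loadings, true_loadings)
np.fill_diagonal(R, 1.0)

loadings, n_factors = principal_axis_factoring(R)
```

With only one factor retained the third decision is moot; with two or more, a rotation criterion such as Varimax would then be applied to the retained loadings, and, as the demonstrations in Ch. 3 show, each of these choices can change the interpretation of the result.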
Such researchers, too, should read and ponder this chapter, to learn not to underestimate the foe. In the ten following chapters (400 pages) Carroll then surveys the "first-order" cognitive ability factors emerging from direct reanalysis of the correlation matrices in eleven clusters of datasets: factors in language tests, measures of reasoning, memory and learning, and so on to the psychomotor domain. About 70 correlated first-order factors pass the conservative tests of the protocol followed. Carroll does not go so far as to actually construct F (by pooling loadings from different datasets), but in principle the work is done. But there is more. In Carroll's methodology a first-order analysis is, when the first-order factors correlate highly enough, routinely extended to a full hierarchical factor analysis, up to the third order (or stratum). The last step of this analysis is the orthogonalization of the factors of all orders together, following the procedure of Schmid and Leiman. An example will show what this step amounts to. Consider a Vocabulary test. In a first-order analysis this item type typically has its highest loading (+0.70) on a Verbal Comprehension factor, and no other significant loadings. The Verbal factor, however, shares variance with other factors of the achievement type, e.g. Math Achievement. This covariance is the basis of a second-order factor: so-called Crystallized Intelligence. This second-order factor correlates positively with other second-order factors (e.g. Gf, Reasoning), so a third-order factor can be computed: G. The orthogonalization procedure, in the case of the Vocabulary test, means that the common variance (h²) of the test is divided into three main parts. The first part is the variance accounted for by the general factor G. The second part is that part of the common variance of the test that is accounted for by the part of the variance of Crystallized Intelligence that is independent of general intelligence.
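For concreteness, the arithmetic of this partition (including the residual verbal part described next) can be sketched numerically. Only the 0.70 first-order loading comes from the example above; the two higher-order loadings are invented for illustration, not taken from Carroll:

```python
# Schmid-Leiman orthogonalization for a single test, with hypothetical
# numbers: 0.70 is the first-order loading from the example; the
# higher-order loadings (0.80 and 0.70) are assumed for illustration.
vocab_on_verbal = 0.70   # Vocabulary on Verbal Comprehension (1st order)
verbal_on_gc = 0.80      # Verbal on Crystallized Intelligence (2nd order, assumed)
gc_on_g = 0.70           # Gc on the general factor G (3rd order, assumed)

# Each orthogonalized loading is the product of the loadings along its
# path, with the top-most link replaced by its residual where needed.
load_g = vocab_on_verbal * verbal_on_gc * gc_on_g
load_gc_residual = vocab_on_verbal * verbal_on_gc * (1 - gc_on_g**2) ** 0.5
load_verbal_stub = vocab_on_verbal * (1 - verbal_on_gc**2) ** 0.5

# The squared orthogonalized loadings partition the variance the Verbal
# factor explained in the Vocabulary test (0.70**2 = 0.49) into the
# three parts described in the text: G, Gc-independent-of-G, and the
# verbal "stub".
parts = [load_g**2, load_gc_residual**2, load_verbal_stub**2]
```

Note that the three parts sum exactly to the 0.49 the Verbal factor accounted for before orthogonalization: the procedure redistributes variance over the strata, it does not create or destroy any.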
The last part of the Vocabulary variance is the part explained by the unique variance in the Verbal factor, left after the second- and third-order factors have been given their due. Cattell called this a stub factor (after what is left in a used cheque book). The higher the intercorrelations of the first-order factors, the lower the loadings in the purified (cleaned out!) first-order factors will be, compared to the original first-order solution. Appendix B of the book consists of 468 ASCII files recorded on three 3.5-inch high-density disks. Each file contains a hierarchical factor matrix for one of the datasets reanalysed. The disks are sold separately and in fact are not under review here. Still, I strongly believe that every psychology department should buy a set (Macintosh disc ISBN 0 521 4478 9). I consider these hierarchical analyses, described in chapters 15 and 16, to be the most important part of Carroll's monumental work. It may sound a bit strange, but one of the strongest arguments for this judgement is that this summing up of the hierarchical picture largely confirms the more intuitive ideas of so many before him, especially those of R.B. Cattell. That means that at least a large part of the agenda of endless disputation in differential psychology (G or no G? etc.) can now be considered concluded. What has psychology learned from all this work? Was it worth the better part of a century of research, worth the time of 131,571 subjects? Obviously, both "psychology" and "worth" are as multidimensional as intelligence, so there is not
one answer but there are many. Much that is pertinent is discussed in the last two chapters of the book, on Issues about abilities and Implications. Many part-answers are technical in nature and of most interest to only certain segments of the readership. Examples: "You should not think of the Raven as a pure test of fluid intelligence (Gf), but as a complex test of a certain kind of reasoning ability", or: "No factor was found that could have been interpreted as the capacity of working memory". But conclusions of a more general type can be drawn too. Tests of cognitive ability measure - and do that well - the level and speed of people's performances on certain tasks. So they measure things about behaviour, and those measurements have, as we know, a shockingly high predictive power. What such tests measure, however, are not in any direct sense interesting underlying capacities of the cognitive system per se. Also: intelligence is indeed a many-splendored thing. About 70 different cognitive factors can be reliably measured. The more cognitive (meaning: less perceptual) of those factors tend to fall into six clusters, which are positively correlated, the two most cognitive of them (meaning: most complex) even to a degree that in many analyses they could not be separated. A case can also be made for a third-stratum G, but that general factor is not important enough to dominate our thinking. Further: probably more factors on the first level can be found than the 70 or so that passed Carroll's conservative tests. Such factors would be more subtle than those that pass now, and will probably prove to be so highly correlated that an inordinate amount of testing time is needed for reliable measurement. The avenue of the "new abilities", whether of a Guilfordian or an experimental-psychological stripe, does not seem very promising. The book, though everybody will accept its monumental quality, has sides that may seem weak to some.
I, for instance, think the author was much too lenient in accepting datasets. Nor do I understand how he could miss so many Dutch datasets. Also, I think Varimax rotation is a rudderless device in a hierarchical positive manifold. I would further have liked to see what happens when confirmatory factor analysis is applied to a dataset like the one coded GUIL11, with Carroll's solution taken as the model to test. The author's rebuttal, though, could in all cases be something simple and sympathetic like: sorry about those datasets, but does that not provide you with a nice opportunity to do yourself what you think is still missing? One suggests that President de Gaulle should be on Mount Rushmore too, and then one is handed a chisel! This book should be read by everyone interested in the psychology of cognitive ability.
Jan Elshout
Faculty of Psychology University of Amsterdam Roetersstraat 15 1018 WB Amsterdam The Netherlands