Thinking Skills and Creativity 4 (2009) 116–123
Exploring the nature of divergent thinking: A multilevel analysis

Jörg-Tobias Kuhn∗, Heinz Holling
Westfälische Wilhelms-Universität Münster, Germany

Article history: Received 16 August 2007; received in revised form 22 May 2009; accepted 27 June 2009; available online 4 July 2009.

Keywords: Divergent thinking; Creativity; Multilevel analysis; Cognitive abilities; Gender differences

Abstract

In this study, a large sample with a clustered data structure from an educational context was utilized to analyze the relationship between cognitive abilities, school type, gender, and divergent thinking. The sample comprised 1098 students in 55 classrooms. A sequence of nested multilevel regression analyses revealed that processing capacity, as a core component of intelligence, as well as processing speed shared a significant amount of variance with divergent thinking. Additionally, school type affected divergent thinking even when controlling for intellectual abilities. Furthermore, a significant gender difference in divergent thinking favouring females was found. Gender differences as well as the effect of processing capacity and processing speed varied across classrooms, reflecting differences in the classroom environment. The nature of the relationships of cognitive and social factors to divergent thinking was considered. © 2009 Elsevier Ltd. All rights reserved.
1. Introduction

Divergent thinking (DT) tests are probably the most widely employed psychometric measures for the assessment of creative problem solving. In this context, DT has been defined as the ability to generate numerous and diverse ideas to open-ended questions (Runco, 1991, 2007). For example, in a typical verbal DT task, subjects are asked to produce unusual uses for common objects (e.g., bricks, newspapers). In contrast, classical intelligence tests, which mostly have only one single correct answer to each item, are usually seen as assessments of convergent thinking, which lies at the opposite end of a divergent–convergent continuum (Eysenck, 2003). Whereas some authors at least implicitly assume that creativity can be assessed with DT (Silvia et al., 2008), others perceive creativity as a complex or syndrome (Mumford & Gustafson, 1988). In the latter point of view, which is shared by most creativity researchers, DT is regarded as a predictor rather than a criterion of creativity (Runco, 2008). Although creativity and DT are not seen as synonymous by most authors, DT tests are considered adequately reliable and valid (Barron & Harrington, 1981), and they share at least partial unique variance with diverse performance criteria of creativity (e.g., Plucker, 1999; Runco, 1986; Vincent, Decker, & Mumford, 2002). Further, a recent meta-analysis revealed that DT is a significantly better predictor of creative achievement than intelligence (Kim, 2008).

The relationship of DT with cognitive abilities, especially with intelligence, has been a topic of considerable debate among researchers (Haensly & Reynolds, 1989; Sternberg & O'Hara, 1999). Correlations between measures of intelligence and DT tests vary widely, depending on the heterogeneity of the sample used and the tests administered. A recent meta-analysis (Kim, 2005) shed light on the large variety of results, yielding an average correlation of r = .17 between DT and intelligence tests.
DT–intelligence correlations were significantly lower for elementary school students than for other participants, whereas the Wallach–Kogan Divergent Thinking Tests (Wallach & Kogan, 1965) correlated significantly less (r = .12) with intelligence
∗ Corresponding author at: University of Muenster, Psychological Department IV, Fliednerstr. 21, D-48149 Münster, Germany. Tel.: +49 251 83 34127; fax: +49 251 83 39469. E-mail address: [email protected] (J.-T. Kuhn).
doi:10.1016/j.tsc.2009.06.004
than the Torrance Tests of Creative Thinking (r = .22; Torrance, 1966) or the Guilford divergent thinking tasks (r = .25; cf. Guilford, 1967). Interestingly, Kim (2005) found no support for the threshold theory, which assumes that below a critical IQ level (usually below 120), a correlation between DT and intelligence exists, which is supposed to disappear in samples with IQ levels higher than 120. The results of Kim (2005) are in line with recent studies failing to find empirical support for the threshold theory (Preckel, Holling, & Wiese, 2006; Sligh, Conners, & Roskos-Ewoldsen, 2005). Further, a recent study by Silvia (2008), using latent variable models instead of zero-order correlations and thus controlling for measurement error, found a substantial relationship between DT and intelligence (β = .43), underscoring the importance of intellectual abilities in explaining performance in DT and hence, creativity.

The relationship between DT and other cognitive abilities has been investigated with less vigour. This is surprising, because DT is most likely affected by cognitive factors other than intelligence as well. Speed of processing, for example, could be presumed to exert a strong positive influence on timed DT test scores, because faster processing allows the generation of more solutions to DT problems (cf. Carroll, 1993). In line with this assumption, Rindermann and Neubauer (2004) found a correlation of r = .30 between a trail-making test measuring processing speed and a verbal DT test in a sample of 271 students. Further, a correlation of r = .38 between a coding test and the verbal DT test was observed. Correlations between processing speed measures and another DT test (unusual product use), however, were much smaller in this study (r = .11 and r = .18, respectively). In another study, Fuqua, Bartsch, and Phye (1975) investigated the relationship between personal tempo and DT in a sample of 225 preschool-age children.
These authors assume that personal tempo can be directly related to impulsivity, which corresponds to the tendency to give quick and often premature answers to test items. Results showed that, in contrast to impulsive children, reflective children achieved higher DT scores, which might appear to contradict the evidence by Rindermann and Neubauer (2004) if reflective children are assumed to act more slowly than impulsive ones. However, Wallach and Kogan (1965) report a strong influence of task speededness on DT scores, where speededness refers to the strictness of the time limit to finish the task. These authors found that highly speeded DT tasks with strict time limits showed larger correlations with intelligence tests than unspeeded DT tasks with less strict time limits. Further, a recent finding reported by Vartanian, Martindale, and Kwiatkowski (2007) revealed that the relationship between speed of processing and creative potential is moderated by the degree of interference exhibited in the processing speed tasks, i.e. creative potential correlated more highly with processing speed tasks that required the inhibition of interference. Clearly, the relationship between speed of processing and DT requires further research.

Another cognitive ability possibly affecting DT is memory (Stein, 1989). Guilford (1950) noted that DT tests can be scored for three different facets: fluency (the total number of ideas generated), flexibility (the number of categories in the ideas), and originality (the number of unique or unusual ideas). If DT forms an important prerequisite of creative problem solving, then a large memory capacity will facilitate finding adequate solutions, especially if the similarity between learning situation and problem-solving situation is high or if task-relevant knowledge facilitates memorization (e.g., Runco, Dow, & Smith, 2006; Tulving & Thompson, 1973).
The capability to memorize which answers have already been given, or which categories have been produced, as well as the accessibility of memory traces in general, should be very helpful in achieving high DT test scores (cf. Batey, Chamorro-Premuzic, & Furnham, 2009).

Gender differences in DT have been found as well, although they have not been consistent. For example, Kim and Michael (1995) report significantly higher fluency scores for female students in one visual and two verbal creativity subtests from the Torrance Tests of Creative Thinking (TTCT; Torrance, 1990). In contrast, Dudek, Strobel, and Runco (1993) report an interaction effect of gender and socio-economic status (SES), with boys largely scoring higher than girls on the TTCT, depending on SES level. This finding was contradicted by results from Matud, Rodriguez, and Grande (2007), who report better results for men compared to women on the TTCT in lower education groups, but advantages for women on verbal fluency in a group with university-level education. Therefore, the role of gender in DT tests is not yet entirely clear, although recent evidence indicates a slight advantage of females on DT tests (Baer, 2008).

Situational or social influences are another important factor in understanding DT test score differences. Because DT measures can be regarded as a predictor of creative potential, especially if administered in groups, social influences should be taken into account. Firstly, DT often takes place in a specific social context that greatly affects and shapes creative performance and thinking (Csikszentmihalyi, 1999). Secondly, several studies have provided evidence that creative potential is hampered in social environments with a lack of freedom and autonomy or with negative attitudes towards creative behavior (Niu & Sternberg, 2003; Richardson, 1988; Thomas & Berk, 1981; Westby & Dawson, 1995; Witt & Beorkrem, 1989).
Thirdly, socio-economic status often varies not only within but also between classrooms, affecting DT test performance (Dudek et al., 1993). Finally, an effect of different school forms on DT in students has been reported as well (Grampp & Grampp, 1977).

If DT tests are administered to students in different classes, it is highly likely that students in the same classroom have been exposed to homogeneous social and learning environments that differ from those in other classes. This theorizing is in line with ecological or contextual theories (Belsky, 1984; Bronfenbrenner, 1986), which assume that characteristics of the child, the immediate environment (e.g., home or school context), and the broader social context may affect children's development. Because the classroom is one of the most central social settings for children, it can be hypothesized that membership in a specific classroom strongly affects DT processes in students, as well as the relationship of cognitive abilities with DT. In order to model relationships of DT with cognitive abilities both at the individual and classroom level, multilevel analysis is required (Hox, 2002).

As mentioned before, school type is another factor potentially affecting DT test performance. The German school system is based on tracking, i.e. achievement grouping: Depending on scholastic achievement in primary school (usually lasting 4
years), students are assigned to a lower, middle, or higher track, and sometimes to a gifted track. Although students within the same track can be considered more homogeneous with respect to their intellectual ability than when compared to students from other tracks, it has been shown that students use their classroom peers as a frame of reference when constructing academic self-concepts (e.g., in math). Academic self-concepts, however, are positively related to scholastic performance. Perhaps unexpectedly, a recent finding indicates that students in the higher track actually have lower math-related self-concepts than those in the lower track when statistically controlling for math achievement (Trautwein, Lüdtke, Marsh, Köller, & Baumert, 2006). It is therefore important to analyze the effect of school type on DT performance: A strong effect of school type on DT performance would be expected based on ability grouping, whereas only a small effect would be expected if the effect of ability grouping is countervailed by lower DT self-concepts of students in the higher tracks.

To summarize, we investigated the following hypotheses in this study: (a) DT is positively related to intelligence, memory, and processing speed; (b) gender affects DT test performance, with females scoring higher than males; (c) students in higher tracks have significantly better DT skills than students in lower tracks, even when statistically controlling for the cognitive abilities under investigation; and (d) predictor effects vary significantly between classrooms due to social and situational influences of the different classroom environments. All hypotheses were tested using nested multilevel regression models.

2. Method

2.1. Participants

The data used in this study were part of a representative sample for the lower, middle, and higher tracks and also included students from the gifted track of the German education system (Jäger et al., 2005). Students came from different schools in different regions of Germany.
Classroom affiliation was available for 1098 students in 55 classrooms from the original sample, with 9 classrooms from the lower track, 8 classrooms from the middle track, 20 classrooms from the higher track, and 18 classrooms from the gifted track. 516 students were female, 582 were male. Students came from grades 7 to 10, with a mean age of 14.46 years (SD = 1.06). 165 students of the sample attended the lower track of the German education system, 180 attended the middle track, 433 attended the higher track, and 320 attended special schools for the gifted. All students were tested in regular classes by experienced test administrators. Overall testing time was about 200 min.

2.2. Study measures

All tests analyzed in this study were based on the Berlin Model of Intelligence Structure (BIS; Jäger, 1984). The Berlin structure of intelligence test for youth (BIS-HB; Jäger et al., 2005) is based on a hierarchical and two-faceted model of intelligence, with an operation facet comprising processing capacity, processing speed, memory, and DT (cf. Süß & Beauducel, 2005). In addition, a content facet represents verbal, figural, and numerical ability. The latter is often neglected in DT research (Cropley, 2000), but systematically represented in the BIS-HB. General intelligence is presumed to form a higher-order factor. The BIS model offers a very broad operationalization of cognitive functioning and therefore provides a good database for exploring the relationship between DT and cognitive abilities. Further, the model has repeatedly shown excellent psychometric fit (Süß & Beauducel, 2005). Processing capacity in the BIS-HB corresponds to reasoning or fluid intelligence (Wilhelm & Schulze, 2002), and the tests used to measure reasoning capacity are similar to classical intelligence tests used in the literature (for brief task descriptions of BIS tasks, see Bucik & Neubauer, 1996).
Therefore, processing capacity in our study is synonymous with “intelligence” or “reasoning” in other analyses (cf. Carroll, 1993). Overall, 15 different processing capacity tests, 5 from each content domain, were administered and aggregated into a single reasoning score across the content facet. By using this parceling procedure, unwanted content variance could be suppressed, and a broad operationalization of the constructs of interest could be achieved (Wittmann, 1988). Likewise, the 9 processing speed tests, 9 memory tests, and 12 DT tests were each aggregated across content aspects into a single score. All tests were administered with a pre-specified time limit, and DT tests were scored for fluency. Flexibility scores were available for five DT tests, but because the aggregated DT scale scored for fluency correlated very highly with the aggregated DT scale scored for fluency and flexibility (r = .98), flexibility scores were discarded, and only fluency scores were used in all subsequent analyses. This is in line with other studies which found that the use of anything but fluency scores adds only little information (e.g., Hargreaves & Bolton, 1972). Objectivity of scoring for the DT tests was measured by the intraclass correlation coefficient between the ratings of two independent raters, resulting in satisfactory values for all DT tests (M = .94, SD = .04). Internal consistencies across aggregated scores of the four operations were satisfactory (α = .80–.92; Jäger et al., 2005).

2.3. Data analyses

All variables were standardized prior to analysis (M = 0, SD = 1). In order to test whether multilevel analyses were justified, we calculated the intraclass correlation coefficients ICC(1) of all aggregated scores (Bliese, 2000). The ICC(1) may be interpreted as the proportion of the total residual variation that is due to differences between classrooms.
Hence, ICC(1) can be interpreted as a measure of homogeneity: Larger intraclass correlations result when classrooms are homogeneous with respect to the variables under investigation. If ICC(1) differs significantly from zero, multilevel analyses need to be conducted.
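As an illustration, the ANOVA-based ICC(1) estimator described by Bliese (2000) can be sketched in a few lines. The data below are simulated, not the study data, and the variable names are hypothetical:

```python
import numpy as np

def icc1(y, groups):
    """ICC(1) via the one-way ANOVA estimator (Bliese, 2000):
    (MSB - MSW) / (MSB + (k - 1) * MSW), with k = mean group size."""
    labels, counts = np.unique(groups, return_counts=True)
    k = counts.mean()
    grand = y.mean()
    # Between-group and pooled within-group mean squares
    msb = sum(n * (y[groups == g].mean() - grand) ** 2
              for g, n in zip(labels, counts)) / (len(labels) - 1)
    msw = sum(((y[groups == g] - y[groups == g].mean()) ** 2).sum()
              for g in labels) / (len(y) - len(labels))
    return (msb - msw) / (msb + (k - 1) * msw)

# Simulated clustered scores: 55 classrooms of 20 students,
# true level-2 variance .25 and level-1 variance .75 (true ICC = .25)
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(55), 20)
y = rng.normal(0, np.sqrt(0.25), 55)[groups] + rng.normal(0, np.sqrt(0.75), 55 * 20)
print(round(icc1(y, groups), 2))  # an estimate near the true ICC of .25
```

In practice the estimate's significance would also be tested (e.g., via the one-way ANOVA F test) before deciding on a multilevel model.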
Table 1
Correlations and partial correlations among aggregated test scores.

        DT      PC      M       PS
DT      –       .15**   .06     .39**
PC      .51**   –       .35**   .36**
M       .45**   .62**   –       .29**
PS      .61**   .66**   .61**   –

Note: Values below the diagonal reflect zero-order correlations; values above the diagonal reflect partial correlations, controlling for the other two variables. DT = divergent thinking, PC = processing capacity, M = memory, and PS = processing speed.
** p < .01.
In multilevel analyses, one further distinguishes between level-1 and level-2 predictors. In the case of educational research, students are often nested in classrooms, i.e. students can be considered level-1 units, whereas classrooms can be considered level-2 units. In this context, level-1 predictors vary over students (e.g., intelligence, motivation), whereas level-2 predictors vary over classrooms only (e.g., teacher gender).

Next, we calculated multilevel regression models in order to predict DT (Hox, 2002). We first computed a random-intercept model including only the level-1 predictors processing capacity, processing speed, memory, and gender. As an extension of ordinary least-squares regression, the random-intercept model can capture mean differences between classrooms by additionally estimating the variance of classroom-specific deviations from the overall DT mean (Hox, 2002). If this variance differs significantly from zero, it can be concluded that classrooms differ significantly in their mean DT ability. As a next step, we additionally included school type as a dummy-coded level-2 predictor. Including school type should significantly reduce the random-intercept variance. Finally, we investigated whether the slopes of the level-1 predictors varied between classrooms, and hence fitted a random-coefficients model. Because we were primarily interested in level-1 predictors, we centered all level-1 variables at their respective group mean (Enders & Tofighi, 2007). We used maximum likelihood for parameter estimation, because the number of fixed parameters varied across regression models and we intended to conduct likelihood-ratio tests to compare the different multilevel regression models (Ruppert, Wand, & Carroll, 2003).

3. Results

We first investigated the zero-order and partial correlations among DT and cognitive abilities at the individual level (N = 1098, cf. Table 1).
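The partial correlations above the diagonal of Table 1 can be reproduced from the zero-order correlations below the diagonal via the inverse (precision) of the correlation matrix; a short NumPy sketch:

```python
import numpy as np

# Zero-order correlations among DT, PC, M, PS (below the diagonal of Table 1)
R = np.array([[1.00, 0.51, 0.45, 0.61],   # DT
              [0.51, 1.00, 0.62, 0.66],   # PC
              [0.45, 0.62, 1.00, 0.61],   # M
              [0.61, 0.66, 0.61, 1.00]])  # PS

# Each pairwise partial correlation, controlling for the remaining two
# variables, is -P_ij / sqrt(P_ii * P_jj), with P the precision matrix.
P = np.linalg.inv(R)
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)
np.fill_diagonal(partial, 1.0)
print(np.round(partial[0], 2))  # DT row: partials with PC, M, PS
```

The DT row recovers values close to the tabled partials (approximately .15, .06, and .39).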
Correlations between all scores were high, though the most substantial partial correlation was found between processing speed and DT. When statistically controlling for speed and processing capacity, the correlation between memory and DT failed to reach significance.

Next, we explored whether the ICC(1) for all aggregated test scores differed significantly from zero. This was indeed the case (see Table 2). ICC(1) was especially large for processing capacity, indicating that group membership exerted a strong influence on individual scores. This result is also supported by the differences in DT residuals between classrooms (Fig. 1); the residuals can be interpreted as the estimated classroom means in DT. As can be seen, the estimated DT mean differs significantly from zero in several classrooms, but not in others. Multilevel analyses were therefore necessary in order to avoid biased parameter and standard error estimates.

Subsequently, we fitted a random-intercept model using only level-1 predictors (Model 1). As can be seen from Table 3, all variables except for memory on average contributed significantly to the prediction of DT. However, we included memory as a predictor in subsequent models to maximize comparability. The largest effect was found for processing speed, whereas the effect of processing capacity was substantial albeit smaller. Girls achieved significantly higher DT scores than boys.

In order to model the effect of level-2 predictors, we added school type as a predictor (Model 2 in Table 3). The results show that students in schools from higher tracks achieved significantly better DT scores. By taking school type into account, the random-intercept variance was significantly reduced, as shown by the significant likelihood-ratio test comparing Model 1 and Model 2. That is, school type explained differences in DT means across classrooms.
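The Model 1 versus Model 2 comparison can be sketched with statsmodels. The data below are a simulated stand-in with hypothetical effect sizes (the study data are not public), and `dt`, `pc`, `mem`, `ps`, `gender`, `track`, and `classroom` are assumed column names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Simulated stand-in for the clustered data: 55 classrooms, 20 students each
rng = np.random.default_rng(7)
n_cls, n_per = 55, 20
classroom = np.repeat(np.arange(n_cls), n_per)
track = np.repeat(rng.integers(0, 4, n_cls), n_per)   # 4 school tracks
u = rng.normal(0, 0.5, n_cls)[classroom]              # classroom intercepts
df = pd.DataFrame({
    "classroom": classroom, "track": track,
    "pc": rng.normal(size=n_cls * n_per),
    "ps": rng.normal(size=n_cls * n_per),
    "mem": rng.normal(size=n_cls * n_per),
    "gender": rng.integers(0, 2, n_cls * n_per),      # assumed 0/1 coding
})
df["dt"] = (0.2 * df.pc + 0.4 * df.ps - 0.2 * df.gender
            + 0.4 * df.track + u + rng.normal(0, 0.8, len(df)))

# Group-mean centering of the level-1 predictors (Enders & Tofighi, 2007)
for v in ["pc", "ps", "mem"]:
    df[v] -= df.groupby("classroom")[v].transform("mean")

# Model 1: random intercept, level-1 predictors only. ML (reml=False) is
# required for likelihood-ratio tests across models with different fixed parts.
m1 = smf.mixedlm("dt ~ pc + mem + ps + gender", df,
                 groups=df["classroom"]).fit(reml=False)
# Model 2: school type added as a dummy-coded level-2 predictor
m2 = smf.mixedlm("dt ~ pc + mem + ps + gender + C(track)", df,
                 groups=df["classroom"]).fit(reml=False)

lr = 2 * (m2.llf - m1.llf)          # likelihood-ratio statistic
p = stats.chi2.sf(lr, df=3)         # 3 added track dummies
eb = m1.random_effects              # empirical Bayes classroom effects (cf. Fig. 1)
print(f"LR = {lr:.2f}, p = {p:.4f}")
```

A random-coefficients model (Model 3) could be fitted analogously by passing `re_formula="~pc + mem + ps + gender"` to `mixedlm`; these models are a sketch, not the authors' exact specification.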
Table 2
Variances and intraclass correlation coefficients for all aggregated test scores.

Parameter   DT      PC      M       PS
τ00         .24**   .59**   .30**   .39**
σ²          .75**   .43**   .68**   .61**
ICC(1)      .24**   .58**   .31**   .39**

Note: τ00 = level-2 variance, σ² = level-1 variance, ICC(1) = intraclass correlation coefficient, DT = divergent thinking, PC = processing capacity, M = memory, and PS = processing speed.
** p < .01.

Fig. 1. Empirical Bayes predictions (with 95% confidence intervals) of standardized DT classroom means for all 55 classrooms.

Table 3
Parameter estimates and model comparison statistics for Model 1, Model 2 and Model 3.

Parameter        Model 1    Model 2    Model 3
Fixed
  Intercept      .01        .01        .01
  PC             .19**      .19**      .19**
  M              .04        .04        .04
  PS             .43**      .43**      .43**
  Gender         −.19**     −.19**     −.19**
  Middle track              .40**      .39**
  Higher track              .94**      .96**
  Gifted track              1.27**     1.28**
Random
  Intercept      .25**      .04**      .04**
  PC                                   .00
  M                                    .02*
  PS                                   .00
  Gender                               .05*
df                          3          14
χ²                          74.26**    10.66

Note: Random parameters are variances of random effects. PC = processing capacity, M = memory, PS = processing speed, df = difference in degrees of freedom, and χ² = likelihood-ratio statistic. Girls attending the lower track served as reference category.
* p < .05. ** p < .01.

Finally, we checked whether the level-1 predictors had random slopes that varied across classrooms (Model 3 in Table 3). We found that gender differences were not homogeneous across classrooms, as indicated by the significant slope variance. Further, we found that the slope of memory varied significantly. Although the average effect of memory did not reach significance,
this indicated that it was more important for DT performance in some classrooms than in others. The slopes of processing speed and processing capacity were similar across classrooms, implying that the explanatory power of these predictors was rather constant. Therefore, Model 3 did not fit significantly better than Model 2, as indicated by the likelihood-ratio test.

4. Discussion

As mentioned by Mumford (2003), more studies illuminating the relationship between cognitive abilities and DT, based on normative samples, are necessary. The present study therefore pursued the goal of investigating cognitive as well as social predictors of DT. This was done in three steps: computing correlations and partial correlations of DT with reasoning, processing speed and memory; analyzing group-specific variance in these cognitive abilities; and comparing nested multilevel regression models predicting DT from these abilities, in combination with additional predictors of interest like gender and school type.

Results from the correlation analyses showed that processing speed, of all cognitive abilities taken into consideration, had the highest partial correlation with DT (r = .39). This is not surprising, as Wallach and Kogan (1965) noted that task speededness strongly affects DT scores. Because all tasks were speeded in this study, a high correlation between processing speed and DT scores could have been expected (Wilhelm & Schulze, 2002). The partial correlation of processing capacity
and DT was low albeit significant (r = .15), comparable to the results reported by Kim (2005). However, Kim (2005) used zero-order correlation coefficients in that meta-analysis. In comparison, the zero-order correlation between reasoning and DT was rather high in our study (r = .51). These results lend support to the idea that DT and intelligence share a significant portion of variance, although processing speed and DT are even more strongly correlated. One of the reasons for this finding might be that all tests utilized here had relatively strict time limits, thus favouring persons with a higher working speed.

Large intraclass correlations, especially with respect to processing capacity, revealed that classroom membership strongly affected all cognitive abilities analyzed, and that classrooms were relatively homogeneous with respect to the variables under investigation. A possible reason for this result can be seen in the German school system, which produces homogeneous classes through an early selection procedure based on scholastic achievement. This hypothesis was supported by the results of Model 2, which showed that school type significantly predicted random variance in DT means. However, the intraclass correlation was lowest for DT, suggesting that DT was more heterogeneous within classrooms than the other cognitive abilities investigated. A well-known finding in the literature has been that personality traits associated with creativity are often disliked by teachers (Westby & Dawson, 1995), but even teachers who reward creative behavior can inadvertently lower students' motivation to repeat it (Amabile, Hennessey, & Grossman, 1986). Another reason might be that DT is less g-loaded than other cognitive abilities like intelligence (Carroll, 1993).
Because ability grouping is based on prior scholastic performance, which is highly correlated with intelligence (Deary, Strand, Smith, & Fernandes, 2007), somewhat more heterogeneity in DT is preserved within school types. Nevertheless, more research is needed to clarify why DT is less affected by classroom membership than other cognitive abilities.

Results from the multilevel analysis revealed that all cognitive abilities except for memory were statistically significant predictors of DT. Good memory abilities, therefore, do not appear to be of central importance for creative performance. In addition, significant gender differences in DT favouring girls were found. However, this effect was not homogeneous across classrooms, i.e. it was larger in some classrooms than in others. This finding might explain the inconsistency in the literature concerning gender differences in DT: gender differences might largely depend on the social environment, and vary accordingly. Another important factor in determining the effect of gender on DT is the degree of activation of gender stereotypes. Women, for example, activate a more negative self-stereotype than men with respect to the sciences (Guimond & Roussel, 2001). In this context, it is possible that more positive self-stereotypes of female students with respect to creativity positively affected test performance. This hypothesis is consistent with the finding that women benefit much less strongly from creativity training than men, possibly indicating that they already work at a high level on DT problems (Scott, Leritz, & Mumford, 2004). These questions were not explicitly addressed in this study. However, factors such as gender self-stereotype or gender ratio in the classroom (Preckel, Zeidner, Goetz, & Schleyer, 2008) are related to task-specific self-concepts and hence, task performance. Therefore, an explicit modelling of such factors, although not done here, should be conducted in future research.
School type strongly affected DT test performance, even when controlling for abilities like intelligence, speed, or memory. That is, the creative potential of students was significantly affected by school-related factors. A recent finding (Malmberg, Wanner, & Little, 2008) suggests that students in the highest track of the German education system show less educational goal disengagement and put more effort into their work than students in the middle or lower track, which might have negatively affected DT test performance in the lower and middle tracks. Further, the classroom climate or teacher characteristics, such as teachers' conceptions of creativity (Kampylis, Berki, & Saariluoma, 2009), might have affected DT test scores. For example, it has been shown that an open climate is central to creative performance (Hunter, Bedell, & Mumford, 2007). Because no additional variables beyond school type were available as level-2 predictors in the data set analyzed, these hypotheses require further investigation. The finding that school type strongly affected DT beyond cognitive abilities shows that DT is determined by social factors to a meaningful degree. This has implications for educational research: when evaluating the creative potential of students, social and school-specific factors must be explicitly taken into account.

There are a number of limitations to this analysis of DT as related to intelligence and social factors. Firstly, all variables were measured simultaneously; causal conclusions are therefore limited. Secondly, no level-2 predictors except for school type were available. Longitudinal studies using school- and teacher-related variables in addition to non-cognitive individual attributes (e.g., personality factors; Batey et al., 2009) would allow clearer causal conclusions as to the effect of social factors on DT above and beyond intellectual abilities and therefore are a promising avenue for future research.
Thirdly, all tests were administered with a relatively strict time limit which might have inflated the effect of processing speed on DT. Testing conditions with more generous time limits offer potentially different insights into the relationship of variables investigated here. Finally, despite exhibiting sufficient reliability, the measures used in this study were manifest indicators, i.e. we did not use latent variable modelling like multilevel structural equation modelling (e.g., Mehta & Neale, 2005) here, because classroom sizes were too small. Future studies, preferably longitudinal in nature, should address these limitations.
References

Amabile, T. M., Hennessey, B. A., & Grossman, B. S. (1986). Social influences on creativity: The effects of contracted-for reward. Journal of Personality and Social Psychology, 50, 14–23.
Baer, J. (2008). Evidence of gender differences in creativity. Journal of Creative Behavior, 42, 78–105.
Barron, F., & Harrington, D. M. (1981). Creativity, intelligence, and personality. Annual Review of Psychology, 32, 439–476.
Batey, M., Chamorro-Premuzic, T., & Furnham, A. (2009). Intelligence and personality as predictors of divergent thinking: The role of general, fluid and crystallised intelligence. Thinking Skills and Creativity, 4, 60–69.
Belsky, J. (1984). The determinants of parenting: A process model. Child Development, 55, 83–96.
Bliese, P. D. (2000). Within-group agreement, non-independence and reliability: Implications for data aggregation and analysis. In K. J. Klein, & S. W. Kozlowski (Eds.), Multilevel theory, research, and methods in organizations (pp. 349–381). San Francisco, CA: Jossey-Bass.
Bronfenbrenner, U. (1986). Ecology of the family as a context for human development. Developmental Psychology, 22, 521–530.
Bucik, V., & Neubauer, A. C. (1996). Bimodality in the Berlin model of intelligence structure (BIS): A replication study. Personality and Individual Differences, 21, 987–1005.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge: Cambridge University Press.
Cropley, A. J. (2000). Defining and measuring creativity: Are creativity tests worth using? Roeper Review, 23, 72–79.
Csikszentmihalyi, M. (1999). Implications of a systems perspective for the study of creativity. In R. J. Sternberg (Ed.), Handbook of creativity (pp. 313–335). Cambridge: Cambridge University Press.
Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35, 13–21.
Dudek, S. Z., Strobel, M. G., & Runco, M. A. (1993). Cumulative and proximal influences on the social environment and children’s creative potential. The Journal of Genetic Psychology, 154, 487–499.
Enders, C. K., & Tofighi, D. (2007). Centering predictor variables in cross-sectional multilevel models: A new look at an old issue. Psychological Methods, 12, 121–138.
Eysenck, H. (2003). Creativity, personality and the convergent–divergent continuum. In M. A. Runco (Ed.), Critical creative processes (pp. 95–114). Cresskill, NJ: Hampton Press.
Fuqua, R. W., Bartsch, T. W., & Phye, G. D. (1975). An investigation of the relationship between cognitive tempo and creativity in preschool-age children. Child Development, 46, 779–782.
Grampp, G., & Grampp, H. (1977). Divergentes Denken bei Schülern verschiedener Schularten [Divergent thinking in students of different school forms]. Psychologie in Erziehung und Unterricht, 24, 319–325.
Guilford, J. P. (1950). Creativity. American Psychologist, 5, 444–454.
Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.
Guimond, S., & Roussel, L. (2001). Bragging about one’s school grades: Gender stereotyping and students’ perception of their abilities in science, mathematics, and language. Social Psychology of Education, 4, 275–293.
Haensly, P. A., & Reynolds, C. R. (1989). Creativity and intelligence. In J. A. Glover, R. R. Ronning, & C. R. Reynolds (Eds.), Handbook of creativity (pp. 111–132). New York: Plenum.
Hargreaves, D. J., & Bolton, H. (1972). Selecting creativity tests for use in research. British Journal of Psychology, 63, 451–462.
Hox, J. (2002). Multilevel analysis: Techniques and applications. Mahwah, NJ: Erlbaum.
Hunter, S. T., Bedell, K. E., & Mumford, M. D. (2007). Climate for creativity: A quantitative review. Creativity Research Journal, 19, 69–90.
Jäger, A. O. (1984). Intelligenzstrukturforschung: Konkurrierende Modelle, neue Entwicklungen, Perspektiven [Intelligence structure research: Competing models, new developments, perspectives]. Psychologische Rundschau, 35, 21–35.
Jäger, A. O., Holling, H., Preckel, F., Schulze, R., Vock, M., Süß, H.-M., et al. (2005). Berliner Intelligenzstruktur-Test für Jugendliche: Begabungs- und Hochbegabungsdiagnostik (BIS-HB) [Berlin Intelligence Structure test for gifted children and youth]. Göttingen, Germany: Hogrefe.
Kampylis, P., Berki, E., & Saariluoma, P. (2009). In-service and prospective teachers’ conceptions of creativity. Thinking Skills and Creativity, 4, 15–29.
Kim, J., & Michael, W. B. (1995). The relationship of creativity measures to school achievement and to preferred learning and thinking style in a sample of Korean high school students. Educational and Psychological Measurement, 55, 60–74.
Kim, K. H. (2005). Can only intelligent people be creative? A meta-analysis. The Journal of Secondary Gifted Education, 16, 57–66.
Kim, K. H. (2008). Meta-analyses of the relationship of creative achievement to both IQ and divergent thinking test scores. Journal of Creative Behavior, 42, 106–130.
Malmberg, L.-E., Wanner, B., & Little, T. D. (2008). Age and school-type differences in children’s beliefs about school performance. International Journal of Behavioral Development, 32, 531–541.
Matud, M. P., Rodriguez, C., & Grande, J. (2007). Gender differences in divergent thinking. Personality and Individual Differences, 43, 1137–1147.
Mehta, P. D., & Neale, M. C. (2005). People are variables too: Multilevel structural equations modeling. Psychological Methods, 10, 259–284.
Mumford, M. D. (2003). Where have we been, where are we going? Taking stock in creativity research. Creativity Research Journal, 15, 107–120.
Mumford, M. D., & Gustafson, S. B. (1988). Creativity syndrome: Integration, application, and innovation. Psychological Bulletin, 103, 27–43.
Niu, W., & Sternberg, R. J. (2003). Societal and school influences on student creativity: The case of China. Psychology in the Schools, 40, 103–114.
Plucker, J. A. (1999). Is the proof in the pudding? Reanalysis of Torrance’s (1958 to present) longitudinal data. Creativity Research Journal, 12, 103–114.
Preckel, F., Holling, H., & Wiese, M. (2006). Relationship of intelligence and creativity in gifted and non-gifted students: An investigation of threshold theory. Personality and Individual Differences, 40, 159–170.
Preckel, F., Zeidner, M., Goetz, T., & Schleyer, E. J. (2008). Female ‘big fish’ swimming against the tide: The ‘big-fish–little-pond effect’ and gender-ratio in special gifted classes. Contemporary Educational Psychology, 33, 78–96.
Richardson, A. G. (1988). Classroom learning environment and creativity: Some Caribbean findings. Psychological Reports, 62, 939–942.
Rindermann, H., & Neubauer, A. C. (2004). Processing speed, intelligence, creativity, and school performance: Testing of causal hypotheses using structural equation models. Intelligence, 32, 573–589.
Runco, M. A. (1986). Predicting children’s creative performance. Psychological Reports, 59, 1247–1254.
Runco, M. A. (1991). Divergent thinking. Norwood, NJ: Ablex.
Runco, M. A. (2007). Creativity. Burlington, MA: Elsevier.
Runco, M. A. (2008). Commentary: Divergent thinking is not synonymous with creativity. Psychology of Aesthetics, Creativity, and the Arts, 2, 93–96.
Runco, M. A., Dow, G., & Smith, W. R. (2006). Information, experience, and divergent thinking: An empirical test. Creativity Research Journal, 18, 269–277.
Ruppert, D., Wand, M. P., & Carroll, R. J. (2003). Semiparametric regression. Cambridge: Cambridge University Press.
Scott, G., Leritz, L. E., & Mumford, M. D. (2004). The effectiveness of creativity training: A quantitative review. Creativity Research Journal, 16, 361–388.
Silvia, P. J. (2008). Another look at creativity and intelligence: Exploring higher-order models and probable confounds. Personality and Individual Differences, 44, 1012–1021.
Silvia, P. J., Winterstein, B. P., Willse, J. T., Barona, C. M., Cram, J. T., Hess, K. I., et al. (2008). Assessing creativity with divergent thinking tasks: Exploring the reliability and validity of new subjective scoring methods. Psychology of Aesthetics, Creativity, and the Arts, 2, 68–85.
Sligh, A. C., Conners, F. A., & Roskos-Ewoldsen, B. (2005). Relation of creativity to fluid and crystallized intelligence. Journal of Creative Behavior, 39, 123–136.
Stein, B. S. (1989). Memory and creativity. In J. A. Glover, R. R. Ronning, & C. R. Reynolds (Eds.), Handbook of creativity (pp. 163–176). New York: Plenum.
Sternberg, R. J., & O’Hara, L. A. (1999). Creativity and intelligence. In R. J. Sternberg (Ed.), Handbook of creativity (pp. 251–272). Cambridge: Cambridge University Press.
Süß, H.-M., & Beauducel, A. (2005). Faceted models of intelligence. In O. Wilhelm, & R. Engle (Eds.), Handbook of understanding and measuring intelligence (pp. 313–332). London: Sage.
Thomas, N. G., & Berk, L. E. (1981). Effects of school environments on the development of young children’s creativity. Child Development, 52, 1153–1162.
Torrance, E. P. (1966). Torrance Tests of Creative Thinking (research ed.). Princeton, NJ: Personnel Press.
Torrance, E. P. (1990). The Torrance Tests of Creative Thinking norms—Technical manual figural (streamlined) forms A & B. Bensenville, IL: Scholastic Testing Service.
Trautwein, U., Lüdtke, O., Marsh, H. W., Köller, O., & Baumert, J. (2006). Tracking, grading, and student motivation: Using group composition and status to predict self-concept and interest in ninth-grade mathematics. Journal of Educational Psychology, 98, 788–806.
Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352–373.
Vartanian, O., Martindale, C., & Kwiatkowski, J. (2007). Creative potential, attention, and speed of information processing. Personality and Individual Differences, 43, 1470–1480.
Vincent, A. S., Decker, B. P., & Mumford, M. D. (2002). Divergent thinking, intelligence, and expertise: A test of alternative models. Creativity Research Journal, 14, 163–178.
Wallach, M. A., & Kogan, N. (1965). Modes of thinking in young children: A study of the creativity-intelligence distinction. New York: Holt, Rinehart & Winston.
Westby, E. L., & Dawson, V. L. (1995). Creativity: Asset or burden in the classroom? Creativity Research Journal, 8, 1–10.
Wilhelm, O., & Schulze, R. (2002). The relation of speeded and unspeeded reasoning with mental speed. Intelligence, 30, 537–554.
Witt, L. A., & Beorkrem, M. N. (1989). Climate for creative productivity as a predictor of research usefulness and organizational effectiveness in an R&D organization. Creativity Research Journal, 2, 30–40.
Wittmann, W. W. (1988). Multivariate reliability theory: Principles of symmetry and successful validation strategies. In J. R. Nesselroade, & R. B. Cattell (Eds.), Handbook of multivariate experimental psychology (2nd ed., pp. 505–560). New York: Plenum.