Re-examining prominent measures of divergent and convergent creativity

Robert A Cortes, Adam B Weinberger, Richard J Daker and Adam E Green

Much of creativity research has focused on the constructs of divergent and convergent thinking. In this review, we address key gaps in extant empirical understanding of these constructs and offer suggestions for future research to parse their respective contributions to creative cognition. Furthermore, we consider the construct validity of the psychometric tasks most commonly used to measure these types of thinking: the Alternative Uses Task and the Remote Associates Test. We underscore that, although these tasks frequently are used to assay these constructs separately, they actually involve mixtures of the two constructs together. We conclude that additional measurement development and factor analytic research is necessary to delineate the separability and interdependence of divergent and convergent thinking as components of creativity.

Address
Department of Psychology, Georgetown University, Washington, DC 20057, USA

Corresponding author: Cortes, Robert A ([email protected])
Current Opinion in Behavioral Sciences 2019, 27:90–93
This review comes from a themed issue on Creativity
Edited by Rex Jung and Hikaru Takeuchi
https://doi.org/10.1016/j.cobeha.2018.09.017
2352-1546/© 2018 Elsevier Ltd. All rights reserved.
Introduction
Creative cognition draws upon a diverse set of neurocognitive processes, but most research has focused on two main types of thinking: divergent thinking, referring to expansive generation of novel ideas, and convergent thinking, which requires homing in on a single response from a number of possible alternatives [1,2]. Although there are frequent presumptions about the ways in which these constructs are distinct and entwined [2], there has been little empirical evidence upon which to base such characterizations. Here, we address key gaps in extant empirical understanding and offer suggestions for future research to help parse apart the constructs of divergent and convergent thinking.
Furthermore, we consider the construct validity of the psychometric tasks most commonly used to measure these types of thinking: The Alternative Uses Task and the Remote Associates Test.
Constructs
Creativity is among the most advanced things minds do and among the most challenging to capture experimentally [3,4]. But the value of creativity makes this challenge worth meeting, and it is the task of creativity researchers to ensure that the constructs we study are as precise and useful as possible. Two prominent measures applied to the study of creativity are the Alternative Uses Task (AUT), which aims to measure divergent thinking by asking participants to think of novel uses for everyday objects [1], and the Remote Associates Test (RAT), which seeks to measure convergent thinking by providing participants with three unrelated words (e.g. pine, sauce, crab) and asking them to think of a fourth word (in this case, apple) that forms a compound association with each of the given words [5]. Here, we will argue that, although these are the most widely used assessments of divergent and convergent creativity respectively, neither measure is purely divergent or convergent (see Tasks).

Divergent and convergent thinking are often thought to be somewhat opposing types of creative cognition [2]. Typically, when researchers find a variable that correlates with the AUT but not the RAT, they claim to have found a unique predictor of one type of creativity (i.e. divergent thinking) as well as evidence of a dissociation between the two different types of creative thought [6–10]. But divergent and convergent thinking are often interwoven during creative cognition; according to dual process models of creativity, an individual's creative performance stems from the combination of these two processes [11–16,17,18]. In particular, divergence is often thought to broaden the representational search space in which convergence operates to identify the best ideas for the task at hand. Thus, it can be difficult to distinguish these two constructs based on task performance measures.

Before proceeding further, it is useful to consider another construct that has been reasonably well defined and separated into subconstructs: intelligence.
Cognitive psychologists have effectively parsed various components of intelligence, a construct that, like creativity, is difficult to define, into discrete G factors (e.g. Gf, Gc, Gv) that correlate with one another but are still separable and predictive of different outcomes [19–22]. Similar to divergence and convergence, these G factors are often thought of as separate components that work together in combination to produce intelligent cognition [23]. Critically, intelligence researchers have thoroughly validated multiple tasks that measure each of these subcomponents, allowing them to extract a latent variable (g) from scores on those tasks that is far more reliable and robust than any individual measure alone [24,25]. Unfortunately, the factor analytic work necessary to make claims about divergence and convergence as separable subcomponents of creativity simply has not yet been done, and only a few studies have directly compared performance on convergent and divergent tasks. Indeed, there do not appear to be enough properly validated measures of convergent thinking to support a factor analysis.

Even if divergent and convergent thinking are somewhat disparate subcomponents of creativity, we would still expect them to be at least moderately correlated (i.e. similar to the G factors of intelligence). This has not been demonstrated. Indeed, performance on the AUT and RAT appears to be unrelated [6,26–28]. This calls into question whether these two tasks reflect separate components of the same higher-order construct (creativity), regardless of whether a third variable correlates with one task but not the other. Despite this crucial theoretical gap, researchers (ourselves included) have used the AUT and RAT with the assumption that they measure different subcomponents of creative cognition. In order to properly utilize these tasks, it is essential to do the sort of fundamental latent variable modeling of creativity that has served as the foundation of modern intelligence research.
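To make the latent-variable analogy concrete, the kind of analysis we are calling for might be sketched as follows in Python. This is a minimal illustration only: the task names, sample size, and data are hypothetical placeholders (the data are simulated to have a two-factor structure by construction), not real findings. The question such an analysis would answer with real data is whether putatively divergent and putatively convergent tasks load on separable factors.

```python
# A minimal sketch of exploratory factor analysis over a hypothetical
# creativity task battery. All task labels and data are placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300  # hypothetical sample size

# Simulate three putatively divergent and three putatively convergent
# task scores; here they share latent structure by construction.
div_latent = rng.normal(size=n)
con_latent = rng.normal(size=n)
battery = np.column_stack([
    div_latent + rng.normal(scale=0.7, size=n),  # e.g. AUT originality
    div_latent + rng.normal(scale=0.7, size=n),  # e.g. TTCT verbal score
    div_latent + rng.normal(scale=0.7, size=n),  # e.g. consequences task
    con_latent + rng.normal(scale=0.7, size=n),  # e.g. RAT/CRA accuracy
    con_latent + rng.normal(scale=0.7, size=n),  # e.g. insight problems
    con_latent + rng.normal(scale=0.7, size=n),  # e.g. anagram solving
])

# Fit a two-factor model; if divergence and convergence are separable
# subcomponents, tasks should load cleanly on their intended factor.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(battery)
print(np.round(fa.components_.T, 2))  # rows = tasks, columns = factors
```

With real behavioral data, weak or crossed loadings would argue against treating divergence and convergence as separable subcomponents; clean simple structure, with moderately correlated factors, would support the intelligence-style account.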
Tasks
Alternative uses task
The AUT was developed in 1967 by J. P. Guilford [1] and is generally described and used as a measure of pure divergence [1,29,30]. However, some amount of convergent thinking is necessary for adequate performance on the AUT. This is reflected in the manner in which the AUT is presented, as participants are provided with ample time (typically 3 min) to evaluate their ideas. They are also directed to consider whether potential uses meet the criteria described in their instructions (i.e. alternative, unusual, creative, appropriate). Thus, as participants generate responses, they must critically evaluate each idea and determine whether or not it is sufficiently unusual and, furthermore, whether it is a valid use of the object. For example, if asked to think of an alternative use for a 'brick', someone might have the fleeting thought, 'geranium'. This answer, while unusual, is not plausibly valid, so this thought would most likely be discarded in favor of something else. This is a convergent process.
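The mixture of generation and evaluation is also visible in how the AUT is typically scored. The sketch below illustrates one common approach, scoring fluency as a count of responses and originality by statistical infrequency across the sample; the responses are hypothetical, and in practice validity judgments come from trained raters and scoring schemes vary across labs.

```python
# A minimal sketch of frequency-based AUT scoring. Responses are
# hypothetical; real validity screening is done by human raters.
from collections import Counter

# Hypothetical responses to "alternative uses for a brick", one list per person.
responses = [
    ["paperweight", "doorstop", "bookend"],
    ["paperweight", "weapon", "nutcracker"],
    ["doorstop", "sculpture material", "nutcracker"],
]

# Pool all responses to estimate how common each idea is in this sample.
pooled = Counter(use for person in responses for use in person)
total_people = len(responses)

def score(person_responses):
    fluency = len(person_responses)  # number of (rater-validated) uses
    # Originality via statistical infrequency: ideas offered by a smaller
    # share of the sample earn higher scores.
    originality = sum(1 - pooled[use] / total_people for use in person_responses)
    return fluency, originality

for i, person in enumerate(responses):
    fluency, originality = score(person)
    print(f"participant {i}: fluency={fluency}, originality={originality:.2f}")
```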
Remote associates test
The RAT, developed in 1962 by Sarnoff Mednick [5], is likely the most widely used measure of convergence [29]. This task is scored for accuracy and response time, and occasionally for the number of problems solved via insight as opposed to analysis [31]. However, just as the AUT is not a task of pure divergence, the RAT certainly cannot be said to solely measure convergent thinking. Participants do not select the correct response from a list of options provided to them. Rather, they must, on their own, generate a fourth word that fits with the three stimulus words. Thus, solving the RAT involves first generating many possible ideas (e.g. 'house' could fit two of the three words: dream, break, light) and then selecting the best option. Although this process may happen somewhat more automatically (or at least outside of conscious awareness) when participants report solutions occurring by insight, there is nonetheless some form of search through a representational space in which multiple options must be considered.
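This generate-then-select structure can be made explicit as a search procedure. The sketch below is a toy illustration only: the vocabulary and compound lexicon are tiny hand-made stand-ins, not a model of human solving. Candidates are generated broadly (a divergent step), then filtered against all three cues (a convergent step).

```python
# A toy sketch of the generate-and-test search implied by RAT solving.
# The vocabulary and compound lexicon are small illustrative stand-ins.
COMPOUNDS = {"pineapple", "applesauce", "crabapple", "lighthouse", "dreamhouse"}
VOCAB = ["house", "apple", "tree", "cake"]

def forms_compound(candidate, cue):
    # A candidate can attach on either side of the cue word.
    return cue + candidate in COMPOUNDS or candidate + cue in COMPOUNDS

def solve_rat(cues):
    # Generate candidates broadly, then converge on those fitting all cues.
    return [word for word in VOCAB if all(forms_compound(word, cue) for cue in cues)]

print(solve_rat(["pine", "sauce", "crab"]))  # -> ['apple']
```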
Furthermore, it bears examining why the RAT tends not to be related to measures of divergent thinking. When it was first validated, the RAT was shown to be related to originality of personality and the subjective ratings of an unreported number of professors asked to evaluate their students' creativity [32,33]. However, more recent evidence suggests that performance on the RAT displays little to no relationship with various measures of divergent thinking (including the AUT and the Torrance Test of Creative Thinking; TTCT) but is highly related to measures of intelligence [6,26–28]. One explanation for this could be that measures of divergent thinking such as the AUT and the TTCT require participants to produce responses that are semantically related to the prompts (e.g. 'Just suppose you could walk on air or fly without being in an airplane or similar vehicle. What problems might this create?'; [34,35]), while the RAT does not consistently involve semantic processing. As noted by Worthen and Clark [36], the RAT does not take into account the semantic remoteness between the stimulus words and the solution word, or indeed whether they are semantically related at all. When those authors examined 20 RAT items, they found that 38 of the 60 associations within those items were functionally (i.e. semantically) unrelated, and that 48 of them were highly structurally related, such that the solution words were associated with stimulus words only because they were often used in the same phrase or compound word [36]. For example, in the above-mentioned RAT trial, there is very little in the way of semantic relatedness/meaning that ties the prompt words, crab, sauce, and pine, to the solution word, apple. As well as can be determined from the Oxford English Dictionary, the only thing associating apple with crab may have been an obscure accident of word derivation somewhere in the boggy-broguey history of Scotch dialects by which a phonetic bit, perhaps never referring to crustaceans, landed as the sound 'crab' next to 'apple' in modern English [37]. But an apple doesn't have anything to do with a crab — there is no legitimate semantic relation — or with a pine, for that matter. In fact, many RAT problems are not merely askew to semantic relatedness; they actually require inhibiting semantically related associates in order to find the solution word. For instance, in response to the prompt words 'cover, arm, and wear,' one might begin thinking of semantic relations (e.g. based on the actions or appearance of arms), yet to solve this problem one must inhibit those semantically related associates and choose 'under' instead. The need to inhibit semantic representations in the RAT is supported by evidence that the ability to inhibit semantically meaningful distractors during an executive inhibition task strongly predicts performance on the RAT [38]. It should be noted that most recent work with the RAT has used an extended version called the Compound Remote Associates task (CRA; [39]), but this update of the RAT did not — and was not intended to — account for semantic relatedness between the stimulus words and the solution word. Other apparent explanations for the lack of relationship between the RAT and AUT include pronounced differences in the structure, timing, and measurement of these tasks that are unrelated to the constructs of divergence and convergence. Thus, it is not straightforward to compare performance on these tasks and conclude, for example, that a person's divergence exceeds their convergence.
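The semantic (as opposed to merely structural) relatedness at issue here can be quantified; one common modern approach is cosine similarity between word embeddings, in the spirit of semantic distance measures [35]. Below is a minimal sketch with toy vectors; a real analysis would substitute pretrained embeddings (e.g. GloVe or word2vec), and the specific vector values here are invented purely for illustration.

```python
# A sketch of quantifying cue-solution semantic relatedness via embedding
# cosine similarity. Vectors are toy stand-ins; real analyses would use
# pretrained embeddings such as GloVe or word2vec.
import numpy as np

toy_vectors = {
    "crab":  np.array([0.9, 0.1, 0.0]),  # invented sea-creature-ish direction
    "pine":  np.array([0.0, 0.9, 0.1]),  # invented tree-ish direction
    "sauce": np.array([0.1, 0.0, 0.9]),  # invented food-ish direction
    "apple": np.array([0.2, 0.5, 0.6]),  # invented fruit/food-ish direction
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for cue in ["crab", "pine", "sauce"]:
    print(f"sim(apple, {cue}) = {cosine(toy_vectors['apple'], toy_vectors[cue]):.2f}")
# Low cue-solution similarity despite a valid compound would flag an item
# as structurally, rather than semantically, related.
```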
Conclusion
In this review, we highlighted the need for measurement development and factor analytic work to establish divergence and convergence as separable subcomponents of creativity. We emphasized that the tasks used to assay these constructs separately actually involve mixtures of the two constructs together. We additionally highlighted that the RAT differs from divergent thinking measures in that it frequently does not involve semantic relations.
Recommendations
Creativity researchers must carefully consider whether the tasks we use align with the constructs we take them to measure. Single tasks like the AUT and the RAT are unlikely to capture pure divergence or convergence, and certainly cannot capture creativity as a whole. As such, caution is required when interpreting results that show a variable correlating with only one task. In order to effectively measure the target constructs, creativity research should ultimately seek to utilize a battery of tasks in order to capture shared variance. Intelligence researchers have benefited enormously from this approach, which has allowed them to predict elusive constructs reliably and with considerable success. A related goal, given the difficulty of directly comparing the RAT and AUT, is the development and rigorous validation of tasks that use the same criteria to measure both divergent and convergent thinking (for promising developments, see Refs [17,40,41,42]). Cause for optimism can also be found in the establishment of the Society for the Neuroscience of Creativity, whose charter prioritizes ontological demarcation of creativity constructs and measurements, and the gathering of consensus among researchers and others in education and industry who are invested in understanding and bolstering human creative potential. New progress in behavioral and brain-based creativity research is likely to hinge on the ontological clarity of divergent and convergent thinking.
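One concrete payoff of the battery approach is that the reliability of a composite score can be estimated directly. The sketch below computes Cronbach's alpha for a hypothetical four-task battery; all data are simulated for illustration, and the choice of four tasks is arbitrary.

```python
# A minimal sketch of composite reliability (Cronbach's alpha) for a
# hypothetical creativity task battery (rows = people, columns = tasks).
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=200)  # simulated shared ability
battery = np.column_stack(
    [latent + rng.normal(scale=0.8, size=200) for _ in range(4)]
)

def cronbach_alpha(scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(battery):.2f}")
```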
Conflict of interest statement
Nothing declared.
Acknowledgements
RAC, ABW, RJD, and AEG are supported by grants from the National Science Foundation [DRL-1420481] and the John Templeton Foundation [ID 51971]. RAC is additionally supported by the National Science Foundation Graduate Research Fellowship and the Patrick Healy Graduate Fellowship.
References and recommended reading
Papers of particular interest, published within the period of review, have been highlighted as:
• of special interest
•• of outstanding interest
1. Guilford JP: The Nature of Human Intelligence. McGraw-Hill; 1967.
2. Eysenck H: Creativity, personality, and the convergent-divergent continuum. In Critical Creative Processes. Edited by Runco MA. Hampton Press; 2003:95-114.
3. Runco MA, Jaeger GJ: The standard definition of creativity. Creat Res J 2012, 24:92-96.
4. Taylor CW: Various approaches to and definitions of creativity. In The Nature of Creativity: Contemporary Psychological Perspectives. Edited by Sternberg RJ. Cambridge University Press; 1988:99-121.
5. Mednick S: The associative basis of the creative process. Psychol Rev 1962, 69:220-232.
6. Chermahini SA, Hommel B: The (b)link between creativity and dopamine: spontaneous eye blink rates predict and dissociate divergent and convergent thinking. Cognition 2010, 115:458-465.
7. Ritter SM, Ferguson S: Happy creativity: listening to happy music facilitates divergent thinking. PLoS One 2017, 12:e0182210.
8. Colzato LS, Ritter SM, Steenbergen L: Transcutaneous vagus nerve stimulation (tVNS) enhances divergent thinking. Neuropsychologia 2018, 111:72-76.
9. Kuypers KPC, Riba J, de la Fuente Revenga M, Barker S, Theunissen EL, Ramaekers JG: Ayahuasca enhances creative divergent thinking while decreasing conventional convergent thinking. Psychopharmacology 2016, 233:3395-3403.
10. Ma K, Hommel B: Metacontrol and body ownership: divergent thinking increases the virtual hand illusion. Psychol Res 2018:18.
11. Campbell DT: Blind variation and selective retention in creative thought as in other knowledge processes. Psychol Rev 1960, 67:380-400.
12. Basadur M: Optimal ideation-evaluation ratios. Creat Res J 1995, 8:63-75.
13. Brophy DR: Understanding, measuring and enhancing individual creative problem-solving efforts. Creat Res J 1998, 11:123-150.
14. Runco MA: Creativity: Theories and Themes: Research, Development, and Practice. Academic Press; 2007.
15. Allen AP, Thomas KE: A dual process account of creative thinking. Creat Res J 2011, 23:109-118.
16. Sowden PT, Pringle A, Gabora L: The shifting sands of creative thinking: connections to dual-process theory. Think Reason 2015, 21:40-60.
17. Goldschmidt G: Linkographic evidence for concurrent divergent and convergent thinking in creative design. Creat Res J 2016, 28:115-122.
This study utilizes linkography to demonstrate the cyclical shift between divergent and convergent thinking within the engineering design process. It presents an effective method for measuring divergence and convergence along similar dimensions within a domain-specific form of creativity.
18. Barr N: Intuition, reason, and creativity: an integrative dual process perspective. In The New Reflectionism in Cognitive Psychology. Edited by Pennycook G. Routledge Press; 2018:99-124.
This review provides an extensive summary of the empirical evidence linking divergent, convergent, executive, and associative processes to the facilitation of creative cognition. It covers both foundational and recent cognitive neuroscientific evidence.
19. Horn JL, Cattell RB: Refinement and test of the theory of fluid and crystallized general intelligences. J Educ Psychol 1966, 57:253-270.
20. McGrew KS: Analysis of the major intelligence batteries according to a proposed comprehensive Gf-Gc framework. J Ment Retard 1997, 89:215-230.
21. McGrew KS: The Cattell-Horn-Carroll theory of cognitive abilities. In Contemporary Intellectual Assessment: Theories, Tests, and Issues. Edited by Flanagan DP, Harrison PL. Guilford Press; 2005:136-181.
22. Schneider WJ, McGrew KS: The Cattell-Horn-Carroll theory of cognitive abilities. In Contemporary Intellectual Assessment: Theories, Tests, and Issues. Edited by Flanagan DP, Harrison PL. Guilford Press; 2012:99-144.
23. Gustafsson JE: A unifying model for the structure of intellectual abilities. Intelligence 1984, 8:179-203.
24. Ekstrom RB, Dermen D, Harman HH: Manual for Kit of Factor-Referenced Cognitive Tests. Educational Testing Service; 1976.
25. Carroll JB: Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge University Press; 1993.
26. Vartanian O, Martindale C, Kwiatkowski J: Creative potential, attention, and speed of information processing. Personal Individ Diff 2007, 43:1470-1480.
27. Lee CS, Therriault DJ: The cognitive underpinnings of creative thought: a latent variable analysis exploring the roles of intelligence and working memory in three creative thinking processes. Intelligence 2013, 41:306-320.
28. Lee CS, Huggins AC, Therriault DJ: A measure of creativity or intelligence? Examining internal and external structure validity evidence of the remote associates test. Psychol Aesthet Creat Arts 2014, 8:446-460.
29. Kaufman JC, Plucker JA, Baer J: Essentials of Psychological Assessment Series. Essentials of Creativity Assessment. John Wiley & Sons Inc; 2008.
30. Runco MA, Acar S: Divergent thinking as an indicator of creative potential. Creat Res J 2012, 24:66-75.
31. Bowden EM, Jung-Beeman M, Fleck J, Kounios J: New approaches to demystifying insight. Trends Cogn Sci 2005, 9:322-328.
32. Crutchfield R: Conformity and character. Am Psychol 1955, 10:191-198.
33. Mednick MT: Research creativity in psychology graduate students. J Consult Psychol 1963, 27:265-266.
34. Gilhooly KJ, Fioratou E, Anthony SH, Wynn V: Divergent thinking: strategies and executive involvement in generating novel uses for familiar objects. Br J Psychol 2007, 98:611-625.
35. Green AE: Creativity, within reason: semantic distance and dynamic state creativity in relational thinking and reasoning. Curr Dir Psychol Sci 2016, 25:28-35.
This review focuses on a program of cognitive and brain-based research in our lab that has helped to establish the value of semantic distance measurement in characterizing creative cognition, especially creative analogical reasoning.
36. Worthen BR, Clark PM: Toward an improved measure of remote associational ability. J Educ Measur 1971, 8:113-123.
37. Crab apple. In Oxford Advanced American Dictionary. OxfordLearnersDictionaries.com; 2018. https://www.oxfordlearnersdictionaries.com/us/definition/english/crab-apple.
38. White HA, Shah P: Uninhibited imaginations: creativity in adults with attention-deficit/hyperactivity disorder. Personal Individ Diff 2006, 40:1121-1131.
39. Bowden EM, Jung-Beeman M: Normative data for 144 compound remote associate problems. Behav Res Methods Instrum Comput 2003, 35:634-639.
40. Barbot B, Besançon M, Lubart T: The generality-specificity of creativity: exploring the structure of creative potential with EPoC. Learn Individ Diff 2016, 52:178-187.
This study outlines the multidimensional and hierarchical structure of creative thought and emphasizes the need to measure it with comprehensive test batteries sampling a range of creative tasks, domains and creative thinking modes.
41. Rietzschel EF, Nijstad BA, Stroebe W: Effects of problem scope and creativity instructions on idea generation and selection. Creat Res J 2014, 26:185-191.
42. Kleinmintz OM, Abecasis D, Tauber A, Geva A, Chistyakov AV, Kreinin I, Klein E, Shamay-Tsoory SG: Participation of the left inferior frontal gyrus in human originality. Brain Struct Funct 2018, 223:329-341.
This study introduces a task that measures both divergent and convergent thinking along the same dimensions. Importantly, this task can be used in fMRI research to reveal the shared (or disparate) neural substrates that underlie divergence and convergence.