Thinking Skills and Creativity 33 (2019) 100574
Contents lists available at ScienceDirect
Thinking Skills and Creativity journal homepage: www.elsevier.com/locate/tsc
Latency as a predictor of originality in divergent thinking

Selcuk Acar a,⁎, Ahmed M. Abdulla Alabbasi b, Mark A. Runco c, Kenes Beketayev d

a Buffalo State, The State University of New York, United States
b Arabian Gulf University, Bahrain
c Southern Oregon University, United States
d SparcIT, United States
ARTICLE INFO

Keywords: Divergent thinking; Latency; Order effect; Think time; Originality

ABSTRACT
Previous research on divergent thinking (DT) indicates that fluency, originality, and flexibility change with time. Although ideational productivity per minute drops, ideas tend to become more original and flexible as time passes, a phenomenon known as the order effect. The present research extends previous findings of longer latencies during flexible ideation by examining the relationship between latency and originality. 1325 verbal and 488 figural responses generated by 83 people were analyzed in a multilevel model (MLM) in which ideas were nested under the type of DT task and DT tasks were nested under individuals. Originality was measured with a semantics-based algorithm, and latency was measured as the difference in time between consecutive ideas. Analyses controlling for response order found that originality was higher with longer latency. These findings indicate that longer think time (TT) is a predictor of originality. This holds true in both early and late sections of DT, as well as across different types of DT tasks. The results are interpreted in terms of associational processes and executive functions.
1. Introduction

Often used as estimates of creative potential, DT tests provide fluency, flexibility, originality, and elaboration scores (Guilford, 1956; Runco & Acar, 2012, 2018). Fluency reflects ideational productivity and is operationalized by simply counting the number of discrete ideas given by any one individual. The variety and diversity of the ideas are captured by counting conceptual categories, which indicates one's flexibility. The detail and elegance of the ideas show the level of elaboration. The most statistically infrequent ideas are considered the original ones. Originality can be viewed as the most important of the indices because it has been shown to be the strongest predictor of creativity (Acar, Burnett, & Cabra, 2017) and the only component of creativity that most theories recognize as a prerequisite (Rothenberg & Hausman, 1976; Runco, 1988; Runco, Illies, & Eisenman, 2005). This article focuses on originality. DT tests are useful for creativity research because they are operational and allow the testing of hypotheses. One set of new testable hypotheses involves the relationship of time to DT (Runco & Cayirdag, 2011). Creativity research has usually approached time in terms of the time allotted while testing DT (Benedek, Mühlmann, Jauk, & Neubauer, 2013; Plucker, Runco, & Lim, 2006; Preckel, Wermer, & Spinath, 2011; Torrance, 1969; Vernon, 1971), the time provided to allow incubation (Cai, Mednick, Harrison, Kanady, & Mednick, 2009; Coskun, 2005; Gilhooly, Georgiou, Garrison, Reston, & Sirota, 2012; Madjar & Shalley, 2008), or time explaining changes in creativity across developmental stages (Charles & Runco, 2001; Claxton, Pannells, & Rhoads, 2005; Daugherty, 1993; Lau & Cheung,
⁎ Corresponding author at: International Center for Studies in Creativity, 1300 Elmwood Ave., Chase 239, Buffalo, NY, 14222, United States.
E-mail address: acars@buffalostate.edu (S. Acar).
https://doi.org/10.1016/j.tsc.2019.100574 Received 13 February 2019; Received in revised form 28 May 2019; Accepted 6 June 2019 Available online 07 June 2019 1871-1871/ © 2019 Elsevier Ltd. All rights reserved.
2010b; Nash, 1974; Simonton, 1977; Torrance, 1968; Williams, 1976) or eras (Kim, 2011; Simonton, 1975). These lines of investigation contributed concepts such as the fourth-grade slump (Torrance, 1968) and the 10-year rule (Bloom, 1985; Hayes, 1989; Kaufman & Kaufman, 2007) to the creativity literature. Analyzing DT in terms of the time spent on ideas has been a fertile area of research as well. Time is an important part of the process of DT and is objective. The indices mentioned above (fluency, originality, and flexibility) can each be related to the use of time while taking DT tasks. Parnes (1961) suggested extending the time spent on ideation, known as the extended effort principle, because more ideas lead to more original ideas. He found more original ideas in the last phase of ideation. Higher originality in late portions was subsequently supported by Mednick's (1962) findings. According to Mednick, most people generate many ideas early in the DT session and run out of ideas quickly, whereas creative individuals are capable of producing ideas over an extended period of time. In other words, the rate of ideational productivity is negatively related to time. Christensen, Guilford, and Wilson (1957) also found a decreasing rate of fluency over time. They investigated three outcomes from DT in terms of temporal order: remoteness, uncommonness, and cleverness. They expected an upward trend in each of these across time, with higher remoteness, uncommonness, and cleverness in late ideas than in early ideas. They found a positive relationship between temporal order and the remoteness and uncommonness of ideas, but not their "cleverness." The remoteness and uncommonness of ideas indicate their originality and the greater use of diverse categories, which is flexibility. Since Christensen et al. (1957), many research reports have provided evidence supporting this order effect on originality (Mednick, 1962; Milgram & Rabkin, 1980; Phillips & Torrance, 1977; Runco, 1986; Ward, 1969) and flexibility (Runco, 1986). These findings showed that as ideation continues in DT, ideas tend to become more original and are more likely to be drawn from new conceptual categories. Recently, extended effort was investigated to better understand the order effect using more advanced techniques, such as online testing and computer-based DT tests (e.g., Beaty & Silvia, 2012) and latent semantic analysis (LSA; Hass, 2017a, 2017b). The order effect has also been examined using neuroscientific methods such as EEG and fMRI while participants completed different DT tasks (Heinonen et al., 2016; Wang, Hao, Ku, Grabner, & Fink, 2017). Although these studies used different methods to study the order effect, they all reached the same conclusion: originality increases over time but fluency decreases. Several of these studies suggested that the order effect could be explained in terms of the involvement of executive processes. For example, Beaty and Silvia (2012) examined the role of fluid intelligence and concluded that "people higher in fluid intelligence started with better ideas and did better throughout the task—their initial ideas were as good as their later ones" (p. 314). This finding suggests that fluid intelligence moderates the serial order effect. Similarly, Wang et al. (2017) looked at the possible role of three executive processes (updating, shifting, and inhibition) in the serial order effect. Not surprisingly, the shifting function explained why originality increases over time (i.e., later ideas are more original). In one recent investigation, Acar and Runco (2017) asked participants to think aloud while responding to verbal and figural DT tasks.
Participants verbalized their ideas rather than writing the responses on a piece of paper, and what they said was recorded, transcribed, and coded. This method allowed tracking of the time at which ideas were generated. Acar and Runco (2017) calculated latency (i.e., the time elapsing between consecutive ideas) to investigate its relation to category switching. They expected that latency would be higher when participants switch from one category to another because of the need to make remote associations and the greater involvement of executive functions while switching. For example, when thinking about vegetables, human cognition can find "tomato" right after "lettuce," as both are vegetables, whereas "remote control device" may take more time because it belongs to a different semantic category (technology). Acar and Runco found that a category switch took 5 s longer than instances in which participants stayed within the same category. This difference was even more pronounced with figural tasks than with verbal tasks. In a subsequent re-analysis, Acar, Runco, and Ogurlu (2019) considered the role of the order effect in latency between adjacent responses generated for DT. This research extended previous findings that ideas become more original later in DT output (Christensen et al., 1957; Mednick, 1962; Milgram & Rabkin, 1980; Runco, 1986; Ward, 1969). When response order was added to the model, Acar et al. found that latency was not equal across initial versus late phases. Participants used more time later than earlier in DT. Moreover, the latency required for a category switch was not equal throughout the DT session. Acar et al. (2019) found an interaction effect of response order and category switch, indicating that a category switch takes more time later than earlier. The present study extends this line of research (Acar & Runco, 2017; Acar et al., 2019), which focused solely on the relationship between latency between adjacent responses and flexibility, to originality.
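As an illustration, the latency measure described here (time elapsed between consecutive responses) can be computed from timestamped ideas as follows. This is a minimal sketch with hypothetical timestamps, not the instrumentation used in the studies cited above:

```python
# Sketch: latency between adjacent responses within one DT task.
def latencies(timestamps):
    """Time elapsed between each pair of consecutive responses (seconds)."""
    return [round(t2 - t1, 3) for t1, t2 in zip(timestamps, timestamps[1:])]

# Hypothetical example: four ideas logged at 3.2 s, 5.0 s, 11.5 s, and 25.0 s
# after task onset.
print(latencies([3.2, 5.0, 11.5, 25.0]))  # → [1.8, 6.5, 13.5]
```

Note that a task with n responses yields n − 1 latency values, which is why such analyses operate at the response level rather than the person level.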
Does latency between adjacent responses also predict originality? There are several reasons to hypothesize that latency would predict originality. First, time taken during idea generation in DT may imply further refinement of ideas through processes such as idea combination and avoidance of premature closure. Such processes are likely to enhance the quality and originality of the ideas. This is consistent with greater use of executive functions, because ideation involving deliberate processes requires more use of executive functions (Benedek, Jauk, Sommer, Arendasy, & Neubauer, 2014; Gilhooly, Fioratou, Anthony, & Wynn, 2007). Second, the exploration of diverse conceptual categories (higher flexibility) may yield novel ideas as a result of cognitive agility. In other words, originality may be a byproduct of making mental leaps among different conceptual categories. Christensen et al.'s (1957) findings of higher "remoteness" with time may explain why "uncommonness" was more frequent in late ideas. As people make more remote associations, the probability of making a common connection diminishes. This explanation is related to the spreading activation theory of semantic memory (Collins & Loftus, 1975), in which memory search takes place on the basis of the semantic proximity of ideas, and strongly related ideas take a shorter amount of time to retrieve than remotely related ones (Balota & Lorch, 1986; Den Heyer & Briand, 1986; Lorch, 1982; Kenett, 2018). This perspective is particularly relevant to the present study because originality was operationalized on the basis of semantic distance (see Method, below). A third reason for the current hypothesis is that initial ideas take less time (Acar et al., 2019; Christensen et al., 1957) and tend to be less original (Mednick, 1962; Milgram & Rabkin, 1980; Runco, 1986; Ward, 1969), probably because they often come from experiences that are readily available in memory, compared to those that come later in DT (Gilhooly et al., 2007).
As people rely
on memory search less and utilize more imagination, the originality of the generated ideas should increase (Runco, Okuda, & Thurston, 1991). An important feature of the present study is that it calculated originality through a semantics-based algorithm (SBA) approach (Beketayev & Runco, 2016), which combines 12 different semantic networks to measure semantic distance. Reliability was higher for semantics-based originality than for the traditional method. In this method, higher originality refers to the greater distance of the words and concepts within the responses provided by an individual. The average of the distance across the 12 networks was adjusted by occurrence frequency to reward ideas that are less likely to be used with the words in the DT prompt. The similarity between the DT prompt and responses is high when the responses include concepts closely related to the DT prompt; in that case, similarity is higher and originality is lower compared to the case in which unrelated concepts are used. This method is similar to latent semantic analysis, which has recently become quite popular in the field (Dumas & Dunbar, 2014; Forster & Dunbar, 2009; Forthmann, Holling, Çelik, Storme, & Lubart, 2017; Forthmann, Oyebade, Ojo, Günther, & Holling, 2018; Hass, 2017a, 2017b; Kenett, 2018). This approach differs from sample-based infrequency counts, the traditional method in DT research. A limitation of that traditional approach is that the pool of common responses, which is used to identify uncommon, infrequent, and original responses, is influenced by the sample size and respondent characteristics. A new trend in DT assessment is the latent semantic analysis (LSA; Forster & Dunbar, 2009) approach, in which the originality of an idea is determined by its semantic distance (i.e., low conceptual proximity), as defined in a large set of texts.
LSA uses external criteria (i.e., text corpora) to quantify originality, whereas the traditional method uses internal criteria (i.e., the pooled responses of the sample). Our method builds on and extends the LSA approach in that it obtains semantic similarity scores through multiple corpora, such as Wikipedia. In contrast to the LSA approach, SBA adjusts these scores by weighting the semantic similarity obtained from the responses given for each DT task. Therefore, the SBA approach integrates the external and internal corpora (i.e., responses to DT). The most important advantage of this alternative approach is that it is more convenient: scoring is automated and therefore objective, involving no human judgment. Acar and Runco (2014) used three different associative networks to quantify the distance of each individual idea given to DT tasks. They found good evidence of reliability and validity. Importantly, they found significantly more remote associations in the second half than in the first half of DT. The order effect for originality thus still applies when originality is operationalized in terms of semantic distance. In both the LSA and SBA approaches, semantic distance is defined as the opposite of semantic similarity. Recently, Hass and Beaty (2018) tested the relationship between latency and the similarity of adjacent responses in DT, with similarity determined through subjective ratings. They found a negative relationship between the similarity of adjacent responses and latency: latency was shorter between similar responses. Hass (2017a, 2017b) also looked into the relationship between latency and "global similarity" (i.e., semantic similarity between the responses and the DT prompt) and "local similarity" (i.e., similarity between adjacent responses). In contrast to Hass and Beaty (2018), Hass (2017a, 2017b) assessed semantic similarity based on latent semantic analysis (LSA; Dumais, Furnas, Landauer, Deerwester, & Harshman, 1988; Forster & Dunbar, 2009).
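The core idea shared by LSA and SBA, originality scored as semantic distance (the inverse of semantic similarity), can be illustrated with a toy computation. The vectors below are hypothetical stand-ins for corpus-derived word representations; this is not the actual SBA pipeline, which aggregates 12 semantic networks and applies frequency adjustments:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical vector representations of a DT prompt and two responses.
prompt = [1.0, 0.2, 0.0]          # e.g., "brick"
near_response = [0.9, 0.3, 0.1]   # e.g., "build a wall" (semantically close)
far_response = [0.1, 0.1, 1.0]    # e.g., "grind into pigment" (semantically remote)

# Originality as semantic distance: the inverse of similarity.
originality_near = 1 - cosine_similarity(prompt, near_response)
originality_far = 1 - cosine_similarity(prompt, far_response)
assert originality_far > originality_near  # remote responses score as more original
```

Under this operationalization, any scoring scheme that ranks responses by their distance from the prompt will reward remote associations, which is the property the order-effect literature relies on.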
Another difference was that Hass (2017a, 2017b) used aggregated latency for each individual, whereas Hass and Beaty (2018) conducted multilevel analyses at the level of individual responses (Level 1), controlling for prompt type at Level 2. Hass (2017a, 2017b) found no relationship between latency and global similarity (i.e., the semantic similarity between the DT prompt and the responses to it), whereas latency was negatively related to local similarity (i.e., semantic similarity between adjacent responses). The present study looks into the relationship between latency and originality, operationalized as semantic distance (lack of similarity) between the DT prompt and the responses. Hass (2017a, 2017b) did not find a significant relationship between semantic distance (DT prompt vs. response) and latency when he used LSA to measure semantic distance. The present study investigated these same questions but used the SBA scoring method.

2. Method

2.1. Participants

Data were collected from a sample of 117 graduate and undergraduate bilingual Bahraini students who were studying in the United States of America. The sample was recruited through the Cultural Office of the Bahrain Embassy. Participants received emails with the following instructions: "Please find below two links: one for those who prefer to participate in Arabic, and the other for those who prefer to work on the English version of the test. There is no preference regarding the version (Arabic/English); the most important thing is to choose the language in which you better express your ideas and thoughts (i.e., creativity)." The link contained a consent form, which included information about the study. All participants were asked for their consent to participate. Participants filled out a demographic questionnaire, which asked for information about age, gender, educational level, college major, and previous exposure to creativity.
Among the 117 participants who agreed to take part in this study, 86 preferred to respond in English. The 31 participants who decided to respond in Arabic were dropped from the data because the SBA method is currently not available in Arabic. Of the 86 participants who responded in English, three were excluded from the analyses because they spent more than one hour on the test, which may have occurred due to interruption or off-task time during online testing. This practice is consistent with previous research (e.g., Hass, 2017a, 2017b; Weinberger, Iyer, & Green, 2016) that examined time spent on task as a method of checking data quality. The final sample used in the present study comprised 83 participants. The average time participants spent on the DT tasks was 11.36 min. The mean age was 32.8 (SD = 11.8); 47 (56.6%) of the participants were female and 36 (43.4%) were male. The majority reported
studying for a master's degree (n = 44; 53.1%), 28 (33.7%) reported studying for a bachelor's degree, 8 (9.6%) reported studying at the doctoral level, and 3 (3.6%) reported studying for a professional degree. Eight majors were represented as follows: 23 (27.7%) business, 17 (20.5%) engineering, 10 (12%) information technology, 9 (10.8%) education and psychology, 7 (8.4%) languages, 5 (6.1%) medicine, 5 (6.1%) political sciences and law, and 1 (1.2%) religion; 6 (7.2%) did not report their major. Regarding previous exposure to creativity, 63 (75.9%) reported not having taken a class focused on creativity, 63 (75.9%) reported not having read a book focused on creativity, and 56 (67.5%) reported not having attended a workshop focused on creativity.

2.2. Instruments

Two divergent thinking tests were used: three verbal tasks (Uses Test: Toothbrush, Wheel, and Spoon) from Wallach and Kogan (1965) and one figural task from the Runco Creativity Assessment Battery (r-CAB). The verbatim instructions for the Uses Test were: "People typically use everyday items for specific purposes. Often there are alternative uses for the same objects. For example, a newspaper could be used as a hat or a blanket, and many other things. For the following items, list as many alternative uses as you can. The more uses you think of, the better. Do not worry about spelling." The verbatim directions for the figural DT task were: "Look at the figure below. What do you see? List as many things as you can that this figure might be or represent. This is NOT a test. Think of this as a game and have fun with it! The more ideas you list, the better."
2.3. Procedure

After obtaining the Institutional Review Board's (IRB) approval, one of the authors officially contacted the Bahrain Cultural Office to ask for permission to administer the study to a sample of Bahraini students. Copies of all of the study instruments were provided to the Cultural Office. The DT tests were presented to a sample of Arab students who were given the option to choose the language in which they felt comfortable completing the tasks. All of the study data, including latency data, were collected through the online survey. For accuracy, time was recorded in seconds and milliseconds. Latency was calculated as the difference in time between all adjacent ideas given for the same DT item.

2.4. Data analysis

The present study followed the statistical model used in Acar and Runco (2017) for the verbal DT. They used a three-level multilevel model in which latency was the dependent variable and category switch was a predictor at Level 1. In this model, Level 1 represents the responses given to DT tasks, Level 2 represents DT tasks, and Level 3 represents individuals. This model is appropriate for the present data because analyses are conducted at the response level; responses are nested in DT tasks because DT tasks allow multiple responses. Likewise, DT tasks are nested in individuals because each individual responds to multiple DT tasks. The multilevel modeling approach allows controlling for variance that stems from differences among the DT tasks employed and from individual differences. The data analysis model was slightly different for the figural DT because the explicit instructions were slightly different and there was only one figural DT task, as opposed to three verbal tasks. Therefore, there were two levels: responses (Level 1) and participants (Level 2). In the present study, originality was used as the dependent variable and latency was the predictor.
Therefore, using Snijders and Bosker's (2012) notation, the model for the verbal DT was built from the following variables:

a) Dependent variable (Yijk): Originality, measured as the semantic distance between the responses and the stimulus (continuous)
b) Level 1 (response) predictor (X100): Latency, the time spent between two adjacent responses (continuous)
c) Level 1 (response) predictor (X200): Sequence, the order of the responses given to DT tasks (ordinal)
d) Level 1 (response) predictor (X300): Character count of the individual responses given to DT (continuous)

Following Snijders and Bosker (2012), the unconditional model was:

Yijk = γ000 + V00k + U0jk + Rijk

where γ000 represents the intercept, and Rijk, U0jk, and V00k represent the residuals at Levels 1, 2, and 3, respectively, for level-two unit j within level-three unit k. Their variances are represented by σ2, τ02, and δ02, respectively. In order to explain variation in Yijk (i.e., originality), the full three-level model was tested with three predictors using the following equation:

Yijk = γ000 + γ100 X100 + γ010 X010 + γ001 X001 + V00k X010 + U0jk X100 + V00k + U0jk + Rijk

where γ100 represents the coefficient of the Level 1 predictor X100, γ010 is the coefficient of the Level 2 predictor X010, and γ001 is the coefficient
of the Level 3 predictor X001. Because all of the predictors were at Level 1, the following equation represents the possible effects, both fixed and random, to observe:

Yijk = γ000 + γ100 (CHARACTER) + γ200 (SEQUENCE) + γ300 (LATENCY) + γ010 (DT TASK) + γ001 (STUDY PARTICIPANTS) + V00k (CHARACTER) + V00k (SEQUENCE) + V00k (LATENCY) + U0jk (CHARACTER) + U0jk (SEQUENCE) + U0jk (LATENCY) + V00k + U0jk + Rijk

Analyses were conducted separately for the verbal and figural items because of the slight differences in the instructions and the possibility of different processes being involved in verbal versus figural stimuli (Runco & Albert, 1985), which is particularly important when originality is scored based on semantic networks. The model above applied only to the verbal data, which involved three different DT tasks. The model for the figural data had only two levels because there was only one figural DT item in the present analyses: Level 1 represents responses and Level 2 represents study participants. Therefore, the following represents the unconditional model:

Yij = γ00 + U0j + Rij

and the full model is stated as follows:

Yij = γ00 + γ10 (CHARACTER) + γ20 (SEQUENCE) + γ30 (LATENCY) + γ01 (STUDY PARTICIPANTS) + U0j (CHARACTER) + U0j (SEQUENCE) + U0j (LATENCY) + U0j + Rij

Models were tested with maximum likelihood (ML) estimation, and model fit was examined through deviance values (Snijders & Bosker, 2012) as obtained from the PROC MIXED procedure in SAS.

3. Results

The following analyses used the same statistical model as Acar and Runco (2017). Because verbal and figural DT stimuli may produce different effects (Acar & Runco, 2017), and the instructions were slightly different, we ran separate analyses for the verbal and figural DT tasks. In both analyses, there were three predictors at Level 1: character count of the responses, response order, and latency. Originality was calculated based on SBA (Beketayev & Runco, 2016).
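Nested random-intercept models of this kind can also be fit outside SAS. The following is a minimal Python sketch on synthetic data (hypothetical variable names and effect sizes; statsmodels' variance-component syntax standing in for PROC MIXED), showing responses nested in tasks nested in persons:

```python
# Sketch: a nested random-intercepts model akin to the verbal DT specification,
# fit on synthetic data. All values here are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_people, n_tasks, n_resp = 30, 3, 5
rows = n_people * n_tasks * n_resp
person = np.repeat(np.arange(n_people), n_tasks * n_resp)
task = np.tile(np.repeat(np.arange(n_tasks), n_resp), n_people)
latency = rng.exponential(4.0, rows)
person_eff = rng.normal(0.0, 0.4, n_people)[person]
task_eff = rng.normal(0.0, 0.3, n_people * n_tasks)[person * n_tasks + task]
originality = 0.10 * latency + person_eff + task_eff + rng.normal(0.0, 0.5, rows)
df = pd.DataFrame({"person": person, "task": task,
                   "latency": latency, "originality": originality})

# Random intercept per person (Level 3); variance component for task within
# person (Level 2); individual responses are the Level 1 units.
fit = smf.mixedlm("originality ~ latency", df, groups=df["person"],
                  vc_formula={"task": "0 + C(task)"}).fit()
print(fit.params["latency"])  # should recover a value near the simulated 0.10
```

In practice, adding sequence and character count as further Level 1 fixed effects, as in the models above, only changes the formula string.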
There were no Level 2 or Level 3 predictors.

3.1. Analyses with verbal DT

The analyses used 1325 ideas generated by 83 people. First, using an outlier detection technique (Tukey, 1977), cases with extreme latency (i.e., > 15 s) were removed, as they might indicate task interruption, which is possible in online administration of DT. Second, because originality had a non-normal distribution, values were converted to z-scores. The unconditional model (Model 1), which had no predictors, is presented in Table 1. Because the Level 3 variance was 0, the intraclass correlation (ICC) was calculated between Levels 1 and 2; it was .15. This value represents the correlation among the responses generated for the same DT item. Model building continued by adding predictors. Model fit was compared through the deviance statistic, which is reported in Table 1 for each model tested (Snijders & Bosker, 2012). Model parameters were estimated via maximum likelihood (ML) unless two nested models were compared; Table 1 also includes deviance values based on restricted ML (REML), presented in brackets. Total latency between two adjacent responses included the time spent typing a response. To control for typing time, in Model 2 we added the number of characters in responses (i.e., character count) to the model. Inclusion of character count improved the model fit (see the change in deviance values between Model 1 and Model 2 in Table 1). Then, following the guidelines of Snijders and Bosker (2012), we explored whether a random slope of character count improved the model. When added at Level 2 and Level 3, the random slope of character count did not improve the model; it was therefore dropped. Building on previous work showing the role of response order (Acar et al., 2019), idea sequence was then added to the model, and it improved the model fit (see Model 3).
The random slope for response order also improved the model and was retained in Model 4. The next model (Model 5) included latency, which improved the model, but its random slope did not; hence the random slope was dropped. The next model (Model 6) added the latency by character count interaction effect, which provided a better fit than the previous model. The latency by sequence and sequence by character count interactions did not improve the model and were dropped. Again following the suggestions of Snijders and Bosker (2012), we compared model fit after dropping non-significant effects. Model fit remained the same only when the Level 2 random slope of sequence was dropped (see Model 7). Model 7 was thus the best-fitting model with the fewest parameters (Table 1). Model 7 indicates that originality is positively related to latency (γ300 = 0.082, SE = 0.018, p < .01) even when character count (γ100 = 0.047, SE = 0.008, p < .01) and sequence (γ200 = 0.053, SE = 0.013, p < .01) are controlled. Spending more time on a response appears to predict its originality. The significant interaction effect of latency by character count (γ400 = −0.005, SE = 0.001, p < .01) indicates that originality is higher when more time is spent on responses with fewer characters.

3.2. Analyses with figural DT

Because there was only one figural DT item used in the analyses, the data had two levels: responses at Level 1 and study participants
Table 1
Multilevel Model Building with Originality as the Dependent Variable in Verbal DT Tasks.

| Parameter | Model 1 (3-level unconditional) | Model 2 (character count added) | Model 3 (sequence added) | Model 4 (sequence random slope added) | Model 5 (latency added) | Model 6 (latency × character added) | Model 7 (Level 2 random slope for sequence dropped) |
|---|---|---|---|---|---|---|---|
| **Fixed effects** | | | | | | | |
| Intercept | 0.004 (0.037) | −0.291 (0.062)** | −0.372 (0.065)** | −0.466 (0.069)** | −0.484 (0.070)** | −0.815 (0.098)** | −0.806 (0.098)** |
| Character | | 0.022 (0.004)** | 0.021 (0.004)** | 0.021 (0.004)** | 0.016 (0.005)** | 0.047 (0.008)** | 0.047 (0.008)** |
| Sequence | | | 0.025 (0.006)** | 0.054 (0.013)** | 0.055 (0.013)** | 0.055 (0.013)** | 0.054 (0.013)** |
| Latency | | | | | 0.019 (0.012) | 0.082 (0.018)** | 0.082 (0.018)** |
| Character × Latency | | | | | | −0.005 (0.001)** | −0.005 (0.001)** |
| **Random effects** | | | | | | | |
| Level 1 residual | 0.843 (0.036)** | 0.842 (0.038)** | 0.822 (0.037)** | 0.810 (0.038)** | 0.810 (0.038)** | 0.791 (0.037)** | 0.790 (0.038)** |
| Level 2 intercept | 0.150 (0.030)** | 0.125 (0.029)** | 0.140 (0.030)** | 0.116 (0.032)** | 0.116 (0.032)** | 0.113 (0.032)** | 0.125 (0.030)** |
| Level 2 slope (sequence) | | | | 0.000 (0.000) | 0.000 (0.000) | 0.000 (0.000) | |
| Level 3 intercept | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Level 3 slope (sequence) | | | | 0.001 (0.001) | 0.001 (0.001) | 0.001 (0.001) | 0.001 (0.001) |
| **Model summary** | | | | | | | |
| Deviance | 3684.5 | 3230.3 | 3215.4 [3237.7] | 3204.1 [3225.0] | 3167.7 | 3167.2 [3212.8] | 3173.9 [3207.8] |
| AIC | 3690.5 | 3238.3 | 3225.4 [3241.7] | 3218.1 [3233.0] | 3204.7 | 3185.2 [3220.8] | 3189.9 [3213.8] |
| BIC | 3697.7 | 3247.9 | 3237.4 [3246.4] | 3234.8 [3242.6] | 3223.9 | 3206.7 [3230.4] | 3209.1 [3221.0] |
| # of estimated parameters | 4 | 5 | 6 | 8 | 9 | 10 | 9 |

Note: DT = divergent thinking. Values in parentheses are standard errors of the respective model parameter estimates. Values in brackets are model summary results based on restricted maximum likelihood (REML) estimation. * p < .05. ** p < .01.
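The ICC of .15 reported for the verbal data follows directly from the Model 1 variance components in Table 1 (Level 2 intercept variance 0.150, Level 1 residual variance 0.843, Level 3 variance estimated at zero):

```python
# ICC for the verbal unconditional model: the share of total variance that
# lies at the DT-task level (values taken from Model 1 in Table 1).
level1_residual = 0.843   # Level 1 residual variance
level2_intercept = 0.150  # Level 2 intercept variance
level3_intercept = 0.0    # Level 3 variance was estimated at zero

icc = level2_intercept / (level1_residual + level2_intercept + level3_intercept)
print(round(icc, 2))  # → 0.15
```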
Table 2
Multilevel Model Building with Originality as the Dependent Variable in Figural DT Tasks.

| Parameter | Model 1 (2-level unconditional) | Model 2 (character count added) | Model 3 (latency added) | Model 4 (latency × character count added) |
|---|---|---|---|---|
| **Fixed effects** | | | | |
| Intercept | 0.000 (0.000) | −0.017 (0.093) | −0.021 (0.094) | −0.344 (0.152)* |
| Character | | 0.002 (0.009) | −0.000 (0.011) | 0.041 (0.019)* |
| Latency | | | 0.007 (0.018) | 0.063 (0.027)* |
| Character × Latency | | | | −0.006 (0.002)** |
| **Random effects** | | | | |
| Level 1 residual | 0.998 (0.064)** | 1.007 (0.007)** | 1.010 (0.074)** | 0.987 (0.074)** |
| Level 2 intercept | 0 | 0 | 0.000 (0.031) | 0.008 (0.035) |
| **Model summary** | | | | |
| Deviance | 1383.9 | 1271.5 | 1261.8 | 1254.9 |
| AIC | 1387.9 | 1277.5 | 1271.8 | 1266.9 |
| BIC | 1392.7 | 1284.7 | 1283.8 | 1281.3 |
| # of estimated parameters | 3 | 4 | 5 | 6 |

Note: DT = divergent thinking. Values in parentheses are standard errors of the respective model parameter estimates. * p < .05. ** p < .01.
at Level 2. The analyses used 488 responses from 83 participants (the same participant group as in the verbal analyses). As in the verbal analyses, originality values were converted to z-scores, and extreme values of latency were removed through outlier analysis (Tukey, 1977). A two-level unconditional model (Model 1) had zero Level 2 variance; therefore, the intraclass correlation could not be calculated. Although this implies there may be no need for a multilevel structure, we continued building the model in a two-level structure to control for possible effects related to the predictors to be added. Model building continued by adding predictors (see Table 2). First, character count was added, and it improved the model (Model 2); its random slope, however, did not. Next, sequence was added to the model, but it did not improve the model. Then we added latency, and the model fit improved (see the change in deviance values between Model 2 and Model 3). A random slope for latency was then added, but it did not improve the model. Next, we added the interaction of latency by character count, and model fit improved (Model 4). Addition of the other interaction terms (sequence by character count and sequence by latency) did not improve the model. Therefore, Model 4 was accepted as the final model. This model provided very similar results to those from the verbal items. Originality was positively related to latency (γ30 = 0.063, SE = 0.027, p = .021) even when character count was controlled (γ10 = 0.041, SE = 0.019, p = .032). Moreover, the interaction of latency by character count was also significant (γ40 = −0.006, SE = 0.002, p = .008). In contrast to the results from the verbal items, sequence had no effect on originality, and there was no random slope of sequence in the figural items. These findings showed that longer latency is related to higher originality even when character count is taken into consideration.
Also, originality was higher when more time was taken for responses with shorter typing time.

4. Discussion

The present study extended previous work by Acar and Runco (2017), who reported that latency, the time elapsing between adjacent responses, is positively related to category switching, an indicator of flexibility. Acar et al. (2019) also demonstrated the importance of response order for variability in latency across early versus later phases. The present research differs from these two previous studies in that they focused on flexibility whereas the present study focused on originality. Importantly, originality was operationalized herein with SBA. This method differs from the traditional method of infrequency counts in a given pool of ideas in that SBA utilizes the co-occurrence of concepts and their similarity, where high similarity implies a lack of distance between the DT question and the responses. The semantic distance between responses and the stimulus provided an index of originality, as remote associations tend to yield more original ideas. Results indicated that latency is related to originality in both verbal and figural DT tasks. Making remote associations and generating unexpected ideas and solutions takes more time than simply generating known, expected, common, tried, and experienced ideas and solutions. Quickly generated responses are more likely to come from experience, and they can be generated fast because they are easy to retrieve from memory. Original ideas may require taking the time to find a remote association and making greater cognitive leaps. From the perspective of spreading activation in semantic networks (Collins & Loftus, 1975), the distance between semantically related concepts is shorter; therefore, it takes less time to find a conceptually related response than a conceptually less related one.
As a result, latency between adjacent responses will be shorter when a response is semantically related, which often makes it less original. Our findings are consistent with the seminal works of Mednick (1962) and Christensen et al. (1957), as well as with recent work (Acar
et al., 2019; Beaty & Silvia, 2012) that found a decrease in productivity and an increase in originality per minute over time. The decrease in productivity over time may be related to less reliance on memory search and a greater need for strategy use, which is more deliberate, controlled, and thus time consuming; however, this seems to increase the originality of ideas. This is also parallel to the notion of changing sources of ideation during the DT process. Early ideas often come from experiences and observations, and as these run out, people tend to resort to their imagination and to more deliberate and complex thinking processes, such as idea combination or assumption checking (Runco et al., 1991). These processes are all effortful and may naturally take more time than simple memory recall. This observation follows Gilhooly et al.'s (2007) finding that early ideas come from memory search whereas late ideas involve executive functions. Recent work offers further evidence: Kaya and Acar (2019) found that experienced responses tend to come earlier than inexperienced responses when participants were instructed to generate as many responses as possible, but this pattern was disrupted when they were told to generate original responses. In addition to associative theories, the role of executive functions in DT has been supported by recent research. Executive functions control, monitor, and regulate thought processes, and specific executive processes such as inhibition, switching, and updating are connected to creative and divergent thinking (Benedek et al., 2014; Gilhooly et al., 2007; Lee & Therriault, 2013; Süß, Oberauer, Wittmann, Wilhelm, & Schulze, 2002). This explanation does not negate the role of associative processes in DT, where distant associations are more likely to lead to original solutions. However, executive functions may guide the nature of the associations made in DT.
It could be argued that, rather than resulting from a random search through possible connections, original solutions are the result of a controlled search for alternatives (Benedek et al., 2014; Gilhooly et al., 2007). Once again, our findings support the conclusion that ideas become more original in later phases of DT compared with earlier phases (Milgram & Rabkin, 1980; Phillips & Torrance, 1977; Runco, 1986; Ward, 1969). Importantly, our analyses controlled for response order. Therefore, regardless of the moment at which an idea is generated, ideas with higher originality take longer than those with lower originality. This finding reinforces the assertion that original ideas often do not come about through chance or serendipity (Mednick, 1962) but through controlled, effortful, and deliberate cognitive processes (Volle, 2018). This is why the "extended effort principle" (Parnes, 1961) may indeed be useful for obtaining optimal outcomes from DT. Recent models of creativity assessment have emphasized the importance of incorporating new factors, such as the time and effort necessary to come up with original responses (Barbot, 2018), and the present set of findings provides some of the first empirical evidence for this. These findings confirm that time is an important variable and predictor for DT. Ideas preceded by longer think time (TT) are likely to be more original and may signal a category switch. There are limitations to the present research, including the particular sample of examinees and the two particular DT tests used; ideation seems to vary somewhat from DT test to DT test (Runco, Abdulla, Paek, Al-Jasim, & Alsuwaidi, 2016). Still, there are implications for practice. First, people need and use more time for more original ideas, and time restrictions may hurt originality. Therefore, DT testing may benefit from lenient time conditions as well as from encouragement and instructions to extend the time taken for idea generation (Wallach & Kogan, 1965).
Second, the time taken for ideational productivity may be considered an additional indicator of creative potential and could be incorporated into creativity testing. Given recent advancements in DT assessment such as SBA, technology may support creativity assessment by incorporating think time or latency.

The analyses revealed two other interesting findings. First, there was a significant sequence effect in verbal DT but not in figural DT. The sequence effect reflects the tendency of late responses to be more original than early responses (Christensen et al., 1957; Mednick, 1962; Milgram & Rabkin, 1980; Phillips & Torrance, 1977; Runco, 1986; Ward, 1969). Therefore, the order of an idea may determine its originality to some degree. This was still the case in the present study for the verbal items but not the figural ones. Prior research has found clear differences between verbal and figural tasks (Clapham, 2004; DeMoss, Milich, & DeMers, 1993; Kuhn & Holling, 2009; Richardson, 1986; Runco & Albert, 1985; Torrance, 1990). This may be because visual stimuli are more open to subjective interpretation than verbal ones. In other words, there may be more ambiguity in the figural DT items, which can lead to less predictable, more idiosyncratic ideational pathways. As a result, original responses may be more scattered across the early and late phases than in verbal DT. Second, there was a significant interaction of latency by character count: originality was higher when more time was spent on responses with lower character counts. This may be because some original responses resulted from longer think time that yielded a simple but original response, concisely stated.
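Latency, as operationalized in this study, is simply the time difference between consecutive responses within each participant. A minimal sketch of that computation is below; the log format, participant IDs, and timestamps are hypothetical, invented for illustration.

```python
def latencies_per_person(response_log):
    """Compute latencies (seconds between consecutive responses) per participant.

    response_log: list of (participant_id, timestamp_seconds) tuples,
    assumed already sorted by time within each participant.
    """
    by_person = {}
    for pid, t in response_log:
        by_person.setdefault(pid, []).append(t)
    # Latency for response i is timestamp[i] - timestamp[i - 1];
    # the first response of each participant has no latency.
    return {
        pid: [t2 - t1 for t1, t2 in zip(times, times[1:])]
        for pid, times in by_person.items()
    }

# Hypothetical log: "p1" responds at 0, 4, and 11 s; "p2" at 2 and 9 s.
log = [("p1", 0.0), ("p1", 4.0), ("p1", 11.0), ("p2", 2.0), ("p2", 9.0)]
print(latencies_per_person(log))  # {'p1': [4.0, 7.0], 'p2': [7.0]}
```

Because each participant's first response yields no latency, a task with n responses contributes n - 1 latency values, which is why the latency analyses necessarily use fewer data points than the fluency counts.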
Future studies may replicate these findings with originality scores based on alternative scoring techniques, such as those using latent semantic analysis (Dumas & Dunbar, 2014; Forster & Dunbar, 2009), rated originality (Plucker, Qian, & Wang, 2011), and traditional scoring that relies on sample-based infrequency counts. Future studies may also investigate how explicit instructions influence the order and level of original ideas when they are scored with the novel method used in the present study or with LSA-based methods. It would also be interesting to see whether average latency (i.e., time used per idea) changes when participants are given strict, lenient, or no time limits. If DT performance benefits from lenient or untimed testing conditions, this could lead to longer latency per idea and higher originality. The present study had some limitations. First, the online survey did not allow typing speed to be measured directly, so we used character count to control for this effect; however, typing speed is an individual-level difference, and individual-level differences were controlled in the present three-level design. Second, this study used a sample from Bahrain, and future studies should replicate the findings with other populations. Participants' cultural and personal backgrounds may have influenced their responses, and further studies with different samples would enhance the generalizability of our findings. We note that participants were asked to respond to the survey in English or Arabic, whichever language they felt more comfortable with while completing the tasks. Those who felt comfortable expressing themselves in English responded in English; those who responded in Arabic were dropped from the dataset. Third, the present study included one type of verbal item (Uses) and one type of figural item. Given recent research on differences among various types of DT items (Hass & Beaty, 2018; Runco et al., 2016), the generalizability of the findings to other types of DT should be explored.
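The semantics-based and LSA-based scoring approaches discussed above share one core operation: measuring the distance between a response vector and a stimulus vector. A toy sketch of that operation follows; the three-dimensional "embedding" vectors are invented for illustration, whereas the actual SBA (Beketayev & Runco, 2016) derives vectors from large text corpora.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors (1 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def semantic_distance(u, v):
    """Distance = 1 - cosine similarity; larger distance ~ more remote response."""
    return 1.0 - cosine_similarity(u, v)

# Invented toy vectors: the prompt "brick", a common response
# ("build a wall"), and a remote response ("use as a bookend").
brick = [0.9, 0.1, 0.2]
wall = [0.8, 0.2, 0.1]
bookend = [0.1, 0.7, 0.6]
# A remote response should sit farther from the prompt than a common one.
assert semantic_distance(brick, bookend) > semantic_distance(brick, wall)
```

Under this operationalization, the distance score serves as a continuous originality index, in contrast to infrequency counts, which depend on the particular pool of sampled responses.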
References

Acar, S., & Runco, M. A. (2014). Assessing associative distance among ideas elicited by tests of divergent thinking. Creativity Research Journal, 26, 229–238.
Acar, S., & Runco, M. A. (2017). Latency predicts category switch in divergent thinking. Psychology of Aesthetics, Creativity, and the Arts, 11, 43–51.
Acar, S., Burnett, C., & Cabra, J. F. (2017). Ingredients of creativity: Originality and more. Creativity Research Journal, 29, 133–144.
Acar, S., Runco, M. A., & Ogurlu, U. (2018). The moderating influence of idea sequence: A re-analysis of the relationship between category switch and latency. Personality and Individual Differences. https://doi.org/10.1016/j.paid.2018.06.013
Balota, D. A., & Lorch, R. F. (1986). Depth of automatic spreading activation: Mediated priming effects in pronunciation but not in lexical decision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12, 336–345.
Barbot, B. (2018). The dynamics of creative ideation: Introducing a new assessment paradigm. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2018.02529
Beaty, R. E., & Silvia, P. J. (2012). Why do ideas get more creative across time? An executive interpretation of the serial order effect in divergent thinking tasks. Psychology of Aesthetics, Creativity, and the Arts, 6(4), 309–319.
Beketayev, K., & Runco, M. A. (2016). Scoring divergent thinking tests by computer with a semantics-based algorithm. Europe's Journal of Psychology, 12, 210–220.
Benedek, M., Mühlmann, C., Jauk, E., & Neubauer, A. C. (2013). Assessment of divergent thinking by means of the subjective top-scoring method: Effects of the number of top-ideas and time-on-task on reliability and validity. Psychology of Aesthetics, Creativity, and the Arts, 7, 341–349.
Benedek, M., Jauk, E., Sommer, M., Arendasy, M., & Neubauer, A. C. (2014). Intelligence, creativity, and cognitive control: The common and differential involvement of executive functions in intelligence and creativity. Intelligence, 46, 73–83.
Bloom, B. S. (Ed.). (1985). Developing talent in young people. New York, NY: Ballantine Books.
Cai, D. J., Mednick, S. A., Harrison, E. M., Kanady, J. C., & Mednick, S. C. (2009). REM, not incubation, improves creativity by priming associative networks. Proceedings of the National Academy of Sciences, 106, 10130–10134.
Charles, R. E., & Runco, M. A. (2001). Developmental trends in the evaluative and divergent thinking of children. Creativity Research Journal, 13, 417–437.
Christensen, P. R., Guilford, J. P., & Wilson, R. C. (1957). Relations of creative responses to working time and instructions. Journal of Experimental Psychology, 53, 82–88.
Clapham, M. M. (2004). The convergent validity of the Torrance tests of creative thinking and creativity interest inventories. Educational and Psychological Measurement, 64, 828–841. https://doi.org/10.1177/0013164404263883
Claxton, A. F., Pannells, T. C., & Rhoads, P. A. (2005). Developmental trends in the creativity of school-age children. Creativity Research Journal, 17, 327–335.
Collins, A. M., & Loftus, E. F. (1975). A spreading activation theory of semantic processing. Psychological Review, 82, 407–428. https://doi.org/10.1037/0033-295X.82.6.407
Coskun, H. (2005). Cognitive stimulation with convergent and divergent thinking exercises in brainwriting: Incubation, sequence priming, and group context. Small Group Research, 36, 466–498.
Daugherty, M. (1993). Creativity and private speech: Developmental trends. Creativity Research Journal, 6, 287–296.
DeMoss, K., Milich, R., & DeMers, S. (1993). Gender, creativity, depression, and attributional style in adolescents with high academic ability. Journal of Abnormal Child Psychology, 21, 455–467. https://doi.org/10.1007/BF01261604
Den Heyer, K., & Briand, K. (1986). Priming single digit numbers: Automatic spreading activation dissipates as a function of semantic distance. The American Journal of Psychology, 315–340.
Dumais, S. T., Furnas, G. W., Landauer, T. K., Deerwester, S., & Harshman, R. (1988). Using latent semantic analysis to improve access to textual information. Proceedings of the SIGCHI conference on Human factors in computing systems, 281–285.
Dumas, D., & Dunbar, K. N. (2014). Understanding fluency and originality: A latent variable perspective. Thinking Skills and Creativity, 14, 56–67.
Forster, E. A., & Dunbar, K. N. (2009). Creativity evaluation through latent semantic analysis. Proceedings of the Annual Conference of the Cognitive Science Society, 2009, 602–607.
Forthmann, B., Holling, H., Çelik, P., Storme, M., & Lubart, T. (2017). Typing speed as a confounding variable and the measurement of quality in divergent thinking. Creativity Research Journal, 29, 257–269.
Forthmann, B., Oyebade, O., Ojo, A., Günther, F., & Holling, H. (2018). Application of latent semantic analysis to divergent thinking is biased by elaboration. The Journal of Creative Behavior. https://doi.org/10.1002/jocb.240
Gilhooly, K. J., Fioratou, E., Anthony, S. H., & Wynn, V. (2007). Divergent thinking: Strategies and executive involvement in generating novel uses for familiar objects. British Journal of Psychology, 98, 611–625.
Gilhooly, K. J., Georgiou, G. J., Garrison, J., Reston, J. D., & Sirota, M. (2012). Don't wait to incubate: Immediate versus delayed incubation in divergent thinking. Memory & Cognition, 40, 966–975.
Guilford, J. P. (1956). Structure of intellect. Psychological Review, 53, 267–293.
Hass, R. W. (2017a). Semantic search during divergent thinking. Cognition, 166, 344–357.
Hass, R. W. (2017b). Tracking the dynamics of divergent thinking via semantic distance: Analytic methods and theoretical implications. Memory & Cognition, 45, 233–244.
Hass, R. W., & Beaty, R. E. (2018). Use or consequences: Probing the cognitive difference between two measures of divergent thinking. Frontiers in Psychology, 9, 2327.
Hayes, J. R. (1989). The complete problem solver (2nd ed.). Hillsdale, NJ: Erlbaum.
Heinonen, J., Numminen, J., Hlushchuk, Y., Antell, H., Taatila, V., & Suomala, J. (2016). Default mode and executive networks areas: Association with the serial order in divergent thinking. PLoS One, 11(9), e0162234.
Kaufman, S. B., & Kaufman, J. C. (2007). Ten years to expertise, many more to greatness: An investigation of modern writers. Journal of Creative Behavior, 41, 114–124.
Kaya, F., & Acar, S. (2019). The impact of originality instructions on cognitive strategy use in divergent thinking. Manuscript under review.
Kenett, Y. N. (2018). Going the extra creative mile: The role of semantic distance in creativity – Theory, research, and measurement. In R. E. Jung & O. Vartanian (Eds.), The Cambridge handbook of the neuroscience of creativity (pp. 233–248). Cambridge, UK: Cambridge University Press.
Kim, K. H. (2011). The creativity crisis: The decrease in creative thinking scores on the Torrance Tests of Creative Thinking. Creativity Research Journal, 23, 285–295.
Kuhn, J. T., & Holling, H. (2009). Measurement invariance of divergent thinking across gender, age, and school forms. European Journal of Psychological Assessment, 25, 1–7. https://doi.org/10.1027/1015-5759.25.1.1
Lau, S., & Cheung, P. C. (2010b). Developmental trends of creativity: What twists of turn do boys and girls take at different grades? Creativity Research Journal, 22, 329–336.
Lee, C. S., & Therriault, D. J. (2013). The cognitive underpinnings of creative thought: A latent variable analysis exploring the roles of intelligence and working memory in three creative thinking processes. Intelligence, 41, 306–320.
Lorch, R. F., Jr. (1982). Priming and search processes in semantic memory: A test of three models of spreading activation. Journal of Verbal Learning and Verbal Behavior, 21, 468–492.
Madjar, N., & Shalley, C. E. (2008). Multiple tasks' and multiple goals' effect on creativity: Forced incubation or just a distraction? Journal of Management, 34, 786–805.
Mednick, S. A. (1962). The associative basis of the creative process. Psychological Review, 69, 220–232.
Milgram, R. M., & Rabkin, L. (1980). Developmental test of Mednick's associative hierarchies of original thinking. Developmental Psychology, 16, 157–158.
Nash, W. R. (1974). The effects of a school for the gifted in averting the fourth grade slump in creativity. Gifted Child Quarterly, 18, 168–170.
Parnes, S. J. (1961). Effects of extended effort in creative problem solving. Journal of Educational Psychology, 52, 117–122.
Phillips, V. K., & Torrance, E. P. (1977). Levels of originality at earlier and later stages of creativity test tasks. Journal of Creative Behavior, 11, 147. https://doi.org/10.1002/j.2162-6057.1977.tb00602.x
Plucker, J. A., Runco, M. A., & Lim, W. (2006). Predicting ideational behavior from divergent thinking and discretionary time on task. Creativity Research Journal, 18, 55–63.
Plucker, J. A., Qian, M., & Wang, S. (2011). Is originality in the eye of the beholder? Comparison of scoring techniques in the assessment of divergent thinking. The Journal of Creative Behavior, 45, 1–22.
Preckel, F., Wermer, C., & Spinath, F. M. (2011). The interrelationship between speeded and unspeeded divergent thinking and reasoning, and the role of mental speed. Intelligence, 39, 378–388.
Richardson, A. G. (1986). Two factors of creativity. Perceptual and Motor Skills, 63, 379–384. https://doi.org/10.2466/pms.1986.63.2.379
Rothenberg, A., & Hausman, C. R. (1976). The creativity question. Durham, NC: Duke University Press.
Runco, M. A. (1986). Flexibility and originality in children's divergent thinking. Journal of Psychology: Interdisciplinary and Applied, 120, 345–352. https://doi.org/10.1080/00223980.1986.9712632
Runco, M. A. (1988). Creativity research: Originality, utility, and integration. Creativity Research Journal, 1, 1–7. https://doi.org/10.1080/10400418809534283
Runco, M. A., & Acar, S. (2012). Divergent thinking as an indicator of creative potential. Creativity Research Journal, 24, 66–75.
Runco, M. A., & Acar, S. (2018). Divergent thinking. In R. J. Sternberg & J. C. Kaufman (Eds.), Cambridge handbook of creativity.
Runco, M. A., & Albert, R. S. (1985). The reliability and validity of ideational originality in the divergent thinking of academically gifted and non-gifted children. Educational and Psychological Measurement, 45, 483–501.
Runco, M. A., & Cayirdag, N. (2011). Time. In M. A. Runco & S. R. Pritzker (Eds.), Encyclopedia of creativity: Vol. 2 (pp. 485–488). San Diego, CA: Academic.
Runco, M. A., Okuda, S. M., & Thurston, B. J. (1991). Environmental cues and divergent thinking. In M. A. Runco (Ed.), Divergent thinking (pp. 79–85). Norwood, NJ: Ablex Publishing Corporation.
Runco, M. A., Illies, J. J., & Eisenman, R. (2005). Creativity, originality, and appropriateness: What do explicit instructions tell us about their relationships? Journal of Creative Behavior, 39, 137–148. https://doi.org/10.1002/j.2162-6057.2005.tb01255.x
Runco, M. A., Abdulla, A. M., Paek, S. H., Al-Jasim, F. A., & Alsuwaidi, H. N. (2016). Which test of divergent thinking is best? Creativity. Theories–Research-Applications, 3, 4–18.
Simonton, D. K. (1975). Sociocultural context of individual creativity: A transhistorical time-series analysis. Journal of Personality and Social Psychology, 32, 1119–1133.
Simonton, D. K. (1977). Creative productivity, age, and stress: A biographical time-series analysis of 10 classical composers. Journal of Personality and Social Psychology, 35, 791–804.
Snijders, T., & Bosker, R. (2012). Multilevel analysis: An introduction to basic and applied multilevel analysis. London, United Kingdom: Sage.
Süß, H. M., Oberauer, K., Wittmann, W. W., Wilhelm, O., & Schulze, R. (2002). Working-memory capacity explains reasoning ability—And a little bit more. Intelligence, 30, 261–288.
Torrance, E. P. (1968). A longitudinal examination of the fourth grade slump in creativity. Gifted Child Quarterly, 12, 195–199.
Torrance, E. P. (1969). Curiosity of gifted children and performance on timed and untimed tests of creativity. Gifted Child Quarterly, 13, 155–158.
Torrance, E. P. (1990). Experiences in developing creativity measures: Insights, discoveries, decisions. Unpublished manuscript, Torrance Center for Creative Studies and Talent Development.
Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.
Vernon, P. E. (1971). Effects of administration and scoring on divergent thinking tests. British Journal of Educational Psychology, 41, 245–257.
Volle, E. (2018). Associative and controlled cognition in divergent thinking: Theoretical, experimental, neuroimaging evidence, and new directions. In R. E. Jung & O. Vartanian (Eds.), The Cambridge handbook of the neuroscience of creativity. Cambridge, UK: Cambridge University Press.
Wallach, M. A., & Kogan, N. (1965). Modes of thinking in young children. New York, NY: Holt, Rinehart, & Winston.
Wang, M., Hao, N., Ku, Y., Grabner, R. H., & Fink, A. (2017). Neural correlates of serial order effect in verbal divergent thinking. Neuropsychologia, 99, 92–100.
Ward, W. C. (1969). Rate and uniqueness in children's creative responding. Child Development, 40, 869–878.
Weinberger, A. B., Iyer, H., & Green, A. E. (2016). Conscious augmentation of creative state enhances "real" creativity in open-ended analogical reasoning. PLoS One, 11(3), e0150773.
Williams, F. E. (1976). Rediscovering the fourth-grade slump in a study of children's self-concept. The Journal of Creative Behavior, 10, 15–28.