Journal of Second Language Writing 30 (2015) 45–57


The effects of cognitive task complexity on writing complexity

Mark Wain Frear *, John Bitchener

AUT University, Auckland, New Zealand

Article history: Received 19 January 2014; Received in revised form 20 August 2015; Accepted 21 August 2015

Abstract

This study reports the findings of a within-subject experimental study that examined the relationship between increases in cognitive task complexity and the writing of intermediate L2 writers of English. Potential effects on lexical and syntactic complexity were investigated. This article expands on past writing research using similar cognitive task complexity manipulations by adding a patently low complexity task to better track the effects of complexity, and a subordination measure that investigates each dependent clause type separately. Thirty-four non-native speakers of English studying at language schools in New Zealand performed three letter-writing tasks of varying levels of task complexity. The findings revealed a significant effect for task complexity on decreases in syntactic complexity using a ratio of dependent clauses to T-units where dependent clauses were measured separately. Conversely, significant findings were found for increases in lexical complexity, analysed as a mean segmental type-token ratio. The results of this study are discussed in relation to the Cognition Hypothesis (Robinson, 2001a, 2001b, 2005, 2007, 2011). © 2015 Elsevier Inc. All rights reserved.

Keywords: Cognitive task complexity; Second language writing; Complex output

Introduction

Centrally, the present study contributes to ongoing investigations into the suitability of writing as a beneficial medium for promoting the development of complex language in L2 learners (Ishikawa, 2007; Kormos, 2011; Kuiken & Vedder, 2007, 2008, 2012; Ong & Zhang, 2010; Sercu, De Wachter, Peters, Kuiken, & Vedder, 2006). The emphasis is on whether the manipulation of certain task-based variables, believed to affect a writer's cognitive burden and thus attention, will have a subsequent effect on their ability to produce attention-demanding elements in a written text.

Ostensibly, writing may not appear to be the best format for developing complex linguistic structures that require attention, especially when they are not proceduralized or automatized. For example, the act of writing has been characterized as a problem-solving activity (Belcher & Hirvela, 2001) requiring a writer's constant management of limited attentional resources (Flower & Hayes, 1981). Thus, it appears counterintuitive that making writing more difficult by increasing the burden on attentional resources would have any beneficial effect. However, writing has also been ascribed intrinsic characteristics such as recursion, planning time, and selective control (Ellis & Yuan, 2004; Grabe & Kaplan, 1996; Grabowski, 2007; Kormos & Trebits, 2012) that are potentially conducive to the production of complex linguistic structures that require attention.

* Corresponding author. E-mail address: [email protected] (M.W. Frear).
http://dx.doi.org/10.1016/j.jslw.2015.08.009
1060-3743/© 2015 Elsevier Inc. All rights reserved.


To date, task-based research focusing on the relationship between task complexity (as cognitive burden manipulation) and writing is growing, but it remains a smaller domain than the oral modality, which has historically been the main focus of research (Carless, 2012). In the developing field of complexity and L2 writing, cognitive burden has been increased or decreased on writers by a number of means, while a wide range of potential effects has been investigated. Variables have included the effects of planning time, writing assistance, and editing time on fluency, lexical complexity, metacognitive process, and text quality (Ong, 2014; Ong & Zhang, 2010, 2013). Kormos (2011) tested the addition and removal of narrative context on complexity, accuracy, and fluency in L1 (native language) and FL (foreign language) writing. Ishikawa (2007) examined the manipulation of here-and-now variables (and the incidental addition of planning time) on measures of accuracy, complexity, and fluency, and Kuiken and Vedder (2007, 2008, 2012) and Sercu et al. (2006) investigated the manipulation of reasoning demands and number of elements on complexity, accuracy, and fluency.

Considering the variations in both independent and dependent variables across this relatively small pool of research, there is a need for partial replication of some of these studies. Porte and Richards (2012) note the importance of partially replicating studies in L2 writing (in which central elements of an original study remain the same, but non-major aspects are varied between the past and present study to facilitate comparisons). They also claim that replication or partial replication studies are important ways to test the robustness of past research and to address the disjointed and conflicting findings resulting from the growing diversity of scope and topic in L2 writing research. We agree that sustained investigations may contribute to fully understanding the relationship between one set of variables and any extra, unaccounted-for variables that might be affecting results.

The present study is a partial replication of Kuiken and Vedder (2007, 2008, 2012) and Sercu et al. (2006). Similar measures are used to operationalize the independent variable (cognitive task complexity) and the dependent variables (syntactic and lexical complexity). Additionally, similar frameworks have been utilized to make predictions about the relationships between cognitive task complexity and linguistic complexity: the Cognition Hypothesis (Robinson, 2001a, 2001b, 2005, 2007, 2011) and the Trade-off Hypothesis (Skehan, 1998, 2003, 2014; Skehan & Foster, 1999, 2001). However, in contrast to these previous studies, we have varied the operationalization of the dependent variables by adopting a more fine-grained approach to the measurement of syntactic complexity, with dependent clauses being analysed separately (Wolfe-Quintero, Inagaki, & Kim, 1998) as well as in one inclusive group. We have also addressed challenges related to the operationalization of the independent variables. Regarding cognitive task complexity, Norris (2010) and Révész (2014) suggest that more evidence is needed to investigate whether manipulating task complexity actually affects the types of cognitive demands predicted by the Cognition Hypothesis (Robinson, 2001a, 2001b, 2005, 2007, 2011). We considered an additional concern.
Specifically, if modifying task complexity has an effect on cognitive burden, there is currently no means by which these modifications can be accurately measured, and it is not clear how this might impact the findings. As a result, we introduced a patently low complexity task to highlight the effects of task complexity, which can otherwise be obscured by the inability to accurately gauge variations in task complexity between more complex tasks.

This paper starts with a brief review of the theoretical framework for cognitive task complexity. Subsequently, we discuss the different predictions made by the Trade-off Hypothesis (Skehan, 1998, 2003, 2014; Skehan & Foster, 1999, 2001) and the Cognition Hypothesis (Robinson, 2001a, 2001b, 2005, 2007, 2011) for the effects of increases in cognitive task complexity on output. In the section that follows, writing complexity is discussed, focusing on the lexical and syntactic measures utilized in the present study; we also include rationales for using T-units and subordination. Finally, past studies on cognitive task complexity and writing that utilized independent and dependent variables similar to those in this paper are reviewed.

We then report on the results of a within-subject experimental study in which second language learners (classified as intermediate level, with IELTS scores ranging from 4.5 to 5.5) were provided with writing tasks in which cognitive task complexity was manipulated in the task design. Tasks ranged from a patently low complexity task to two higher complexity tasks. The results were analysed for evidence of changes in the linguistic features produced in the texts. Specifically, syntactic complexity was evaluated as variations in T-unit length, operationalized by measuring the ratio of dependent clauses to T-units across all dependent clauses, and the ratio of dependent clauses to T-units with each dependent clause type measured separately. Lexical complexity was measured using a mean segmental type-token ratio, a measure of lexical variety based on the ratio of different lexical items to the total number of items used while accounting for text length.

Cognitive task complexity

In the study of second language writing, the term complexity is applied in two ways: cognitive task complexity refers to the modifications made during task design that make a task difficult to complete, whereas writing complexity refers to language in written output that can be considered varied and elaborate (Ellis, 2003). This study utilized both aspects of complexity by investigating the effects of cognitive task complexity on the texts produced by English L2 writers.

Broadly, cognitive task complexity can comprise the interaction of two elements manipulated in the design of pedagogical tasks (Ellis, 2003): types of information and amounts of information (Brown, Anderson, Shilcock, & Yule, 1984). These features are theorized to affect the cognitive burden a student experiences during task performance by placing varying demands on learners' cognitive resources. The effects of modifying these elements are frequently analysed using two frameworks: the Trade-off Hypothesis (Skehan, 1998, 2003, 2014; Skehan & Foster, 1999, 2001) and the Cognition Hypothesis (Robinson, 2001a, 2001b, 2005, 2007, 2011).


The Trade-off Hypothesis (Skehan, 1998, 2003, 2014; Skehan & Foster, 1999, 2001) assumes that cognitively demanding tasks require trade-offs due to limited attentional resources. Complex and accurate performance are viewed as separate dimensions that compete for limited attention, with one dimension likely receiving less attention than the other. Though trade-offs in performance will not always be apparent due to the influence of other factors (Skehan & Foster, 2012), the need for trade-offs in performance can be viewed as the default position. When resource limits are reached, learners will focus on meaning, which Skehan (1996) claims they are naturally and unavoidably predisposed towards. The effect of trade-offs on writing complexity may be the production of less complex language as a simpler way of expressing meaning when attention limits are overtaxed.

Alternatively, Robinson (2001a, 2001b, 2005, 2007, 2011) takes the position that attentional resources are more flexible than the Trade-off Hypothesis suggests, and that increases in cognitive task complexity (under the correct conditions) could lead to dual increases in accurate and complex language production (Robinson, 2001b). In this study, we do not make any claims about the potential for cognitive task complexity to affect both complexity and accuracy. We focus only on the potential effects of increases in cognitive task complexity on complex output as proposed in Robinson's Cognition Hypothesis (2001a, 2001b, 2005, 2007, 2011). Robinson claims there are developmental parallels between adults learning a second language and children's development of their first language (Slobin, 1993), and that the developmental process might be emulated in adults by increasing cognitive task complexity. For example, pedagogic tasks could be sequenced using increases in cognitive task complexity, and this sequencing might be similar to the way children meet increasingly complex demands when learning a first language (Robinson, 2001a, 2005). Thus, increases in conceptual development, triggered by increases in cognitive task complexity, should lead L2 learners to produce the language that expresses those concepts.

To operationalize the Cognition Hypothesis, Robinson developed the Triadic Componential Framework (Robinson, 2001a, 2001b, 2005, 2007, 2011), which details the means by which complexity might be manipulated between tasks and across syllabi. Within the Triadic Componential Framework, a distinction is made between resource-directing and resource-dispersing elements. Increasing the complexity of resource-directing elements is expected to increase cognitive/conceptual demands on a learner. Robinson also claims that increases in resource-directing elements may guide learner attention in a way that facilitates noticing, the conscious attention to language forms that is considered a prerequisite of learning (Schmidt, 2001). Resource-dispersing elements, however, have effects closer to those predicted by the Trade-off Hypothesis (Skehan, 1998, 2003, 2014; Skehan & Foster, 1999, 2001) when their complexity is increased: they do not direct learners' attention to the language required to meet the demands of a complex task. Instead, attention is dispersed, making the completion of a task more difficult. For example, less planning time is considered a resource-dispersing element.
The cognitive task complexity elements utilized in the current study (reasoning demands and number of elements) are described as resource-directing (Robinson, 2001a, 2001b, 2005, 2007, 2011). Given the appropriate conditions, Robinson believes that increases in these dimensions may facilitate the production of complex language output as learners strive to find language to express the increases in conceptual complexity.

Finally, questions have been raised about the operationalization of cognitive task complexity. Révész (2014) and Norris (2010) stated the need for more evidence about the causal relationship between manipulations of cognitive task complexity and the types of cognitive burden they are theorized to trigger in learners. A related issue that should be investigated is the current inability to accurately manipulate degrees of cognitive task complexity between tasks and how this might affect cognitive burden and performance. If the levels of applied cognitive task complexity are too high, differing degrees of cognitive burden might not trigger different outputs but may instead trigger attentional overload. This might lead to trade-offs in production similar to the predictions made by the Trade-off Hypothesis (Skehan, 1998, 2003, 2014; Skehan & Foster, 1999, 2001). However, below a certain level of task complexity, results may favour the Cognition Hypothesis (Robinson, 2001a, 2001b, 2005, 2007, 2011) because attention is not overloaded. The strategy we have used to address this methodological issue is to add a task that is patently lower in cognitive task complexity. As a result, a clearer illustration of whether attentional overload is affecting the results between the more complex tasks might be revealed.

Writing complexity

In this paper, writing complexity refers to complex written language whose production is deemed difficult, imposing a certain burden on writers, especially when they are pushed to the edge of their current ability. The importance of investigating complex language is partially based on the following presumptions: complex language production may represent pushed output in which learners create more elaboration and structure in their developing language ability, use language more efficiently, align the language being learnt with target language use, express complex ideas efficiently and precisely, and potentially signal acquisition (Cheng, 1985; Cook, 1994; McLaughlin, 1990; Skehan, 1996; Swain, 1985).

The two types of complex language investigated in the present study are subordination (using a ratio of dependent clauses to T-units as a means of determining the mean length of T-units) and lexical variety (mean segmental type-token ratio). There were a number of reasons for choosing these specific measures. First, Ellis and Barkhuizen (2005) stressed that language learners may use different means to realize complexity in their writing.


Thus, both lexical and syntactic complexity measures were used in this study to identify whether the participants responded to variations in cognitive task complexity by producing different types of complexity. That this paper is a partial replication study also influenced the choice of measures. We utilized measures similar to those used by Kuiken and Vedder (2007, 2008), but expanded upon them with a more fine-grained approach to investigating the ratio of dependent clauses.

Beyond the requirements of using the same target measures for a partial replication study, some justification should also be provided for the measures. T-units provide a readily accessible, quantifiable unit for measuring segments of written language (Ellis & Barkhuizen, 2005). Additionally, T-units are considered one type of indicator of a learner's development in writing ability because the average length of T-units increases with a writer's maturity (Cooper, 1976; Hudson, 2009; Hunt, 1965). However, longer T-units can also be associated with flawed writing (Crowhurst, 1983; Hake & Williams, 1979). The present study does not take the default position that longer T-units are inherently better than shorter T-units. Rather, we recognize that short T-units may, in the hands of a good writer, be used to express meaning in a concise way that in some cases is more appropriate than long T-units. Nonetheless, T-unit length remains a good indicator of writing complexity and was therefore used in this study.

Subordination is considered a good indicator of complex output (Norris & Ortega, 2009) for intermediate learners, making it suitable for this study's participants. However, measurements of T-units with subordination usually cluster subordinate clauses into one group, potentially obscuring the detailed effects of task complexity. To elaborate, subordination is viewed as inherently difficult (Mendhalson, 1983), with associated cognitive burdens during production (Cheung & Kemper, 1992; Lord, 2002). Therefore, when subordination is produced as a result of attention-taxing task complexity, it may be difficult for intermediate learners, who may be additionally affected by the degree to which each dependent clause type is automatized. As a result, analysing dependent clauses separately may provide a clearer picture of the effects of cognitive task complexity on subordination: because each clause type may occupy a different position on the developmental continuum, it may be more or less attention-demanding to access. By analysing dependent clauses both as one group and separately, any potential differences between the two approaches should be highlighted.

The second target measure was a mean segmental type-token ratio, a measure of lexical variety. Wolfe-Quintero et al. (1998) state that measures of lexical variation appear to be related to second language development, particularly those measures that also account for text length. The type-token ratio measure used in this study accounts for variations in text length.

Writing research utilizing reasoning demands and number of elements

Of the relatively small group of studies focusing on cognitive task complexity and writing, the work of Kuiken and Vedder (2007, 2008, 2012) and Sercu et al. (2006) has utilized independent and dependent variables similar to those in this study. Sercu et al. (2006) used letter-writing tasks in which the participants wrote to a friend about their choice of holiday destination.
Cognitive task complexity was manipulated by increasing the number of elements and reasoning demands between tasks. Syntactic complexity was investigated by analysing the number of clauses per T-unit and the number of dependent clauses per clause. Lexical variation was analysed using a type-token ratio and a type-token ratio correcting for text length (the number of word types per square root of two times the total number of word tokens). Kuiken and Vedder (2007, 2008, 2012) utilized similar letter-writing tasks and variations in number of elements and reasoning demands. Kuiken and Vedder (2007) analysed syntactic complexity, including the number of clauses per T-unit and the number of dependent clauses per T-unit. Lexical variation (the occurrence of frequently and infrequently used words) was investigated with a type-token ratio that corrected for text length. Kuiken and Vedder (2008) analysed syntactic complexity as the number of clauses per T-unit and a dependent clause ratio that examined the degree of syntactic embedding. Lexical variation was analysed using two type-token ratios, one that corrected for text length (Wolfe-Quintero et al., 1998) and one that did not. Finally, Kuiken and Vedder (2012) used a combination of results from past work that included the results from Kuiken and Vedder (2007, 2008).

In terms of lexical variation, these studies showed that increases in cognitive task complexity resulted in increases in lexical variation (Kuiken & Vedder, 2007) and type-token ratio (Sercu et al., 2006). However, the type-token ratio used by Sercu et al. (2006) did not account for text length, and the findings for increases in lexical variation (Kuiken & Vedder, 2007) were inconsistent between groups (Kuiken & Vedder, 2012). Kuiken and Vedder (2008) found no significant increases in lexical variation as a result of increases in cognitive task complexity. When analysing syntactic complexity, Kuiken and Vedder (2007, 2008, 2012) and Sercu et al. (2006) examined the number of clauses per T-unit and the number of dependent clauses per clause (with dependent clauses analysed as one group). None of the findings confirmed any effect for cognitive task complexity on syntactic complexity.

No strong support for the predictions of the Cognition Hypothesis (Robinson, 2001a, 2001b, 2005, 2007, 2011) about complex language production can be inferred from these results. The effects of cognitive task complexity on lexical complexity were only noticeable in type-token ratio measures that did not account for text length, and the findings for lexical variation were inconsistent between groups. The findings also provided no clear support for the Cognition Hypothesis' predictions for syntactic complexity. Though statistically non-significant, we felt the past findings for syntactic complexity bore further consideration, as the absence of any significant effects may have been attributable to issues not addressed in the research.


First, the inability to accurately and precisely vary the amounts of cognitive duress applied to learners when independent variables are being manipulated makes it difficult to know what effects are being generated between complex tasks. In such cases, the addition of a patently low complexity task may provide clearer evidence of whether any effect exists between lower and higher degrees of cognitive task complexity. Second, the blunt nature of the ratio of dependent clauses to T-units measured across all dependent clauses may have influenced past findings. Measures of subordination, as utilized in the studies by Kuiken and Vedder (2007, 2008, 2012) and Sercu et al. (2006), are more complex than they appear (Bulté & Housen, 2012). We believe that for intermediate learners, for whom each dependent clause type may be automatized to a different extent, the effects of cognitive task complexity may be more clearly observed on each individual dependent clause type.

The present study

This paper reports the findings of a within-subject experimental study designed to assess the effects of cognitive task complexity on lexical and syntactic complexity, with dependent clauses measured both as one group and separately. Cognitive task complexity was manipulated by varying the amount of reasoning demands and the number of elements in the task instructions of three letter-writing tasks. One task was designed to be patently low in complexity, with the expectation that it might illustrate any effects between the independent and dependent variables otherwise obscured by the inability to accurately gauge variations in cognitive task complexity between more complex tasks. The cognitive task complexity variables used in the present study are considered resource-directing variables (Robinson, 2001a, 2001b, 2005, 2007, 2011) with the potential for positive effects (Robinson, 2001b) on complex production. In this study, the positive effects are viewed as increases in complex language production consistent with the participants' developmental level.

In light of the issues mentioned above, the following questions were formulated:

1. What are the effects of cognitive task complexity on syntactic complexity when measured (a) as the ratio of dependent clauses to T-units across all dependent clauses, and (b) as the ratio of dependent clauses to T-units when dependent clauses are measured separately?
2. What are the effects of increases in cognitive task complexity on lexical variety?
3. Does using a patently low complexity task contribute to the tracking of the effects of cognitive task complexity on complex writing?

Method

Participants and context

Thirty-four non-native speakers of English studying in New Zealand volunteered to participate in this study. The nationalities of the group were Japanese (2), Chinese (12), Korean (11), Thai (3), Russian (2), Vietnamese (2), and French (2). There were 22 female and 12 male participants, and though their ages ranged from 18 to 41, most participants (26 out of 34) were in their twenties. They were all students for whom English was a second language, and they were all identified as intermediate level language learners by the school where they studied. Their IELTS levels ranged between 4.5 and 5.5. The learning context was an Auckland language school, which ran classes focused on raising students' overall proficiency in reading, writing, speaking, and listening.

Target measures

The target measures used in this study were a mean segmental type-token ratio, a measure of lexical variety, and the ratio of dependent clauses to T-units, a measure of syntactic complexity. The ratio of dependent clauses to T-units was analysed across all dependent clauses and also with each dependent clause type analysed separately (Wolfe-Quintero et al., 1998).

Instruments

Three instruments, Task 1 (low complexity), Task 2 (medium complexity), and Task 3 (high complexity), were used for data collection. These tasks were devised to elicit language samples reflecting the complex language competency of the participants when placed under increasing cognitive duress. Each instrument was designed to initiate a different level of attention-demanding activity through the manipulation of cognitive task complexity in the task instructions. Task 1 (low complexity) was intended to require the least cognitive duress for task completion, whereas Task 2 (medium complexity) and Task 3 (high complexity) were both designed to necessitate higher levels of cognitive duress to complete.

Task 1 (see Appendix A) was the low complexity task. It was patently low in complexity, meaning it was designed to ensure that the levels of cognitive burden being applied were low enough (in comparison to Tasks 2 and 3) that the inability to accurately measure cognitive task complexity did not skew the results when Task 1 was compared against Task 2 and Task 3.


Task 1 consisted of a written handout that provided a situation and a set of instructions. The situation involved an English-speaking friend who was coming to New Zealand. The instructions directed the participant to write to this friend and tell him about New Zealand. The task instructions were devised to be easy to understand while avoiding phrases that instructed the writer to form any opinions or state any reasons why New Zealand might be worth moving to. Furthermore, unlike Tasks 2 and 3, Task 1 did not provide additional information for the participants to use in writing the letter; participants were expected to rely on their own resources.

Task 2 (see Appendix B) was the medium complexity task. It was made more complex than Task 1 by manipulating the reasoning demands and number of elements in the task instructions. This was expected to stimulate a higher amount of attention-demanding activity (Brown et al., 1984; Ellis, 2003; Robinson, 2005, 2007) in order to complete the task. Reasoning demands were incorporated into Task 2 by designing instructions that directed the participants to write to a friend who was coming to New Zealand and inform that friend which of two restaurants they would visit upon arrival. Information pertaining to the restaurants was provided with the task instructions. Furthermore, the choice of restaurant needed to be justified based on the information about each restaurant as well as the visiting friend's information. The second cognitive task complexity dimension was number of elements. This dimension was added by supplying a greater choice of elements (needed for the completion of the task) in the task instructions than was available in Task 1 (low complexity).

Task 3 was the high complexity task. It was similar to Task 2 (medium complexity); however, it involved a greater degree of cognitive task complexity. The number of elements that needed to be considered was increased, thus augmenting the reasoning demands. There were three restaurants to choose from rather than two. Additionally, the participants were expected to include the information of two more friends who would be visiting the restaurant as well as the preferences of the person receiving the letter.

Procedure

The data were collected in two stages. In stage 1, participants were approached at their schools and invited to participate. Those who were interested attended a meeting, which also served as stage 1 of the data collecting process. Initially, the participants completed a consent form, and then the date for stage 2 of the data collecting process was negotiated. In all cases, stage 2 was performed between three and five days after stage 1. After the date for stage 2 was set, the participants were given Task 1 (low complexity). The participants had 2 min to view the task and ask questions about any aspects of the instructions, and 30 min were provided to complete the task. No dictionaries or smart phones were allowed.

Stage 2 involved the completion of two consecutive tasks of differing levels of cognitive task complexity (Task 2, medium complexity; and Task 3, high complexity). First, the schedule for stage 2 was explained, and then the participants were given the first task as well as a 2-min comprehension check. Clarifications of the instructions were provided on request, and 30 min were provided to complete the task. Upon completion, there was a 5-min break (in class) during which the participants could relax but were discouraged from discussing the tasks.
Second, after the break, the second task was distributed. As with the first task, there was a comprehension check followed by a 30-min time limit for completion. Task 2 and Task 3 were counterbalanced to account for participant fatigue, with random selection of task order made by picking a number (2 or 3) from a container. Task 1 (low complexity) was performed at a time prior to Tasks 2 and 3 and was not part of the counterbalancing process. Although this exclusion may have had some effect on the results, Task 1 was excluded from counterbalancing for a number of reasons. First, time constraints imposed by student availability required that Task 1 be performed at a separate time. Second, the participants in this study formed one of three groups taking part in a larger study; as such, it was believed that Task 1 would serve as an additional indicator of participant skill level (in conjunction with the IELTS scores) across the group used in the present study and the two other groups that were not included. Task 1 was designed to induce participants to use their own resources. Thus, they were expected to produce language at a level they were comfortable with, as opposed to being pushed to synthesize the extra information supplied in the more complex tasks. This was expected to provide an indication of skill level for all participants separate from task complexity or the additional variables used in the two groups not included in this study. Table 1 illustrates the way in which Task 2 and Task 3 were counterbalanced during the second stage of the data collecting process.

Analyses

The different analyses of the ratio of dependent clauses to T-units required that all dependent clause types within the T-units be recorded separately during coding. This allowed for the analysis of dependent clauses as one group and also as separate items (see Table 2).

Table 1
Counterbalancing Task 2 and Task 3 during stage 2.

Participants (34 in total)   1st task assigned to participants   2nd task assigned to participants
17 participants              Task 2 (medium complexity)          Task 3 (high complexity)
17 participants              Task 3 (high complexity)            Task 2 (medium complexity)


Table 2
Two ways to analyse the ratio of dependent clauses to T-units.

Data sample: I have been here for three weeks. I think this city is the best one in New Zealand. I was so happy when I heard that you were coming to New Zealand.

1. Dependent clauses analysed as one group:
   3 T-units and 3 dependent clauses
2. Dependent clauses analysed separately:
   Adjective clauses: 3 T-units and 0 dependent clauses
   Nominal clauses:   3 T-units and 2 dependent clauses
   Adverbial clauses: 3 T-units and 1 dependent clause
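To make the two counting schemes in Table 2 concrete, the following is a minimal, hypothetical sketch (not the authors' code) of how hand-coded clause counts could be turned into the dependent-clause-to-T-unit ratios used later in the Analyses section, where each ratio is the number of T-units plus dependent clauses divided by the number of T-units.

```python
# Hypothetical sketch (not the authors' code): turning hand-coded clause counts
# into the dependent-clause-to-T-unit ratios described in the Analyses section.

def dc_per_t_unit(t_units, dependent_clauses):
    # Ratio as described in the Analyses section:
    # (T-units + dependent clauses) / T-units.
    return (t_units + dependent_clauses) / t_units

# Hand-coded counts for the Table 2 data sample (3 T-units).
t_units = 3
coded = {"adjective": 0, "nominal": 2, "adverbial": 1}

# 1. Dependent clauses analysed as one group.
print(round(dc_per_t_unit(t_units, sum(coded.values())), 2))   # 2.0

# 2. Each dependent clause type analysed separately.
for clause_type, count in coded.items():
    print(clause_type, round(dc_per_t_unit(t_units, count), 2))
# adjective 1.0, nominal 1.67, adverbial 1.33
```

On this operationalization, a text containing no dependent clauses of a given type scores 1.0 rather than 0, which is consistent with the values reported in Table 3.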

Table 3
Performance comparisons between tasks: means and standard deviations of the dependent variables.

                         Task 1          Task 2          Task 3
Dependent variables      M       SD      M       SD      M       SD
Syntactic
  DC per T-unit          1.41    .20     1.40    .15     1.35    .14
  ADJ-DC per T-unit      1.05    .06     1.07    .08     1.07    .08
  NOM-DC per T-unit      1.18    .10     1.20    .10     1.17    .08
  ADV-DC per T-unit      1.16    .11     1.10    .08     1.09    .08
Lexical
  MSTTR                  82.00   3.53    82.82   4.00    83.81   3.52

To determine the ratio of dependent clauses to T-units across all dependent clauses in any one text, the number of dependent clauses and T-units in that text were added and then divided by the number of T-units. To ascertain the ratio of dependent clauses to T-units with dependent clauses separated in the same text, the number of T-units was added to the number of whichever dependent clause type was being analysed.

A mean segmental type-token ratio was used in the analysis of lexical variation because it accounts for text length by dividing each text into segments of equal word length. To waste as little data as possible, the texts from the letter-writing tasks were divided into segments of 40 words, a number based on the length of the texts provided by the students.

Syntactic complexity was analysed using a repeated measures ANOVA with a confidence level of .05. This provided a score representing the mean length of T-units for each task (see Table 3). Lexical complexity was also analysed using a repeated measures ANOVA with the same confidence level, providing a score representing the mean number of varied lexical items for each task (see Table 3). For both syntactic and lexical complexity, a Bonferroni-corrected post hoc pairwise comparison with a confidence level of .017 was used as needed. Before proceeding with the analyses, the data were checked for normality. The histograms revealed slight positive skews in the distribution; however, a Friedman test confirmed that this skew did not affect the validity of the ANOVA results.
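As an illustration of the lexical measure, the following is a minimal sketch of a mean segmental type-token ratio computed over 40-word segments, as described above. It is not the authors' code: the handling of a final segment shorter than 40 words (dropped here) and the scaling to a percentage (to match the values in Table 3) are assumptions, and the commented statsmodels call is only one possible way of running the repeated measures ANOVA on the resulting per-participant scores.

```python
# Hypothetical sketch of a mean segmental type-token ratio (MSTTR); not the authors' code.

def msttr(text, segment_length=40):
    """Average type-token ratio over consecutive 40-word segments, as a percentage.

    Assumption: any trailing segment shorter than `segment_length` is dropped.
    """
    tokens = text.lower().split()
    segments = [tokens[i:i + segment_length]
                for i in range(0, len(tokens) - segment_length + 1, segment_length)]
    if not segments:
        raise ValueError("Text is shorter than one segment.")
    ratios = [len(set(segment)) / len(segment) for segment in segments]
    return 100 * sum(ratios) / len(ratios)

# One MSTTR score per participant per task could then be analysed with a
# repeated measures ANOVA, e.g. (one possible tool, not necessarily the one used):
#   from statsmodels.stats.anova import AnovaRM
#   AnovaRM(df, depvar="msttr", subject="participant", within=["task"]).fit()
```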


Results

Table 3 shows the mean score and standard deviation of dependent clauses per T-unit for all three tasks, with dependent clauses analysed as one group (DC per T-unit). The result was not statistically significant, F(2, 32) = 3.02, p = .06. There was no significant variation in mean scores among Task 1, Task 2, and Task 3 when testing for effects of increases in task complexity on the number of dependent clauses per T-unit across all dependent clause types.

Table 3 also shows the mean scores and standard deviations for dependent clauses per T-unit where adjectival (ADJ-DC per T-unit), nominal (NOM-DC per T-unit), and adverbial (ADV-DC per T-unit) dependent clauses were measured separately. Repeated measures ANOVAs established no statistical differences in syntactic complexity between the three tasks for adjectival dependent clauses, F(2, 32) = .70, p = .50, or for nominal dependent clauses, F(2, 32) = 2.21, p = .12. However, a statistically significant result was found for adverbial dependent clauses, F(2, 32) = 4.72, p = .01. The effect size, calculated using partial eta-squared (η²p), was .228, which is considered large. A Bonferroni-corrected post hoc pairwise comparison with an alpha level of .017 was performed on the adverbial dependent clauses. This revealed a statistically significant variation in mean scores (p = .02) between Task 1 (low complexity) and Task 2 (medium complexity) and a significant variation in mean scores (p = .02) between Task 1 (low complexity) and Task 3 (high complexity). This shows that Task 1 had a significantly higher mean score for adverbial dependent clauses per T-unit than Task 2 and Task 3. No significant change in the mean number of adverbial dependent clauses per T-unit, attributable to variations in task complexity between Task 2 and Task 3, was established.

The mean scores and standard deviations of the mean segmental type-token ratios (MSTTR) for all three tasks are also shown in Table 3. A repeated measures ANOVA yielded a statistically significant result among Task 1, Task 2, and Task 3, F(2, 32) = 3.40, p = .04. The effect size, calculated using partial eta-squared (η²p), was .176, which is considered large. Though lexical variety increased incrementally between tasks, not every increase was statistically significant. A Bonferroni-corrected post hoc pairwise comparison with an alpha level of .017 revealed that the only statistically significant variation in mean scores (p = .03) was between Task 1 (low complexity) and Task 3 (high complexity).
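As a side note on how the "large" label can be read, the short hypothetical snippet below converts the reported partial eta-squared values to Cohen's f; the benchmarks used (roughly .14 for a large partial eta-squared and .40 for a large f) are conventional rules of thumb rather than values taken from this article.

```python
# Hypothetical illustration (not part of the study's analysis): converting the
# reported partial eta-squared values to Cohen's f for comparison with the
# conventional "large" benchmarks (partial eta-squared ~.14, f ~.40).

def partial_eta_sq_to_f(eta_sq_p):
    return (eta_sq_p / (1 - eta_sq_p)) ** 0.5

for label, eta in [("ADV-DC per T-unit", 0.228), ("MSTTR", 0.176)]:
    print(f"{label}: f = {partial_eta_sq_to_f(eta):.2f}")
# ADV-DC per T-unit: f = 0.54
# MSTTR: f = 0.46
```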

Discussion

This study had three questions related to the central objective of investigating the effects of increases in cognitive task complexity (reasoning demands and number of elements) on complex written output (ratio of dependent clauses to T-units and mean segmental type-token ratio).

1. What are the effects of cognitive task complexity on syntactic complexity when measured (a) as the ratio of dependent clauses to T-units across all dependent clauses, and (b) as the ratio of dependent clauses to T-units when dependent clauses are measured separately?

The first objective was to compare the effects of increases in cognitive task complexity on the two measures of the ratio of dependent clauses to T-units, and to analyse the findings at the level of individual dependent clause types. The results for the ratio of dependent clauses to T-units across all dependent clauses appeared to show that increases in cognitive task complexity effected no statistically significant change (p = .06) in the mean length of T-units when dependent clauses were analysed as a single group. This result is consistent with previous findings (Kuiken & Vedder, 2007, 2008, 2012; Sercu et al., 2006). Characteristics inherent in the writing process, such as planning time, selective control, and recursion (Ellis & Yuan, 2004; Grabe & Kaplan, 1996; Grabowski, 2007; Kormos & Trebits, 2012), were predicted to facilitate the production of complex linguistic structures that require attention. These characteristics were expected to aid the processes predicted by the Cognition Hypothesis (Robinson, 2001a, 2001b, 2005, 2007, 2011), in which increases in cognitive task complexity would create increases in conceptual development, leading to the production of the complex language needed to express those concepts. Contrary to expectations, increases in cognitive task complexity did not result in an increase in syntactic complexity as measured. Nevertheless, the non-significant results of this study may be an indicator that the applied levels of cognitive task complexity were incorrectly aligned with the degree to which the participants' ability to use subordination was automatized, thus overtaxing limited attentional resources. Potentially, the findings represent incorrect variable alignment rather than a refutation of the Cognition Hypothesis.

Whereas the ratio of dependent clauses to T-units across all dependent clauses showed no significant results, the ratio of dependent clauses to T-units analysing each dependent clause type separately revealed more detailed variations, which included one significant result. The results showed statistically significant findings for decreases in adverbial dependent clauses, non-significant increases and decreases in nominal dependent clauses, and almost no variation in adjectival dependent clauses across all three tasks. Because these varied results for each individual dependent clause type resulted from the application of the same amount of cognitive task complexity, it appears that other elements contributed to these variations.

Adverbial dependent clauses exhibited statistically significant variation, with Task 1 (low complexity) having a higher incidence of adverbial clauses than Task 2 (medium complexity) and Task 3 (high complexity). One possible explanation for these findings is that, compared to Task 1, both Task 2 and Task 3 may have been too complex. In other words, beyond a certain level of cognitive task complexity, no differences were found in the linguistic complexity of the output because writers' attention was overtaxed.
This overtaxation, in turn, may be a function of learners' proficiency, in particular the extent to which they may have automatized the production of certain language structures. We suggest that assessing such threshold language proficiency effects on the relationship between task complexity and output complexity is an important area for future research.

Other prospective causes for these findings include the pragmatic requirements of the tasks (Bygate, 1999; Ryshina-Pankova, in press; Ryshina-Pankova & Byrnes, 2013) and personal choice (Pallotti, 2009). In a study that utilized argument tasks with complexity demands (giving reasons and justifications) similar to those of the current study, Bygate (1999) found that different task types led to different frequencies of subordination. Ryshina-Pankova (in press) also makes the argument that form and function should not be separated in the study of complexity. Ryshina-Pankova and Byrnes (2013) noted that increases in complexity, operationalized as the occurrence of nominalization, were associated with increased mastery of the academic register. It is possible, therefore, that the differences in findings between different subordinate clauses in our study were also a by-product of the task type, in that particular dependent clauses were not frequently needed as a means of fulfilling the pragmatic requirements of the task. With regard to personal preference, Pallotti (2009) suggested that differences in accurate, complex, and fluent output should not only be attributed to psycholinguistic factors (such as memory) because sometimes the elements of production may just be a matter of choice. However, we suggest that psycholinguistic factors like memory are central to making language choices.


Furthermore, the notion that the participants in this study simultaneously chose to use the same language seems less likely than their language choices being a response to the task-related factors to which they were all exposed.

2. What are the effects of increases in cognitive task complexity on lexical variety?

The second objective was to investigate the effects of increased cognitive task complexity on lexical variety. Unlike the decreases that mostly characterized the findings for syntactic complexity, there were increases in lexical complexity, specifically between Task 1 (low complexity) and Task 3 (high complexity). The findings seemed to provide some evidence that increases along resource-directing dimensions (Robinson, 2001a, 2001b, 2005, 2007, 2011) could trigger the production of more complex language. However, it is possible that the increases in lexical complexity were the result of the increasing number of lexical items supplied with each increasingly complex set of instructions, because writers can produce texts using information taken from task instructions or memory (Kormos, 2011). Therefore, the findings may not be the result of pushed complex output due to increased complexity. Rather, the retrieval of lexical items from the instructions may be a confounding variable.

Ellis and Yuan (2004) claim that, under some conditions, when learners formulate what they will write, they may prioritize the search for lexical items over the generation of grammar, similar to Levelt's (1989) psycholinguistic speaking model. This prioritizing may also explain the increases in lexical variety while syntactic complexity remained constant or, in the case of adverbial dependent clauses, decreased. Participants may have used the limited cognitive resources available to them to focus on the easier lexical means of meeting the pragmatic requirements of the tasks. If true, then the findings for lexical complexity might only ostensibly support Robinson's Cognition Hypothesis, with trade-offs between syntactic and lexical means of expression having more in common with the Trade-off Hypothesis (Skehan, 1998, 2003, 2014; Skehan & Foster, 1999, 2001). The pressure brought to bear on limited attentional resources by cognitive task complexity, as well as the cognitive pressure of producing language that was not fully automatized, may have required a trade-off. In this instance, the trade-off was between more and less demanding types of complex expression.

3. Does using a patently low complexity task contribute to the tracking of the effects of cognitive task complexity on complex writing?

The third objective was to examine whether a patently low complexity task contributed to the tracking of the effects of cognitive task complexity on complex writing, given that increases or decreases in cognitive task complexity cannot be accurately controlled and measured between tasks. As a result, findings might be obscured if tasks were either too complex or not complex enough. The findings in this study indicate that the patently lower complexity task clarified the effects of cognitive task complexity on syntactic complexity. In the case of the adverbial dependent clause ratio and the mean segmental type-token ratio, the patently low complexity task (Task 1) appeared to provide clearer indications of the effects of increases in cognitive task complexity. Statistically significant results were found between Task 1 (low complexity) and one or both of the other, more complex tasks. However, two points need to be made about these results.
First, the fact that the patently low complexity task (Task 1) was, by necessity, performed separately and not included in the counterbalancing has to be considered as a potential factor affecting these results. Second, a qualification needs to be made regarding the low complexity task. To create a task of patently lower complexity, Task 1 was designed to encourage the participants to utilize only their own resources (as opposed to synthesizing the extra information supplied in the task instructions for Tasks 2 and 3). Additionally, the Task 1 instructions avoided words or phrases that might elicit any opinions or reasoning demands. Problematically, the potential exists for the low complexity task to have different pragmatic requirements. This may be a contributing factor affecting subordinate output in conjunction with variations in cognitive task complexity. In future research, this potential mitigating factor needs to be addressed.

This study was framed as a partial replication of the work of Kuiken and Vedder (2007, 2008, 2012) and Sercu et al. (2006), studies that used similar independent and dependent variables. Considering that the significant variations in the current study were all observed between the patently low complexity task (something not used in past research) and the more complex tasks, this might have highlighted an unaccounted-for variable affecting past findings, notably the inability to accurately measure cognitive task complexity between complex tasks. It is worth noting that the significant results were seemingly generated by the combination of both the patently lower complexity task and the more detailed measure of subordination, elements that were both absent in previous studies. This highlights the possibility that the results of previous work might have been affected by the omission of these variables. As mentioned earlier, partial replication studies may contribute to fully understanding the relationship between one set of variables and any extra, unaccounted-for variables that might be affecting the results. This paper may have illuminated two such factors: specifically, the added effects of using patently low complexity tasks and of a more detailed analysis of subordination on studies in complexity and writing.

Limitations and future research

As with most studies, there remain a number of limitations with this research. First, the potential difference in pragmatic task requirements between the patently lower complexity task and the two higher complexity tasks is a possible limitation.


This may need to be addressed in future research to provide a stronger case for differences in performance being related to variations in task complexity.

Second, there may be limitations related to our analyses of structural complexity, in which subordination might not have accurately captured the means by which intermediate learners express written complexity. Contrary to Norris and Ortega's (2009) proposed stages of syntactic complexity development, in which subordination is viewed as an appropriate expression of complexity for intermediate writers, findings by Bulté and Housen (2014) and Crossley and McNamara (2014) point to different stages of development. In a study that targeted intermediate and advanced writers, Bulté and Housen (2014) found that over time, increases in complex writing, operationalized as clausal coordination and phrasal elaboration, were evident; however, this was not the case with subordination. Future research on writing complexity may need to consider measures in addition to subordination when investigating syntactic complexity.

Third, more investigations are needed in the field of cognitive task complexity to ensure that cognitive burdens of the type stipulated by Robinson (2001a, 2001b, 2005, 2007, 2011) are actually being triggered by the independent variables (Norris, 2010; Révész, 2014). A number of approaches that may contribute to a better understanding of cognitive burden have been suggested. For example, individual differences aside, a combination of subjective self-ratings, secondary task methodology, subjective time estimations, and some psychophysiological techniques may be a good way to estimate the cognitive load purported to be created by task complexity (Révész, 2014). Another method may be to test cognitively complex tasks, like those used in this study, on language users who have attained a high level of proficiency. This may provide a clearer indication of how these tasks affect learners under optimal processing conditions and thus provide clearer insight into what might be expected of learners situated at different stages of the language learning continuum.

Fourth, the sample size of this research could be another issue of concern. The present study had a sample size of 34 participants, which could be considered relatively small, thus limiting statistical power. If power was low due to the sample size, only large effects are likely to have been detected. This may have impacted the main effects and the significance of the pairwise comparisons. Future research on the same variables may benefit from larger samples.

Fifth, it could be argued that the difference between the current findings and previous work (Kuiken & Vedder, 2007, 2008, 2012; Sercu et al., 2006) might be a matter of the degrees of cognitive task complexity applied between tasks. Had the cognitive task complexity levels used in this study been increased further between tasks, the non-significant findings might have become statistically significant decreases in dependent clause production. Given the lack of a clear means by which degrees of cognitive task complexity can be accurately and incrementally varied, it is difficult to know what degree of variation would have elicited a statistically significant difference. Focusing on means by which cognitive task complexity might be more accurately measured and applied may be a central concern for future research in this area.
Research might also want to focus specifically on the effects of cognitive task complexity on acquisition, or on the introduction of pre-task planning time as an extra variable. It might be worthwhile to ascertain whether the extra processing time afforded by planning makes a difference to the type of cognitively stressful performances that were investigated in the current research. Additionally, assessing the thresholds at which task complexity overtaxes attentional resources on language that is not fully automatized is also an important area for future research. Finally, future research should take into consideration the social influences on cognitive task complexity and production. Given the predicted effects of motivation on attention, elements like learner motivation or task motivation might also be considered.

Concluding remarks

This study investigated the effects of variations in cognitive task complexity (reasoning demands and number of elements) on writing complexity. For the most part, the results from this research appear to have met the main objectives of the study, with the use of a patently non-complex task and a more detailed analysis of subordinate clauses appearing to contribute to significant findings. Robinson's Cognition Hypothesis, a theory originally framed around oral production, was tested in the written modality, with the belief that aspects of the writing process might facilitate complex output under cognitive pressure. Though lexical complexity appears to have increased as a result of increases in cognitive task complexity, this was not the case for subordination. However, we do not rule out the potential for cognitive task complexity to facilitate increases in subordination under certain conditions, such as when the applied degree of cognitive task complexity aligns with the degree to which a language structure is automatized.


Appendix A. Task 1 (low complexity)

Letter one.

Name

This activity is about writing a letter to a friend. Read the information below, then write a letter based on the situation and following the instructions.

Situation:
1. You have a close English-speaking friend called Peter.
2. Peter is thinking about coming to New Zealand.

Instructions:
1. You have to write a letter to your friend of about 200 to 250 words.
2. In this letter, you should tell Peter about New Zealand.
3. Start the letter below.

Dear Peter,

Appendix B. Task 2 (medium complexity)

Letter two.

Name

This activity is about writing a letter to a friend. Read the information below, then write a letter based on the situation and following the instructions.

Situation:
1. Your friend John is coming to New Zealand for one weekend, and there are two restaurants he really wants to try.
2. There is only time to go to one restaurant. As a result, John wants you to choose one restaurant.
3. Neither of the restaurants you have checked is perfect for John's requirements and yours.

Instructions:
1. Look at John's requirements in list A.
2. Look at the restaurant information in list B.
3. Consider your own personal preferences.
4. Using the information from lists A and B and your own preferences, write John a letter of between 200 and 250 words telling him which restaurant you have chosen and why you chose it.
5. Start the letter below.

Dear John,

Task 2: Supplementary information, lists A and B

List A. Information regarding you and John.

John's information:
1. He is arriving on Saturday morning and leaving on the following Monday afternoon.
2. Seafood and pork are his favourite foods.
3. He generally eats a lot.
4. He doesn't particularly love sweet food, but enjoys dessert sometimes.
5. He likes to drink a glass of wine with dinner.
6. He only speaks English.
7. He will be staying with you during his time here, so transportation will be your responsibility.

Your information:
1. When you are considering the restaurant, consider your actual personal preferences.


List B: Restaurant information.

Restaurant 1:
Opening times: 4:00 pm to 8:00 pm, Monday to Saturday.
Prices: Main courses (main meal) cost around $20.
Availability: The restaurant is usually very busy, and bookings (reserve a table) are necessary to get a table.
Critic's review of food quality: The seafood selection (what is available) is good and is considered very high quality. The beef is average quality. The pork is average quality. There are no desserts (ice cream etc.) at this restaurant. The portions (size of meal) are average size.
Drink: The beer and wine are expensive. There is no BYO (bring your own drinks).
Staff: Some staff speak English; some staff only speak Japanese.
Service: The service is quick, but the staff do not appear friendly.
Entertainment: Karaoke after 7:00 pm.
Location: In Auckland's central city.
Parking: The restaurant supplies no parking.

Restaurant 2:
Opening times: 4:00 pm to 9:00 pm, Tuesday to Sunday.
Prices: Main courses cost around $28.
Availability: Quiet during the week, sometimes busy on the weekend. No booking is necessary.
Critic's review of food quality: The seafood selection is good and the quality is good. The pork is good quality. The beef is high quality. The desserts are very high quality. The chicken is average quality. Portions are larger than average size.
Drink: Beer and wine are very expensive; however, you can BYO.
Staff: Some staff members speak English, and some staff members speak Chinese.
Service: The service is efficient, and the members of staff are helpful.
Entertainment: There is no entertainment supplied by the restaurant.
Location: In Auckland's central city.
Parking: The restaurant supplies a small amount of parking for customers, though much less than the restaurant requires.

References

Belcher, D., & Hirvela, A. (2001). Introduction. In D. Belcher & A. Hirvela (Eds.), Linking literacies: Perspectives on L2 reading–writing connections (pp. 1–14). Ann Arbor: University of Michigan Press.
Brown, G., Anderson, A., Shillcock, R., & Yule, G. (1984). Teaching talk: Strategies for production and assessment. Cambridge: Cambridge University Press.
Bulté, B., & Housen, A. (2012). Defining and operationalising L2 complexity. In A. Housen, F. Kuiken, & I. Vedder (Eds.), Dimensions of L2 performance and proficiency: Complexity, accuracy and fluency in SLA (pp. 21–46). Amsterdam: John Benjamins.
Bulté, B., & Housen, A. (2014). Conceptualizing and measuring short-term changes in L2 writing complexity. Journal of Second Language Writing, 26, 42–65.
Bygate, M. (1999). Task as context for the framing, reframing and unframing of language. System, 27, 33–48.
Carless, D. (2012). TBLT in EFL settings: Looking back and moving forward. In A. Shehadeh & C. A. Coombe (Eds.), Task-based language teaching in foreign language contexts: Research and implementation (pp. 345–358). Amsterdam: John Benjamins.
Cheng, P. W. (1985). Restructuring versus automaticity: Alternative accounts of skill acquisition. Psychological Review, 92, 414–423.
Cheung, H., & Kemper, S. (1992). Competing complexity metrics and adults' production of complex sentences. Applied Psycholinguistics, 13, 53–76.
Cook, V. J. (1994). Linguistics and second language acquisition. London: Macmillan.
Cooper, T. C. (1976). Measuring written syntactic patterns of second language learners of German. Journal of Educational Research, 69, 176–183.
Crossley, S. A., & McNamara, D. S. (2014). Does writing development equal writing quality? A computational investigation of syntactic complexity in L2 learners. Journal of Second Language Writing, 26, 66–79.
Crowhurst, M. (1983). Syntactic complexity and writing quality: A review. Canadian Journal of Education, 8, 1–16.
Ellis, R. (2003). Task-based language learning and teaching. Oxford: Oxford University Press.
Ellis, R., & Barkhuizen, G. (2005). Analyzing learner language. Oxford: Oxford University Press.
Ellis, R., & Yuan, F. (2004). The effects of planning on fluency, complexity, and accuracy in second language narrative writing. Studies in Second Language Acquisition, 26, 59–84.
Flower, L. S., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition and Communication, 32, 365–387.
Grabe, W., & Kaplan, R. B. (1996). Theory and practice of writing: An applied linguistic perspective. London: Longman.
Grabowski, J. (2007). The writing superiority effect in the verbal recall of knowledge: Source and determinants. In M. Torrance, L. van Waes, & D. Galbraith (Eds.), Writing and cognition: Research and applications (pp. 165–179). Bingley, UK: Emerald Group.
Hake, R., & Williams, J. M. (1979). Sentence expanding: Not can, or how, but when. In D. Daiker, A. Kerek, & M. Morenberg (Eds.), Sentence combining and the teaching of writing. Akron, OH: L & S Books.
Hudson, R. (2009). Measuring maturity. In R. Beard, D. Myhill, J. Riley, & M. Nystrand (Eds.), The SAGE handbook of writing development (pp. 349–362). London: Sage Publications.
Hunt, K. W. (1965). Grammatical structures written at three grade levels. Urbana, IL: The National Council of Teachers of English.


Ishikawa, T. (2007). The effect of manipulating task complexity along the [here-and-now] dimension on L2 written narrative discourse. In M. del Pilar García Mayo (Ed.), Investigating tasks in formal language learning (pp. 136–156). Clevedon: Multilingual Matters.
Kormos, J. (2011). Task complexity and linguistic and discourse features of narrative writing performance. Journal of Second Language Writing, 20, 148–161.
Kormos, J., & Trebits, A. (2012). The role of task complexity, modality, and aptitude in narrative task performance. Language Learning, 62, 439–472.
Kuiken, F., & Vedder, I. (2007). Task complexity and measures of linguistic performance in L2 writing. International Review of Applied Linguistics in Language Teaching, 45, 261–284.
Kuiken, F., & Vedder, I. (2008). Cognitive task complexity and written output in Italian and French as a foreign language. Journal of Second Language Writing, 17, 48–60.
Kuiken, F., & Vedder, I. (2012). Syntactic complexity, lexical variation and accuracy as a function of task complexity and proficiency level in L2 writing and speaking. In A. Housen, F. Kuiken, & I. Vedder (Eds.), Dimensions of L2 performance and proficiency: Complexity, accuracy and fluency in SLA (pp. 143–170). Amsterdam: John Benjamins.
Levelt, W. (1989). Speaking: From intention to articulation. Cambridge, MA: MIT Press.
Lord, C. (2002). Are subordinate clauses more difficult? In J. L. Bybee & M. Noonan (Eds.), Complex sentences in grammar and discourse: Essays in honor of Sandra A. Thompson (pp. 223–233). Amsterdam: John Benjamins.
McLaughlin, B. (1990). Restructuring. Applied Linguistics, 11, 113–128.
Mendelsohn, D. J. (1983). The case for considering syntactic maturity in ESL and EFL. International Review of Applied Linguistics in Language Teaching, 21, 299–311.
Norris, J. M. (2010). Understanding instructed SLA: Constructs, contexts, and consequences. Plenary address delivered at the annual conference of the European Second Language Association (EUROSLA).
Norris, J. M., & Ortega, L. (2009). Towards an organic approach to investigating CAF in instructed SLA: The case of complexity. Applied Linguistics, 30, 555–578.
Ong, J. (2014). How do planning time and task conditions affect metacognitive processes of L2 writers? Journal of Second Language Writing, 23, 17–30.
Ong, J., & Zhang, L. J. (2010). Effects of task complexity on the fluency and lexical complexity in EFL students' argumentative writing. Journal of Second Language Writing, 19, 218–233.
Ong, J., & Zhang, L. J. (2013). Effects of the manipulation of cognitive processes on EFL writers' text quality. TESOL Quarterly, 47, 375–398.
Pallotti, G. (2009). CAF: Defining, refining and differentiating constructs. Applied Linguistics, 30, 590–601.
Porte, G., & Richards, K. (2012). Focus article: Replication in second language writing research. Journal of Second Language Writing, 21, 284–293.
Révész, A. (2014). Towards a fuller assessment of cognitive models of task-based learning: Investigating task-generated cognitive demands and processes. Applied Linguistics, 35, 87–92.
Robinson, P. (2001a). Task complexity, cognitive resources, and syllabus design: A triadic framework for examining task influences on SLA. In P. Robinson (Ed.), Cognition and second language instruction (pp. 287–318). Cambridge: Cambridge University Press.
Robinson, P. (2001b). Task complexity, task difficulty, and task production: Exploring interactions in a componential framework. Applied Linguistics, 22, 27–57.
Robinson, P. (2005). Cognitive complexity and task sequencing: Studies in a componential framework for second language task design. International Review of Applied Linguistics in Language Teaching, 43, 1–32.
Robinson, P. (2007). Criteria for classifying and sequencing pedagogic tasks. In M. del Pilar García Mayo (Ed.), Investigating tasks in formal language learning (pp. 7–26). Clevedon: Multilingual Matters.
Robinson, P. (2011). Second language task complexity, the Cognition Hypothesis, language learning, and performance. In P. Robinson (Ed.), Second language task complexity: Researching the Cognition Hypothesis of language learning and performance (pp. 3–37). Amsterdam: John Benjamins.
Ryshina-Pankova, M. (in press). A meaning-based approach to complexity in L2 writing: The case of grammatical metaphor. Journal of Second Language Writing. http://dx.doi.org/10.1016/j.jslw.2015.06.005
Ryshina-Pankova, M., & Byrnes, H. (2013). Writing as learning to know: Tracing knowledge construction in L2 German compositions. Journal of Second Language Writing, 22, 179–197.
Schmidt, R. (2001). Attention. In P. Robinson (Ed.), Cognition and second language instruction (pp. 3–32). Cambridge: Cambridge University Press.
Sercu, L., De Wachter, L., Peters, E., Kuiken, F., & Vedder, I. (2006). The effect of task complexity and task conditions on foreign language development and performance: Three empirical studies. International Journal of Applied Linguistics, 152, 55–84.
Skehan, P. (1996). A framework for the implementation of task based instruction. Applied Linguistics, 17, 38–62.
Skehan, P. (1998). A cognitive approach to language learning. Oxford: Oxford University Press.
Skehan, P. (2003). Task-based instruction. Language Teaching, 36, 1–14.
Skehan, P. (2014). Limited attentional capacity, second language performance, and task-based pedagogy. In P. Skehan (Ed.), Processing perspectives on task performance (task-based language teaching) (pp. 211–260). Amsterdam: John Benjamins.
Skehan, P., & Foster, P. (1999). The influence of task structure and processing conditions on narrative retellings. Language Learning, 49, 93–120.
Skehan, P., & Foster, P. (2001). Cognition and tasks. In P. Robinson (Ed.), Cognition and second language instruction (pp. 183–205). Cambridge: Cambridge University Press.
Skehan, P., & Foster, P. (2012). Complexity, accuracy, fluency and lexis in task-based performance: A synthesis of the Ealing research. In A. Housen, F. Kuiken, & I. Vedder (Eds.), Dimensions of L2 performance and proficiency: Complexity, accuracy and fluency in SLA (pp. 120–199). Amsterdam: John Benjamins.
Slobin, D. (1993). Adult language acquisition: A view from child language study. In C. Perdue (Ed.), Adult language acquisition: Cross-linguistic perspectives: Vol. 2. The results (pp. 239–252). Cambridge: Cambridge University Press.
Swain, M. (1985). Communicative competence: Some roles of comprehensible input and comprehensible output in its development. In S. Gass & C. Madden (Eds.), Input in second language acquisition (pp. 235–253). New York: Newbury House.
Wolfe-Quintero, K., Inagaki, S., & Kim, H. Y. (1998). Second language development in writing: Measures of fluency, accuracy, and complexity. Honolulu: University of Hawaii Press.

Doctor Mark Frear is currently teaching academic writing at Qatar University in Qatar. His current research examines the effects of cognitive task complexity, pre-task planning time, and motivation on the written complexity of second language writers.
Professor John Bitchener is currently supervising 11 doctoral students in areas of L2 teaching and learning at AUT University, Auckland, NZ, and has published more than 50 articles in top Applied Linguistics journals. He is the author of one book on thesis writing and another on written correction for second language acquisition and writing (with Ferris). Two new books (one on written CF for L2 development, with Storch, and another addressing the writing issues of L2 writers) are due in 2015.