Measuring strategic control in artificial grammar learning


Consciousness and Cognition 20 (2011) 1920–1929


Elisabeth Norman a,b,*, Mark C. Price a, Emma Jones a

a Faculty of Psychology, University of Bergen, Postboks 7807, 5020 Bergen, Norway
b Haukeland University Hospital, 5021 Bergen, Norway
* Corresponding author at: Psychology Faculty, University of Bergen, Christies gate 13, 5020 Bergen, Norway.

Article history: Received 21 January 2011; Available online 6 August 2011.

Keywords: Strategic control; Intentional control; Flexibility; Implicit learning; Artificial grammar learning; Consciousness; Fringe consciousness; Unconscious; Intuition; Cognitive feelings

Abstract

In response to concerns with existing procedures for measuring strategic control over implicit knowledge in artificial grammar learning (AGL), we introduce a more stringent measurement procedure. After two separate training blocks, each consisting of letter strings derived from a different grammar, participants either judged the grammaticality of novel letter strings with respect to only one of these two grammars (pure-block condition) or had the target grammar vary randomly from trial to trial (novel mixed-block condition), which required a higher degree of conscious flexible control. Random variation in the colour and font of letters was introduced to disguise the nature of the rule and reduce explicit learning. Strategic control was observed both in the pure-block and mixed-block conditions, and even among participants who did not realise the rule was based on letter identity. This indicated detailed strategic control in the absence of explicit learning.

© 2011 Elsevier Inc. All rights reserved.

1. Introduction

During implicit learning, knowledge acquisition occurs largely unintentionally, and at least the details of the learned knowledge are not consciously available to the learner. In implicit learning research, it is becoming increasingly common to measure the degree of strategic control that participants have over their acquired knowledge – i.e., to measure the ability to use the knowledge in a flexible, controlled, context-dependent manner (Destrebecqz & Cleeremans, 2001, 2003; Fu, Dienes, & Fu, 2010; Norman, Price, & Duff, 2006; Norman, Price, Duff, & Mentzoni, 2007; Wilkinson & Shanks, 2004). In this paper we suggest some limitations in the way that strategic control has been measured in artificial grammar learning (AGL), which is one of the most studied implicit learning paradigms. We then present an experiment which demonstrates how measurement of strategic control can be improved, and which assesses whether strategic control over knowledge is possible without full explicit knowledge of what has been learned.

Following theoretical work on consciousness by Baars (1988, 2002), and the process dissociation procedure of Jacoby and colleagues (Jacoby, 1991, 1994; Jacoby, Toth, & Yonelinas, 1993), strategic control is often taken as a hallmark of conscious or explicit knowledge, rather than of unconscious knowledge whose influences are much more automatic. Indeed, strategic control has been used methodologically as a broad criterion to decide whether acquired knowledge should be regarded as conscious and explicitly learned, or as unconscious and implicitly learned. For example, some studies have argued that the learning of sequences of visual target positions in the serial reaction time task (SRT) – as measured by reaction times of rapid key presses to indicate consecutive target positions – is implicit, since participants cannot deliberately withhold the influence of their learned knowledge in a generation task using "exclusion" instructions (Destrebecqz & Cleeremans, 2001, 2003; Goschke, 1998). By contrast, other studies have argued that such learning is explicit since they found that the influence of learned knowledge can be deliberately withheld (Wilkinson & Shanks, 2004).


Moving beyond a simple dichotomisation between explicit and implicit learning, measurement of strategic control has been included alongside other subjective and objective measures of learning to assess exactly which aspects of acquired knowledge participants are conscious of. For example, Dienes, Altmann, Kwan, and Goode (1995) have explored strategic control in artificial grammar learning (AGL). In a typical AGL experiment (Reber, 1967), participants are first presented with a series of letter strings in which the identity and order of letters are governed by a complex rule set referred to as a finite state grammar. Strings that conform to the rule set are "grammatical" and strings that violate it are "ungrammatical". However, participants are not informed of the existence of a grammar during training. The structure of the grammar can be conceived of as a network of nodes between which only some transitions are permitted, with each legal transition corresponding to a letter (see Fig. 1). In a subsequent test phase, participants are informed of the existence of a grammar and asked to classify whether each of a series of novel letter strings follows this grammar or violates it.

The innovation introduced by Dienes et al. (1995) was to passively expose all participants in an AGL experiment to two sets of letter strings obeying two different finite state grammars (A or B). They then tested participants' ability to classify whether a novel set of letter strings followed specifically one of these grammars. Within any block of trials the target grammar for string classification was either always A or always B. Participants could classify novel strings with above-chance accuracy in all experiments, regardless of whether they were told to classify according to rule set A or rule set B. These experiments therefore showed strategic control over rule knowledge. From the perspective that strategic control should be seen as a hallmark of conscious knowledge, one interpretation is that learning was explicit. This was also supported by an overall significant relationship between classification accuracy and confidence ratings in two out of three experiments. However, strategic control also occurred on the subset of trials where participants rated that they guessed their classification response. It has therefore been argued by Dienes et al. (1995) that participants might be able to strategically control which grammar they are applying even when they are not conscious of the details of the rules of that grammar. Later, Dienes and Scott (2005) distinguished between judgement knowledge, which refers to knowledge of whether particular items are grammatical, and structural knowledge, which refers to knowledge of the rules of the grammar. The findings of Dienes et al. (1995) could be considered an example of strategic control in the absence of conscious structural knowledge. A similar finding was reported in a recent study by Wan, Dienes, and Fu (2008), who found strategic control over knowledge of two artificial grammars to which people had been exposed, even on trials where participants rated their grammar classification response to be based on intuition, familiarity, or even random choice rather than rule awareness.

However, the procedure used to measure strategic control in these experiments is not without limitations. Participants in the studies of Dienes et al. and Wan et al. successfully attempted, during any given block of test trials, to classify novel letter strings according to only one of two learned grammars. However, as suggested by Norman et al. (2006), successful performance on this type of AGL task may reflect an ability to voluntarily control whether or not acquired knowledge of a rule is generally activated and allowed to influence task performance, rather than reflect moment-by-moment control over acquired knowledge. Switching the target grammar from trial to trial would therefore be a more demanding test of strategic control than instructing participants to activate one mental set rather than another at the start of a block of trials (i.e., instructing them to use grammar A vs. use grammar B).
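To make the finite-state representation concrete, the following is a minimal Python sketch of how such a grammar can be stored as a transition table, used to generate strings, and used to check strings. The transition table below is a small toy example for illustration only; it is not the actual Grammar A or Grammar B shown in Fig. 1.

```python
import random

# Toy finite-state grammar: each state maps to its legal (letter, next_state)
# transitions; a next_state of None marks a permitted exit from the grammar.
TOY_GRAMMAR = {
    0: [("X", 1), ("V", 2)],
    1: [("M", 1), ("R", 2)],
    2: [("T", 2), ("M", None)],   # emitting "M" from state 2 ends the string
}

def generate(grammar, start=0):
    """Random walk through the grammar, emitting one letter per transition."""
    letters, state = [], start
    while state is not None:
        letter, state = random.choice(grammar[state])
        letters.append(letter)
    return "".join(letters)

def is_grammatical(string, grammar, start=0):
    """Depth-first check of whether the string can be produced by the grammar."""
    def follow(state, i):
        if state is None:
            return i == len(string)
        if i == len(string):
            return False
        return any(follow(next_state, i + 1)
                   for letter, next_state in grammar[state] if letter == string[i])
    return follow(start, 0)

print(generate(TOY_GRAMMAR))                 # e.g. "XMRTM"
print(is_grammatical("XRTM", TOY_GRAMMAR))   # True
print(is_grammatical("XTM", TOY_GRAMMAR))    # False: "T" cannot follow "X"
```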

Fig. 1. The two finite-state grammars used, (a) Grammar A, and (b) Grammar B (Dienes et al., 1995; Reber, 1969). The figures are adapted from Dienes et al. (1995).


If a dissociation between strategic control and metacognitive awareness could be shown even when participants are required to switch between the grammars from trial to trial, this would provide stronger evidence for strategic control in AGL. Note that the concerns related to Dienes et al.'s pure-block procedure also apply to a two-grammar AGL study by Higham, Vokey, and Pritchard (2000), where participants showed some ability to classify whether novel strings were from grammar A, from grammar B, or from neither. Although participants had to maintain two active grammar representations at the same time, and judge which was closest to the current letter string, they did not have to switch mental sets and selectively control the application of each grammar. (It might even be argued that applying both grammars on any one trial is less demanding than maintaining one grammar while inhibiting another over the course of a block, as in Dienes et al. or Wan et al.)

Similar criticism of how strategic control is measured has already been addressed in the serial reaction time task (SRT), which is another widely used implicit learning paradigm (Nissen & Bullemer, 1987). In the SRT task participants observe a simple target, such as a circle, that appears in rapid succession in different screen locations. The locations follow a repeating sequence. Learning of the sequence is measured partly by the reaction times of rapid key presses to indicate consecutive target positions, and partly by participants' ability to freely generate key press sequences that follow the learned rule. In the context of the SRT task, the ability to comply with instructions to avoid generating legitimate sequences has been proposed as an example of strategic control (Destrebecqz & Cleeremans, 2001; Goschke, 1998). However, it has been argued that participants might be able to strategically suppress the influence of learned sequence knowledge by adopting response strategies such as perseveration of key presses (Wilkinson & Shanks, 2004), or via a global inhibition of sequence knowledge that does not require any trial-by-trial control over the knowledge (Dienes & Scott, 2005; Norman et al., 2006). Norman et al. (2007) therefore developed a more demanding measure of strategic control that was immune to these criticisms. They showed that participants in a modified SRT experiment could successfully use their "intuitive" sequence knowledge to predict the next target move in a flexible manner, according to stimulus–response mappings that varied randomly from trial to trial.

We now report an experiment that tests whether trial-by-trial strategic control is also possible in the context of an AGL task. All participants were first exposed to two sets of stimuli consisting of letter strings structured according to two different finite state grammars taken from Dienes et al. (1995) and Reber (1969) (see Fig. 1). Grammar knowledge was later tested by asking participants on each test trial to pick which of three simultaneously-presented letter strings conformed to one of the target grammars. A between-subjects design was used. For participants in a pure-block condition the target grammar in the test phase was always grammar A or always grammar B, even though all participants were initially exposed to both grammars. For participants in a mixed-block condition, the target grammar varied randomly from trial to trial during the test phase.
We also wanted to determine whether any ability to select target grammars under the more strategically demanding mixed-block procedure would be accompanied by sufficient metaknowledge to generate predictive confidence ratings. To this end we asked participants to give confidence ratings of their grammar judgement at the end of each trial.

In order to reduce the likelihood of explicit learning, and to facilitate our ability to identify participants who did not develop explicit knowledge, we made an important modification to the appearance of the stimuli used in our study. One aspect of the standard AGL paradigm that may promote explicit learning at the cost of more implicit learning is that the only stimulus dimension that varies between individual letter strings is the identity and order of the letters. This makes it easy for participants to develop a general understanding of the nature of the rule, i.e., that it indeed has to do with the identity and order of letters. Participants will readily suspect this during the training phase even if they are not informed of the existence of rules. The likelihood that participants will engage in conscious hypothesis-testing is therefore increased, perhaps resulting in the development of explicit knowledge of at least fragments of the rules (see e.g., Wilkinson & Shanks, 2004). Using a technique introduced by Norman et al. (2007), we therefore modified the AGL task so that it was less apparent that the rule had to do with letter sequence. Norman et al. showed that trial-by-trial flexibility in the use of rule knowledge in the SRT task could still be expressed when the nature of the rule that governed target position was disguised by random variation in the shapes and colours of targets, even for participants who were now unaware that the rule was based on sequences of locations. Following Norman et al. (2007), we attempted to disguise the fact that our artificial grammars were based on letter sequences by randomly varying the colour and font in which the letters were presented. Even if some participants still realised that the rule was probably based on letter identity, the added perceptual complexity of the displays might still make it more difficult for them to develop explicit knowledge of the rule sets.

2. Methods

2.1. Participants

Forty paid student volunteers aged 18–28 years (M = 20.7) took part.

2.2. Apparatus and stimuli

The AGL task was programmed in E-prime 2.0 (Schneider, Eschman, & Zuccolotto, 2002a, 2002b) on a Pentium 4 PC and displayed by a 19″ monitor in a psychology testing room. Viewing distance was approximately 55 cm. All instructions and rating materials were presented on screen in Norwegian.


Fig. 2. Examples of test phase grammar strings, showing (a) three Grammar A letter strings, (b) three Grammar B strings, and (c) three ungrammatical strings.
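As a concrete illustration of the disguise manipulation shown in Fig. 2, the following minimal sketch pairs each letter of a string with a randomly chosen colour and font that carry no information about the grammar. It assumes the colour and font sets described in the stimuli section below; it is illustrative only, not the E-prime implementation used in the experiment.

```python
import random

COLOURS = ["red", "green", "blue"]
FONTS = ["Arial", "Times New Roman", "Courier New"]

def decorate(string):
    """Pair each letter with a randomly chosen colour and font, independently
    of the letter's identity and position, so that the surface appearance of
    the string carries no information about the grammar."""
    return [(letter, random.choice(COLOURS), random.choice(FONTS))
            for letter in string]

# Example: one grammatical training string from Appendix A, decorated at random.
print(decorate("XMXRTVM"))
```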

Stimuli were letter strings, from 5 to 9 characters long. The two sets of grammatical letter strings (Grammar A strings vs. Grammar B strings), as well as the set of ungrammatical strings, were identical to those employed by Dienes et al. (1995). The letter strings are presented in Appendix A. Each letter in the strings was written in one of three colours (red, green, or blue) and in one of three fonts (Arial, Times New Roman, or Courier New). These colours and fonts of individual letters within letter strings varied randomly from trial to trial (with no additional constraints) and were thus completely unrelated to grammar structure (see Fig. 2).

2.3. Procedure

2.3.1. Training phase
On each trial one letter string was presented for 5 s. Each participant was trained on two different grammars in two separate blocks. Within each of the blocks, 32 letter strings from one grammar were presented three times each in random order. The order in which grammars were trained (A first or B first) was counterbalanced across participants. The interval between trials was 1 s (blank screen). To remind participants which of the two training phases they were currently in, the words "Part 1" remained at the top of the screen throughout the first training block, and the letter string was surrounded by a thin, black, solid line which also remained on the screen throughout the entire block. Similarly, "Part 2" was always present in the upper screen throughout the second training block, together with a thin, black, dotted line surrounding the stimulus area. Participants were instructed to silently read each letter string repeatedly for as long as it was presented on the screen.

2.3.2. Test phase
At the start of the test phase participants were informed that each letter string in part 1 of the training phase had followed one complex rule, and that each letter string in part 2 had followed a different complex rule. The test phase consisted of one of two types of block. Twenty participants received a pure block of trials, where the target grammar was either the grammar that had been presented first or second during the training phase. This was counterbalanced across participants. The pure-block condition used a conceptually similar procedure to Dienes et al. (1995) and Wan et al. (2008). The other 20 participants received a mixed block of trials where the target grammar varied randomly from trial to trial (half A and half B). Each block consisted of 60 test trials.

On all trials three novel letter strings were presented simultaneously – one grammar A item, one grammar B item, and one ungrammatical item. The three strings were presented in a vertical column and labelled A, B and C from top to bottom. Three vertically arranged response keys on a standard keyboard (keys 8, 5, and 2 on the keyboard number pad) were labelled correspondingly. Participants were to indicate which of the three letter strings they thought followed the target grammar by pressing the spatially corresponding response key.

On each trial of a pure block, the instruction was to identify the stimulus that followed either the "Part 1" training phase rule or the "Part 2" training phase rule. To help participants remember which rule to apply, the stimulus area was surrounded by either a solid or dotted line, and the words "Rule 1?" or "Rule 2?" remained in the upper screen throughout the entire block.
In the mixed block, whether the participant had to classify according to their "Part 1" or "Part 2" training phase rule varied randomly from trial to trial. To indicate which classification rule had to be used, the text "Rule 1?" or "Rule 2?" appeared in the upper screen simultaneously with the three stimuli, and the stimulus area was surrounded by a corresponding solid or dotted line.

After each classification response, participants were asked to indicate how confident they felt that their response was correct. They were to press a right key (key 6 on the keyboard number pad) if they felt "more confident" and a left key (key 4) if they felt "less confident". To reduce the expression of response bias they were instructed to use the two response alternatives approximately equally often (Kunimoto, Miller, & Pashler, 2001).

2.3.3. Questionnaire phase
After the experiment, participants received a questionnaire where they were asked to indicate by multiple choice which of the following descriptions best reflected the nature of the rule sets: (a) the ordering and/or combination of colours, (b) the ordering and/or combination of fonts, (c) the ordering and/or combination of letters, (d) the ordering and/or combination of colours and fonts, (e) the ordering and/or combination of fonts and letters, and (f) the ordering and/or combination of letters and colours. The response alternatives were listed on the response sheet in three different orders, which were counterbalanced across participants.
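To summarise the two test-block structures, here is a minimal sketch of how the target-grammar sequence for a pure block and a mixed block could be constructed. The function names are illustrative assumptions; this is not the E-prime script used in the experiment.

```python
import random

def make_pure_block(target_grammar, n_trials=60):
    """Every trial in the block probes the same target grammar ('A' or 'B')."""
    return [target_grammar] * n_trials

def make_mixed_block(n_trials=60):
    """Half the trials probe grammar A and half grammar B, in random order."""
    targets = ["A"] * (n_trials // 2) + ["B"] * (n_trials // 2)
    random.shuffle(targets)
    return targets

print(make_pure_block("A")[:5])   # ['A', 'A', 'A', 'A', 'A']
print(make_mixed_block()[:10])    # e.g. ['B', 'A', 'A', 'B', ...]
```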

3. Results

An initial set of ANOVAs showed that the order in which the two grammars had been presented in the training phase had no main effect on any of the dependent variables below, and did not interact with test condition. Similarly, a set of ANOVAs conducted in the pure-block condition showed no influence on any dependent variable of whether the target grammar during the test phase corresponded to the first or the second training grammar, or whether the test grammar in the pure-block condition was Grammar A or Grammar B. Order of training grammars in both conditions, and allocation of test grammar in the pure-block condition, are therefore omitted from the analyses reported below.

Classification accuracy, i.e., the tendency to select the target grammar, was significantly above chance level (33.3%) both in the pure-block condition [M = 46.3%, SE = 4.52, t(19) = 2.87, p (two-tailed) < .01, r² = .30] and in the mixed-block condition [M = 45.4%, SE = 2.85, t(19) = 4.25, p (two-tailed) < .001, r² = .49]. The tendency to select the nontarget grammar was nearly significantly below chance level in the pure-block condition [M = 27.8%, SE = 3.03, t(19) = 1.84, p (two-tailed) < .08, r² = .15] and significantly so in the mixed-block condition [M = 25.8%, SE = 2.40, t(19) = 3.16, p (two-tailed) < .01, r² = .34]. The tendency to select ungrammatical items was significantly below chance both in the pure-block condition [M = 26.0%, SE = 2.13, t(19) = 3.44, p (two-tailed) < .01, r² = .38] and in the mixed-block condition [M = 28.8%, SE = 1.83, t(19) = 2.46, p (two-tailed) < .05, r² = .24].

To capture a participant's ability to pick the target grammar rather than the nontarget grammar we calculated what is known as a strategic score (Dienes et al., 1995). We initially calculated this score as [(number of consistent selections − number of inconsistent selections)/(total number of consistent + inconsistent selections)], where consistent selections refer to those trials where the target grammar is correctly chosen and inconsistent selections refer to those trials where the nontarget grammar is chosen. Chance level (i.e., an equal tendency to select either grammar) is zero. Participants were able to strategically control which of two learned grammars to apply. This was shown by above-chance strategic scores in both the pure-block condition [M = 0.22, SE = 0.09, t(19) = 2.52, p (two-tailed) < .05, r² = .25] and the mixed-block condition [M = 0.27, SE = 0.07, t(19) = 4.10, p (two-tailed) < .001, r² = .47]. There was no difference between the pure-block and the mixed-block conditions [t(38) = 0.49, p (two-tailed) = .63, r² = .01].

Note that our calculation of strategic scores differs slightly from the method of Dienes et al. (1995, Experiments 2 and 3) and Wan et al. (2008), for whom the denominator in the above formula was the total number of trials, including trials where nongrammatical strings were chosen. Our own method of calculation more directly reflects the tendency to select the correct rather than incorrect grammar, given that a grammatical string is chosen in the first place. However, it is more susceptible to distortion from participants who choose a large proportion of nongrammatical strings and who could generate extreme strategic scores based on a very small and unrepresentative number of trials. We therefore also calculated strategic scores using the method of Dienes et al. (1995) and Wan et al. (2008).
Strategic scores were still significantly above chance in both the pure-block condition [M = 0.19, SE = 0.07, t(19) = 2.50, p (two-tailed) < .05, r² = .25] and the mixed-block condition [M = 0.20, SE = 0.05, t(19) = 3.98, p (two-tailed) < .001, r² = .45]. There was also still no advantage for the pure-block condition over the mixed-block condition [t(38) = 0.13, p (two-tailed) = .90, r² < .01].

The obligatory score captures a participant's tendency to choose the nontarget grammar rather than the ungrammatical string on trials where the target grammar is not chosen; i.e., to be attracted to a grammatical string rather than an ungrammatical string, even if it is the wrong grammar. Following Dienes et al. (1995) and Wan et al. (2008) we calculated this as the number of inconsistent grammar selections divided by the total number of inconsistent and ungrammatical selections. Chance level (i.e., an equal tendency, on incorrect trials, to select inconsistent and ungrammatical strings) is 0.5. On incorrect trials, participants in both conditions were equally likely to pick the inconsistent grammar as the ungrammatical letter string. Obligatory scores did not differ from chance in the pure-block condition [M = 0.48, SE = 0.04, t(19) = 0.69, p (two-tailed) = .50, r² = .02] or in the mixed-block condition [M = 0.46, SE = 0.03, t(19) = 1.3, p (two-tailed) = .20, r² = .08]. Scores in the two conditions did not differ [t(38) = 0.36, p (two-tailed) = .72, r² < .01].
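To make the scoring explicit, the following minimal sketch implements both strategic-score formulas and the obligatory score, assuming each trial's selection has been coded as "consistent", "inconsistent", or "ungrammatical". The coding labels and the example counts are illustrative, not data from the experiment.

```python
from typing import List

def strategic_score(selections: List[str]) -> float:
    """Strategic score as calculated here: (consistent - inconsistent) /
    (consistent + inconsistent). Chance = 0."""
    c = selections.count("consistent")      # target grammar chosen
    i = selections.count("inconsistent")    # nontarget grammar chosen
    return (c - i) / (c + i) if (c + i) else 0.0

def strategic_score_dienes(selections: List[str]) -> float:
    """Variant of Dienes et al. (1995) / Wan et al. (2008): the denominator is
    the total number of trials, including ungrammatical selections."""
    c = selections.count("consistent")
    i = selections.count("inconsistent")
    return (c - i) / len(selections) if selections else 0.0

def obligatory_score(selections: List[str]) -> float:
    """On trials where the target grammar was NOT chosen, the proportion of
    nontarget-grammar (inconsistent) choices. Chance = 0.5."""
    i = selections.count("inconsistent")
    u = selections.count("ungrammatical")
    return i / (i + u) if (i + u) else 0.5

# Hypothetical example: 60 test trials for one participant.
trials = ["consistent"] * 28 + ["inconsistent"] * 15 + ["ungrammatical"] * 17
print(strategic_score(trials))         # (28 - 15) / 43 ≈ 0.30
print(strategic_score_dienes(trials))  # (28 - 15) / 60 ≈ 0.22
print(obligatory_score(trials))        # 15 / 32 ≈ 0.47
```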


Fig. 3. Scatterplot of the regression of strategic scores (Y) on Type II d′ scores (X) with the regression line shown.

The tendency to express higher confidence in correct responses (hits) than incorrect responses (false alarms) was expressed as Type II d′, derived from the signal detection theory (SDT) statistic d′ (Macmillan & Creelman, 1991). A positive d′ indicates an ability to discriminate metacognitively between correct and incorrect responses. This measure of discrimination ability reduces the influence of response bias, although Evans and Azzopardi (2007) have warned that it should not be conceived of as totally bias free. A d′ of zero would indicate that confidence was unrelated to accuracy. A trend for the distribution of d′ scores to be above chance in the mixed-block condition failed to reach significance [M = 0.23, SE = 0.12, t(19) = 1.88, p (two-tailed) = .08, r² = .16], and d′ was not significantly above chance in the pure-block condition [M = 0.15, SE = 0.12, t(19) = 1.30, p (two-tailed) = .21, r² = .08]. However, d′ scores did not differ significantly in the two conditions [t(38) = 0.47, p (two-tailed) = .64, r² = .01] and overall, pooled across both conditions, d′ was above chance [M = 0.19, SE = 0.08, t(39) = 2.28, p (two-tailed) < .05, r² = .12].

Pooling across both experimental conditions, 8/40 participants (four in the pure-block condition and four in the mixed-block condition) were classified as "unaware" of the nature of the sequence rule because they did not select a response alternative that involved letters. Of these participants, one attributed the rule to colours only, and seven to a combination of colours and fonts. Of the remaining participants, seven were classified as potentially "fully aware" of the nature of the sequence rule because they reported that the rule was related to letters only. The remaining 25 participants were classified as "partially aware" of the nature of the rule. Of these participants, 15 reported that the rule was related to colours and letters, and 10 that it was related to fonts and letters.

Strategic scores pooled across condition were above chance even among participants who claimed to be unaware of the nature of the rule [M = 0.17, SE = 0.07, t(7) = 2.45, p (two-tailed) < .05, r² = .46]. This was also the case when scored using the method of Dienes et al. (1995) and Wan et al. (2008) [M = 0.12, SE = 0.05, t(7) = 2.46, p (two-tailed) < .05, r² = .46]. Obligatory scores were not different from chance level [M = 0.49, SE = 0.03, t(7) = 0.43, p = .68, r² = .03], and neither were confidence rating d′ scores [M = 0.08, SE = 0.18, t(7) = 0.46, p = .66, r² = .03].

As an additional check on whether strategic control occurred in the absence of explicit rule knowledge, we calculated whether the linear regression of strategic score on Type II d′ score showed a significant positive intercept – i.e., whether there was evidence that strategic score still showed positive values as the ability to give veridical confidence ratings fell to zero¹. Because there were no differences between the two conditions on either strategic scores or d′ scores, the two conditions were pooled. The regression of strategic scores on d′ scores was significant [Adjusted R² = 0.28, F(1, 38) = 16.15, p < .001]. Strategic scores (Y) could be predicted from d′ scores (X) by the formula [Y = .18 + .36X] (see Fig. 3). Both the slope of the regression line and the intercept were significantly different from zero (p < .001)²,³. The same analyses were repeated for the mixed-block condition only.
The regression of strategic scores on d′ scores was still significant [Adjusted R² = 0.36, F(1, 18) = 11.47, p < .01], and the slope of the regression line and the intercept [Y = .19 + .34X] were again significantly different from zero (p < .01)⁴. The results remained significant when analyses were repeated using the strategic scores formula of Dienes et al. (1995) and Wan et al. (2008).

¹ We thank one of the reviewers for suggesting this analysis.
² To check for the possible influence of outliers on this regression analysis we checked the standardised residuals. No participants showed residuals > 2 except for one person in the pure-block condition who showed a residual of 3.07. Repeating the regression analysis with this participant removed still gave a significant regression and intercept.
³ It should be noted that this method assumes that the Type II d′ scores were perfectly measured. Noise in measurement of the X variable in a regression flattens the regression line about the mean, tending to increase the intercept. However, as the zero point for Type II d′ fell squarely in the body of the data, this assumption of regression is unlikely to account for the observed results.
⁴ Repeating the regression analysis after the removal of one outlier who showed a standardised residual of 2.26 still gave a significant regression and intercept.
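As an illustration of these two analyses, the following minimal Python sketch computes Type II d′ from binary confidence ratings and response correctness, and tests the intercept of a regression of strategic scores on d′ scores. The log-linear correction for extreme rates, the variable names, and the simulated data are illustrative assumptions rather than the authors' analysis procedure.

```python
import numpy as np
from scipy.stats import norm, t as t_dist

def type2_dprime(correct, high_conf):
    """Type II d': z(hit rate) - z(false-alarm rate), where a 'hit' is a
    high-confidence rating on a correct trial and a 'false alarm' is a
    high-confidence rating on an incorrect trial. Adding 0.5 to each cell
    avoids infinite z-scores when a rate is 0 or 1 (an assumed correction)."""
    correct = np.asarray(correct, dtype=bool)
    high_conf = np.asarray(high_conf, dtype=bool)
    hits = np.sum(correct & high_conf) + 0.5
    misses = np.sum(correct & ~high_conf) + 0.5
    fas = np.sum(~correct & high_conf) + 0.5
    crs = np.sum(~correct & ~high_conf) + 0.5
    return norm.ppf(hits / (hits + misses)) - norm.ppf(fas / (fas + crs))

def regression_with_intercept_test(x, y):
    """Ordinary least-squares regression of y on x, returning the slope, the
    intercept, and the two-tailed p-value for the intercept."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    X = np.column_stack([np.ones(n), x])           # design matrix [1, x]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)               # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)          # covariance of estimates
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    p_intercept = 2 * t_dist.sf(abs(t_intercept), df=n - 2)
    return beta[1], beta[0], p_intercept

# Type II d' for one hypothetical participant's eight trials.
print(type2_dprime([1, 1, 0, 1, 0, 1, 1, 0], [1, 0, 0, 1, 0, 1, 0, 1]))

# Hypothetical data for 40 participants.
rng = np.random.default_rng(0)
dprimes = rng.normal(0.2, 0.5, 40)
strategic = 0.18 + 0.36 * dprimes + rng.normal(0, 0.2, 40)
slope, intercept, p_int = regression_with_intercept_test(dprimes, strategic)
print(f"Y = {intercept:.2f} + {slope:.2f}X, intercept p = {p_int:.3f}")
```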


4. Discussion

The ability of participants in an artificial grammar learning experiment to apply one of two sets of learned grammar rules in a test phase was compared between conditions where the classification rule was either the same on each test trial (i.e., a pure-block condition), as in previous studies (Dienes et al., 1995; Wan et al., 2008), or alternated randomly from trial to trial (i.e., a mixed-block condition). The initial research question was whether participants would still be able to selectively apply two grammars when the test phase involved a mixed-block procedure, which aimed to provide a more stringent measure of strategic control over grammar knowledge than the pure-block procedure used in previous studies. Strategic control over acquired knowledge was indeed found in the mixed-block condition. This was shown by above-chance strategic scores, indicating an ability to select the letter string that followed the target grammar rather than the inconsistent grammar, and by obligatory scores that did not exceed chance, indicating that participants were no more likely to select letter strings that followed the inconsistent grammar than to select ungrammatical strings. By showing that participants can select the target grammar even when it alternates randomly between two possible grammars on a trial-by-trial basis, our data provide a more robust measure of strategic control than previous studies where the target grammar was held constant over whole blocks of trials.

A related aim of our study was to establish whether strategic control can occur without explicit knowledge of the grammars, even under the more exacting conditions of the mixed-block procedure. The fact that confidence ratings showed an overall tendency to be related to classification accuracy across the pure-block and mixed-block conditions initially cautioned against this conclusion. Note that a similar relationship between confidence and classification accuracy has been reported in some, but not all, previous experiments of this kind (Dienes et al., 1995). Our own finding of a correlation between confidence and classification accuracy was compatible with two possibilities: (1) it might reflect that strategic control was mediated by explicit knowledge of the grammars; (2) it might reflect that strategic control was mediated by conscious feelings or intuitions that allowed grammars to be distinguished, and which can be labelled as conscious judgement knowledge (Dienes & Scott, 2005). Our overall confidence rating data, taken on their own, therefore argued against fully nonconscious grammar knowledge and prevented us from ruling out the possibility of explicit grammar knowledge.

Given a global workspace theory of consciousness (e.g., Baars, 1988), another aspect of our data also cautioned against the interpretation that acquired knowledge was fully nonconscious, namely that the level of strategic control did not appear to differ in the mixed-block and pure-block conditions. If performance was mediated purely by unconscious orientation responses, one would expect to see a behavioural disadvantage in the mixed-block condition, where flexible, trial-by-trial alternation between the two grammars is required.
However, there were also aspects of the data that supported strategic control in the absence of full explicit knowledge. Unlike traditional AGL experiments that use letter strings written in black font, our stimuli contained random variation in letter colour and font. This relatively complex stimulus environment was introduced to reduce the likelihood of deliberate hypothesis-testing and of developing correct hypotheses about the nature of the rule. The manipulation also made it possible to explore whether strategic control was limited to those participants who expressed rule awareness (Norman et al., 2007). One indication that this procedural modification may have helped to limit rule awareness is that less than a fourth of the participants correctly attributed the rule solely to the letter modality. The majority of participants instead attributed the rule fully or partly to irrelevant aspects of the stimulus displays (although it is possible that for some of these participants the attribution was prompted by demand characteristics of the response choices).

Importantly, strategic control was found in the subgroup of eight participants who expressed no awareness that the rule was related to the ordering and/or combination of letters, but instead rated the rule as being related to the stimulus dimensions of colour or font. This indicated that strategic control was not limited to participants who might have learned the rules explicitly. In addition, it indicated that strategic control was unlikely to be attributable solely to participants using explicit heuristics related to uncontrolled differences between the particular letter strings constructed from the two grammars, for instance differences in simple letter frequencies.

We also found that confidence ratings for this subgroup of participants did not reflect the ability to discriminate between grammars; i.e., participants showed no correlation between their confidence ratings and their strategic scores. Since it is reasonable to assume that explicit knowledge will enable predictive confidence ratings (at least in an environment where predictive confidence ratings are shown generally to occur), this absence of predictive ratings further supports the suggestion that the "unaware subgroup" had no explicit rule knowledge. Although null effects of this kind in small groups of participants are unreliable and prone to Type II error, the results were convergent with those of a regression analysis of all participants, which found that strategic scores were still significantly above chance when the accuracy of confidence ratings fell to zero, even in the mixed-block condition. Therefore both the design and several convergent aspects of the results support the conclusion that a demanding degree of strategic control was occurring in the absence of fully explicit learning.



Previous studies which argued for strategic control without explicit learning have relied on participants making relatively complex metacognitive appraisals of their own performance. These studies have claimed that strategic control is found even on "guess" trials (Dienes et al., 1995) and on trials attributed by the participant to "implicit" decision strategies (Wan et al., 2008). A potential concern with these approaches, when used on their own, is that they rely on the absolute accuracy of participants' ratings (see Norman & Price, 2010). Especially under conditions where an overall correlation between confidence ratings and performance has been observed, it is possible that a rating such as "guess" is just an expression of low confidence. Results from our own procedure are convergent with those of Dienes et al. (1995) and Wan et al. (2008) but do not rely on participants making such complex metacognitive appraisals of their performance on each trial. Instead our procedure involved confidence ratings on a simple binary scale, which Tunney and Shanks (2003) argued is a more sensitive measure of phenomenal states than complex continuous scales such as those used by Dienes et al. (1995), in conjunction with a more global post-trial evaluation of the nature of the rule. Obviously different approaches have their merits and weaknesses, and convergence between approaches provides the best evidence for any finding. We would therefore argue that, in addition to providing one of the most stringent tests to date that detailed strategic control is possible during AGL tasks, our data make an important contribution to the debate over whether strategic control is only possible when learning is explicit. Even though our findings are convergent in this respect with those of previous implicit learning experiments (Dienes et al., 1995; Fu et al., 2010; Norman et al., 2007; Wan et al., 2008), our conclusion of strategic control without explicit knowledge is based on small effects and on a relatively small subgroup of unaware participants. The approach we have used in the current paper should therefore be developed and extended in future research.

5. Conclusions

In response to our concerns over existing measures of strategic control in AGL, we have introduced the more stringent mixed-block testing procedure in which the target grammar varies from trial to trial. Our data suggest that trial-by-trial strategic control over AGL is possible under conditions where performance falls short of what we might expect from fully explicit learning. One line of evidence against fully explicit learning is that even though less than a fourth of the participants reported full awareness of the letter-based nature of the grammar rule, participants overall showed significant strategic control and predictive confidence ratings. From a global workspace perspective on consciousness (Baars, 1988), the very fact that strategic control was occurring is an argument that performance was in some sense consciously mediated. Similarly, a relationship between confidence and accuracy would be seen as indicating consciousness from the perspective of higher-order thought theory (Dienes et al., 1995).
One interpretation is therefore that strategic control and predictive confidence – in the absence of full awareness of the nature of the rule – occurred because participants oriented to target strings on the basis of conscious metacognitive feelings that reflected unconscious knowledge of the grammars (Norman, Price, & Duff, 2010; Price & Norman, 2008, 2009). Using another terminology, participants may have demonstrated conscious judgement knowledge without full conscious structural knowledge of the grammars (Dienes & Scott, 2005). Performance may then have been mediated in an "intuitive" manner that was neither fully conscious nor fully unconscious, similar to what has been proposed for a variant of the SRT task in which strategic control over learned sequences was found even among participants who were unaware that the rule was based on sequences of locations (Norman et al., 2007).

Another line of evidence against fully explicit learning is that strategic control was found even in our subgroup of rule-unaware participants. However, in this subgroup strategic control was not associated with predictive confidence ratings. Similarly, a regression analysis of all participants showed a degree of strategic control when confidence was unrelated to classification accuracy. One interpretation is that, for some participants, unconscious structural knowledge was strategically controlled in the absence of conscious judgement knowledge (Dienes et al., 1995). However, this sits at odds with the general supposition, consistent with a global workspace theory of consciousness (Baars, 1988), that strategic control over grammar knowledge reflects some degree of conscious global access to that knowledge. Another possibility is therefore that even these participants oriented to target strings on the basis of conscious metacognitive feelings reflecting unconscious structural knowledge of the grammars. A potential problem with this suggestion is that intuitive feelings might be expected to support veridical confidence ratings and, as pointed out above, we found some evidence for strategic control when these ratings were not predictive. However, it is possible that some participants were simply unmotivated or unskilled in making confidence ratings. Speculatively, it might alternatively be possible that participants can experience cognitive feelings of grammar category that are able to support strategic control but are too introspectively impenetrable to support detailed confidence ratings (see Price, 2002, for a similar suggestion). We would therefore argue that even these participants may have responded on the basis of conscious, "intuitive" feelings that reflected unconscious grammar knowledge.

In general we suggest that conscious feelings – of rightness or wrongness, of familiarity, or of preference for the different response alternatives (Norman et al., 2006, 2007; Scott & Dienes, 2008) – may play an important role in mediating strategic control. However, further studies need to disambiguate this possibility from rival explanations in terms of fragmentary explicit knowledge of the learned grammars (Johnstone & Shanks, 2001; Perruchet & Pacteau, 1990; Redington & Chater, 1996) and to address more precisely the phenomenology and neuro-cognitive basis of the signals that participants are responding to in these paradigms.


Acknowledgment

This research was supported by a postdoctoral grant (911274) to the first author from the Western Norway Regional Health Authority (Helse Vest). We thank the reviewers for their helpful suggestions.

Appendix A

Training strings: Grammar A
XMXRTTVTM VTTVTRVM XMMXRVM VTVTM XXRVTM VTTTVTM XXRVTRTVM VVTRTTVTM XMXRTTTVM XMMXRTVM VTVTRTVTM VTTTVTRVM XXRVTRVM VTVTRVM XXRTTTVTM VTTVTM VVTRTTVTM XMMMXRVTM XMXRVTM XXRTTVTM VVTRTVM XMXRVTRVM XMMMXRTVM VVTRVM VTTVTRVTM XMXRTVM VVTRVTRVM XMMXM VTVTRTVM XXRTTVM XMMXRTVTM XXRTVTRVM

Training strings: Grammar B
XMTRRRM VVTRXRM XMTRM XXRRM XMVRMTM VVTTRXRRM XMVRMTRM VVTTTRMTM XMVRMVRXM VVTTRMTRM VVRMVRXRM VTRRM VVTRXRRM XMVTRMTRM XMVRMTRRM VVRXRRRM XMVTRXRRM VTRRRM XMVRXM XMVTRXM VVRMTRRM XMVRXRRM VVTRMTM VVRXRM XMVTTRMTM VVRMVTRXM VVRMTRM VVTRMVRXM VVTTTRXRM VVTRMTRRM VVTTRXRM XMVTTRXM

Test strings: Grammar A
XXRTVTM VVTRVTM XMXRTVTM XMMXRVTM XXRVM XXRVTRVTM XXRTVM VTVTRTTVM XMXRTTVM VTTVM VVTRTTTVM XXRTTTVM VTVTRVTM VTTVTRTVM XMMMXRVM VVTRTVTM VTTTVM XMXRVM XMMXRTTVM XMMMXM

Test strings: Grammar B
XMVRXRM XMVRXRRRM VVTRMTRM XMVTRMTM VVRXRRM XMTRRM VVTTRXM VVRMVRMTM XMVTRXRM VVTRXM VVTTRMTM XMVTTRXRM XMVTTTRXM VVTTTRXM VVRMTRRM XXRRRM VVTRXRRRM VVRMTM VVRMVRXM VVRXM

Test strings: Ungrammatical
XMTVVXRM VVTRMM VTXRM XMTVM XMXVVXM VVRMTMVM VVTRMXMM XMVXRMVTM XMTXM VTRTVTVM XXVTXM XMRMXM XMMXTRMXM VTVMMXTVM VVMVTM XXRTRMN XXVTVXM VVXMM VVTTXXTRM VTRRVVM

References

Baars, B. J. (1988). A cognitive theory of consciousness. New York, NY: Cambridge University Press.
Baars, B. J. (2002). The conscious access hypothesis: Origins and recent evidence. Trends in Cognitive Sciences, 6(1), 47–52.
Destrebecqz, A., & Cleeremans, A. (2001). Can sequence learning be implicit? New evidence with the process dissociation procedure. Psychonomic Bulletin & Review, 8(2), 343–350.
Destrebecqz, A., & Cleeremans, A. (2003). Temporal effects in sequence learning. In L. Jiménez (Ed.), Attention and implicit learning. Advances in consciousness research (Vol. 48, pp. 181–213). Amsterdam, Netherlands: John Benjamins Publishing Company.
Dienes, Z., Altmann, G. T. M., Kwan, L., & Goode, A. (1995). Unconscious knowledge of artificial grammars is applied strategically. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(5), 1322–1338.
Dienes, Z., & Scott, R. B. (2005). Measuring unconscious knowledge: Distinguishing structural knowledge and judgment knowledge. Psychological Research, 69(5–6), 338–351.
Evans, S., & Azzopardi, P. (2007). Evaluation of a "bias-free" measure of awareness. Spatial Vision, 20(1–2), 61–77.
Fu, Q., Dienes, Z., & Fu, X. (2010). Can unconscious knowledge allow control in sequence learning? Consciousness and Cognition, 19(1), 462–474.
Goschke, T. (1998). Implicit learning of perceptual and motor sequences: Evidence for independent learning systems. In M. A. Stadler & P. A. Frensch (Eds.), Handbook of implicit learning (pp. 401–444). Thousand Oaks, CA: Sage Publications.
Higham, P. A., Vokey, J. R., & Pritchard, J. L. (2000). Beyond dissociation logic: Evidence for controlled and automatic influences in artificial grammar learning. Journal of Experimental Psychology: General, 129(4), 457–470.
Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language, 30(5), 513–541.
Jacoby, L. L. (1994). Measuring recollection: Strategic versus automatic influences of associative context. In C. Umilta & M. Moscovitch (Eds.), Attention and performance 15: Conscious and nonconscious processing (pp. 661–679). Cambridge, MA: The MIT Press.


Jacoby, L. L., Toth, J. P., & Yonelinas, A. P. (1993). Separating conscious and unconscious influences of memory: Measuring recollection. Journal of Experimental Psychology: General, 122(2), 139–154.
Johnstone, T., & Shanks, D. R. (2001). Abstractionist and processing accounts of implicit learning. Cognitive Psychology, 42, 61–112.
Kunimoto, C., Miller, J., & Pashler, H. (2001). Confidence and accuracy of near-threshold discrimination responses. Consciousness and Cognition, 10, 294–340.
Macmillan, N. A., & Creelman, C. D. (1991). Detection theory: A user's guide. Cambridge, UK: Cambridge University Press.
Nissen, M. J., & Bullemer, P. (1987). Attentional requirements of learning: Evidence from performance measures. Cognitive Psychology, 19, 1–32.
Norman, E., & Price, M. C. (2010). Measuring "intuition" in the SRT generation task. Consciousness and Cognition, 19, 475–477.
Norman, E., Price, M. C., & Duff, S. C. (2006). Fringe consciousness in sequence learning: The influence of individual differences. Consciousness and Cognition, 15(4), 723–760.
Norman, E., Price, M. C., & Duff, S. C. (2010). Fringe consciousness: A useful framework for clarifying the nature of experience-based feelings. In A. Efklides & P. Misailidi (Eds.), Trends and prospects in metacognition research (pp. 63–80). New York, NY: Springer.
Norman, E., Price, M. C., Duff, S. C., & Mentzoni, R. A. (2007). Gradations of awareness in a modified sequence learning task. Consciousness and Cognition, 16, 809–837.
Perruchet, P., & Pacteau, C. (1990). Synthetic grammar learning: Implicit rule abstraction or explicit fragmentary knowledge? Journal of Experimental Psychology: General, 119(3), 264–275.
Price, M. C. (2002). Measuring the fringes of experience. Psyche, 8(16).
Price, M. C., & Norman, E. (2008). Intuitive decisions on the fringes of consciousness: Are they conscious and does it matter? Judgment and Decision Making, 3(1), 28–41.
Price, M. C., & Norman, E. (2009). Cognitive feelings. In T. Bayne, A. Cleeremans, & P. Wilken (Eds.), Oxford companion to consciousness (pp. 141–144). Oxford: Oxford University Press.
Reber, A. S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior, 6, 855–863.
Reber, A. S. (1969). Transfer of syntactic structure in synthetic languages. Journal of Experimental Psychology, 81(1), 115–119.
Redington, M., & Chater, N. (1996). Transfer in artificial grammar learning: A reevaluation. Journal of Experimental Psychology: General, 125(2), 123–138.
Schneider, W., Eschman, A., & Zuccolotto, A. (2002a). E-prime reference guide. Pittsburgh, PA: Psychology Software Tools Inc.
Schneider, W., Eschman, A., & Zuccolotto, A. (2002b). E-prime user's guide. Pittsburgh, PA: Psychology Software Tools Inc.
Scott, R. B., & Dienes, Z. (2008). The conscious, the unconscious, and familiarity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(5), 1264–1288.
Tunney, R. J., & Shanks, D. R. (2003). Subjective measures of awareness in implicit cognition. Memory & Cognition, 31(7), 1060–1071.
Wan, L., Dienes, Z., & Fu, X. (2008). Intentional control based on familiarity in artificial grammar learning. Consciousness and Cognition, 17(4), 1209–1218.
Wilkinson, L., & Shanks, D. R. (2004). Intentional control and implicit sequence learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 354–369.