Brain & Language 118 (2011) 53–71
The neural basis of the right visual field advantage in reading: An MEG analysis using virtual electrodes

Laura Barca a,b, Piers Cornelissen a, Michael Simpson c, Uzma Urooj a, Will Woods c, Andrew W. Ellis a

a Department of Psychology, University of York, York YO10 5DD, United Kingdom
b Pediatric Rehabilitation Department, Children's Hospital Bambino Gesù – IRCCS, Via Torre di Palidoro 00050, Passoscuro Fiumicino, Rome, Italy
c York Neuroimaging Centre (YNiC), University of York, York YO10 5DG, United Kingdom
Article history: Available online 6 October 2010

Keywords: Reading; Word recognition; Hemispheres; Right visual field advantage; Magnetoencephalography; MEG; Visual word form area
Abstract

Right-handed participants respond more quickly and more accurately to written words presented in the right visual field (RVF) than in the left visual field (LVF). Previous attempts to identify the neural basis of the RVF advantage have had limited success. Experiment 1 was a behavioral study of lateralized word naming which established that the words later used in Experiment 2 showed a reliable RVF advantage which persisted over multiple repetitions. In Experiment 2, the same words were interleaved with scrambled words and presented in the LVF and RVF to right-handed participants seated in an MEG scanner. Participants read the real words silently and responded "pattern" covertly to the scrambled words. A beamformer analysis created statistical maps of changes in oscillatory power within the brain. Those whole-brain maps revealed activation of the reading network by both LVF and RVF words. Virtual electrode analyses used the same beamforming method to reconstruct the responses to real and scrambled words in three regions of interest in both hemispheres. The middle occipital gyri showed faster and stronger responses to contralateral than to ipsilateral stimuli, with evidence of asymmetric channeling of information into the left hemisphere. The left mid fusiform gyrus at the site of the 'visual word form area' responded more strongly to RVF than to LVF words. Activity in speech-motor cortex was lateralized to the left hemisphere, and stronger to RVF than LVF words, which is interpreted as representing the proximal cause of the RVF advantage for naming written words.

© 2010 Elsevier Inc. All rights reserved.
1. Introduction

If right-handed participants fixate a central point on a screen and maintain their focus on that point, then familiar words presented briefly to the right of fixation are recognized better than words presented to the left of fixation. This is true whether the performance measure is the accuracy of reporting words after very brief presentations, the accuracy of indicating which of two letters occurred in a given position in stimulus words, or the speed with which words can be read aloud, distinguished from nonwords, or judged for their meaning (Banich, 2003; Bourne, 2006; Ellis, 2004; Ellis, Ansorge, & Lavidor, 2007a, 2007b; Ellis, Young, & Anderson, 1988; Jordan, Patching, & Milner, 2000; Lindell, Nicholls, & Castles, 2002). The existence of the right visual field (RVF) advantage for visual word recognition has been known for over 50 years (Bradshaw & Nettleton, 1983; Hellige, 1993; Young, Bion, & Ellis, 1980). During that time a number of theories have been proposed as to why it might exist. The most widely-held view has always been that the RVF advantage has to do with access to the
language-dominant hemisphere which, for most right-handers, is the left hemisphere (Knecht et al., 2000; Springer & Deutsch, 1997). The anatomical connections from the retina to the brain are such that visual information originating in the RVF projects initially to primary visual cortex in the left cerebral hemisphere, while visual information originating in the left visual field (LVF) projects initially to primary visual cortex in the right hemisphere. RVF information therefore enjoys relatively direct access to the language-dominant left hemisphere (in most people), while LVF information has to be transferred across the corpus callosum if it is to access left hemisphere language areas (Banich, 2003; Bourne, 2006; Bradshaw & Nettleton, 1983; Hellige, 1993). Alternative theories have proposed that the RVF advantage in languages like English may be linked to a lifetime’s experience of reading a script that runs from left to right, resulting in a perceptual or attentional bias towards words located to the right of fixation (e.g., Bryden & Mondor, 1991; Nazir, 2000, 2004; White, 1969). Strong evidence favoring language dominance over perceptual or attentional biases has recently been reported by Hunter and Brysbaert (2008). The participants in Hunter and Brysbaert’s (2008) study were all left-handed. As a group, left handers show a mixture of left hemisphere, right hemisphere and mixed language
dominance (Knecht et al., 2000). Hunter and Brysbaert (2008) tested their participants on lateralized object and word naming tasks and selected those participants who showed consistent LVF or RVF advantages on both tasks. The selected participants then performed a word production task in an fMRI scanner. The task involved generating as many words as they could think of that began with each of 10 different letters. Activation was measured in the left inferior prefrontal region (an area closely involved in speech production) and in its right hemisphere homologue. Those activations were contrasted with a condition in which the participants repeated a meaningless nonword. The results showed highly significant correlations between left-right asymmetries in fMRI responses during word generation and left-right visual field differences in object and word naming, such that participants with left hemisphere language dominance showed RVF advantages in the behavioral tasks while participants with right hemisphere language dominance showed LVF advantages. Given that the participants were all left-handed, and that they all grew up reading the same left-to-right language (English), this result points unequivocally in the direction of a structural account of visual field advantages linked to patterns of hemisphere dominance. The fact that left-handers display a range of different language dominance patterns (from left through bilateral to right hemisphere dominance) also helps to explain why left handers as a group show a reduced visual field advantage for word recognition compared with right handers, despite having had the same lifetime reading experiences (Bradshaw, 1980; Bub & Lewine, 1988; Schmuller & Goodman, 1979). A structural account of visual field advantages couched in terms of hemispheric dominance rather than perceptual or attentional biases can also explain why RVF advantages are observed in right-handed readers of languages such as Hebrew or Farsi that are read from right to left rather than from left to right (Faust, Kravetz, & Babkoff, 1993; Lavidor, Ellis, & Pansky, 2002; Melamed & Zaidel, 1993).

If the presence of an RVF advantage in visual word recognition is linked to left hemisphere language dominance, that leaves open a range of possibilities in terms of the lexical processing capabilities of the two hemispheres. Some theories have suggested that both hemispheres are capable of processing written words to some degree; for example, recognizing letter strings as familiar and accessing the meanings of at least some words (e.g., Marsolek & Deason, 2007; Shillcock, Ellison, & Monaghan, 2000). In order to explain the RVF advantage, such theories must either propose that in most right handers, the left hemisphere processes letter strings in a manner that is in some way better adapted to the task of reading (Marsolek, 2004; Marsolek & Deason, 2007; see Ellis et al., 2007a, 2007b, for discussion), or that the left hemisphere is specialized for specific aspects of language that are necessary for reading, such as phonological processing (Shillcock et al., 2000). Other theories propose that visual word recognition in reading is essentially a left hemisphere task, and that wherever a word appears in space, visual information about that word is channeled into the left hemisphere in order for it to be recognized, understood and pronounced (e.g., Cohen et al., 2000, 2002; Ellis, 2004; Whitney, 2001; Whitney & Cornelissen, 2008).
On the latter view, words presented in the LVF, which project first to visual cortex in the right cerebral hemisphere, must be transferred across to the left hemisphere via fibers that pass through the posterior section (splenium) of the corpus callosum (Ben-Shachar, Dougherty, & Wandell, 2007; Binder & Mohr, 1992). The RVF advantage might then be presumed to arise because interhemispheric transfer incurs a time delay and/or some loss of fidelity of information, particularly if words are displayed only briefly (Banich, 2003; Cohen et al., 2000, 2002; Ellis, 2004; Whitney, 2001; Zaidel, Clarke, & Suyenobu, 1990), though it is also possible that hemispheric differences in early processes of letter identification, or in the interaction
between earlier and later processing stages, could contribute towards the observed differences between RVF and LVF performance (Ellis et al., 2007a, 2007b, 2009).
1.1. The neural basis of the RVF advantage for visual word recognition

What light, if any, can functional brain imaging studies shed on the neural basis of the RVF advantage for visual word recognition? A currently influential theory developed by Cohen, Dehaene and colleagues proposes that an area of cortex lateral to the mid-portion of the left fusiform gyrus (also known as the occipitotemporal gyrus) plays a particularly important role in visual word recognition (see Dehaene, Cohen, Sigman, & Vinckier, 2005; McCandliss, Cohen, & Dehaene, 2003; Vinckier et al., 2007). Cohen, Dehaene and colleagues refer to this patch of left fusiform cortex as the 'visual word form area' (VWFA). Functional MRI studies which contrast the processing of written words with non-alphabetic stimuli such as fixation points, checkerboard patterns, or false fonts reliably find greater activation of the VWFA by words than by the non-alphabetic control stimuli (Jobard, Crivello, & Tzourio-Mazoyer, 2003; Mechelli, Gorno-Tempini, & Price, 2003). While activation typically peaks at a point with MNI co-ordinates around x = −43, y = −54, z = −12 (the proposed site of the VWFA), activation in the left hemisphere often extends several centimeters along the fusiform gyrus in the anterior–posterior direction. Dehaene et al. (2005) and Vinckier et al. (2007) have developed a hierarchical account of visual word recognition according to which the VWFA forms part of a ventral occipitotemporal processing stream in which neurons are tuned to progressively larger and more complex components of words (see also Tagamets, Novick, Chalmers, & Friedman, 2000). The proposed processing stream begins with the analysis of visual features in posterior occipital cortex, extends through the analysis of individual letter forms in lateral occipital and posterior fusiform regions (Dehaene et al., 2004; Tagamets et al., 2000), and progresses through two- and three-letter groupings (bigrams and trigrams) to whole word strings at the VWFA (see Glezer, Jiang, & Riesenhuber, 2009). That processing stream occupies at least 5 cm of the left fusiform gyrus and extends forwards towards semantic processing areas in anterior temporal regions (Vinckier et al., 2007). According to this view, the VWFA serves as a gateway that provides access to more anterior temporal and prefrontal regions concerned with semantic and phonological processing.

The notion that there is a region in the left mid fusiform gyrus that is specialized for visual word recognition, and which acts as a necessary gateway to further processing, has not gone uncriticized. For example, the left mid fusiform gyrus has been shown to be activated by images of familiar objects as well as familiar words (Ben-Shachar, Dougherty, Deutsch, & Wandell, 2007; Ellis, Burani, Izura, Bromiley, & Venneri, 2006; Price & Devlin, 2003; Starrfelt & Gerlach, 2007), leading to the suggestion that its function may extend beyond words to other classes of visual object (Devlin, Jamison, Gonnerman, & Matthews, 2006; Price & Devlin, 2003; Starrfelt & Gerlach, 2007).

Almost all of our current understanding of the neural processing of written words has been gained from studies in which words were presented centrally at fixation. Relatively little attention has been given to the neural basis of the RVF advantage for visual word recognition. In one of only three studies to have investigated the neural response to lateralized words using fMRI, Cohen et al.
(2000) recorded BOLD responses to high frequency nouns presented briefly in the LVF or RVF. The right-handed participants employed in that experiment had previously shown an RVF advantage for both speed and accuracy of word naming in a behavioral task. BOLD responses to words were compared against responses to fixation points. Extrastriate occipital areas (area V4) were identified
in both hemispheres whose response was greater to contralateral than to ipsilateral stimuli. Mid fusiform (VWFA) activity was only observed in the left hemisphere, irrespective of whether words appeared in the LVF or the RVF. In a separate session, EEG was used to record event-related potentials (ERPs) from the same participants to the same words. Consonant strings were employed as a comparison condition in this analysis. ERPs showed an early P1 response over left hemisphere electrodes that peaked sooner for RVF inputs (120 ms) than for LVF inputs (136 ms). These were followed by a sharp posterior negativity (N1) that peaked at 150–160 ms, was strongest at inferior temporal electrode sites, and was stronger to contralateral than to ipsilateral inputs, but showed no difference between words and consonant strings. A prolonged left temporal negativity was observed from 240 to 360 ms that was greater for words than for consonant strings, but was comparable in magnitude for LVF and RVF inputs. Cohen et al. (2000) proposed that for the first 150 ms or so, letter strings are processed by separate systems within each hemisphere that are devoted to the featural analysis of stimuli presented in the opposite visual field, and that processing then converges on the left mid fusiform gyrus by about 180–200 ms and is reflected in the N1 response (see also Mainy et al., 2008). Cohen et al. (2000) noted that they had expected that the ERP signature of left mid fusiform activation would be delayed for LVF words compared with RVF words because of the need to transfer the LVF words across the corpus callosum, but they were unable to detect any such delay.

A second study by Cohen et al. (2002) presented high frequency, imageable words to right-handed participants for 200 ms in the LVF or RVF. Blocks of trials involved presenting consecutive stimuli of the same type in the same visual field. Participants were instructed simply to 'pay attention' to the stimuli in all conditions while maintaining fixation on the central cross. The resulting BOLD responses were compared with responses elicited by consonant strings and checkerboard patterns. A number of activations were observed. As in their previous study, Cohen et al. (2002) found posterior sites in left and right occipital extrastriate cortex that showed stronger activation to stimuli in the contralateral than the ipsilateral visual field. The right hemisphere occipital site showed no difference in activation levels to words, consonant strings and checkerboards, but a subset of voxels within the left occipital site responded more strongly to words and consonant strings than to checkerboards. The left mid fusiform gyrus (VWFA) was activated more strongly by alphabetic stimuli than by checkerboard patterns, and more strongly at the peak voxel by words than by consonant strings. As in Cohen et al. (2000), there was no significant activation of the right mid fusiform gyrus by alphabetic stimuli. Direct comparison of the VWFA response to LVF and RVF words failed to show a significant difference, leading Cohen et al. (2002) to declare that the VWFA shows "invariance for spatial position". A difference between LVF and RVF inputs was, however, observed in a comparison of the VWFA response to alphabetic stimuli (words and consonant strings) relative to checkerboard patterns, with more of a difference for RVF than LVF inputs. Cohen et al. (2002, p.
1059) suggested that this difference "might be related to the behavioral RVF advantage in reading tasks" but did not elaborate further. The RVF advantage for alphabetic stimuli relative to checkerboard patterns was no stronger for real words than for consonant strings whereas in behavioral studies, RVF advantages have generally been found to be weaker for consonant strings than for words or pronounceable nonwords (e.g., Young, Ellis, & Bion, 1984). The pattern of results obtained in Cohen et al.'s (2002) Experiment 1 was repeated in a second, event-related fMRI experiment. A site defined as left Broca's area responded more strongly to alphabetic stimuli than to checkerboards, and more strongly to words than to consonant strings. The difference between alphabetic stimuli and checkerboards at that site was
significant for RVF but not for LVF stimuli. Sites around the left and right central (Rolandic) fissure associated with the control of facial and articulatory movements also responded more strongly to alphabetic stimuli than to checkerboards. A third study of fMRI responses to lateralized words by Ben-Shachar, Dougherty, Deutsch, et al. (2007; Expt. 4) presented four-letter words, object drawings and checkerboard patterns to right-handed participants in the LVF and RVF using 200 ms presentations. The explicit task for participants was to detect small color changes to the fixation point, encouraging sustained fixation. Processing of the words and object drawings was therefore incidental. A region of interest fMRI analysis focusing on the VWFA and its right hemisphere homologue found stronger responses to words and line drawings than to checkerboards in both hemispheres, with no significant difference between left and right mid fusiform gyri in the strength of the response to words and drawings (cf. Price & Devlin, 2003; Starrfelt & Gerlach, 2007). As in Cohen et al. (2000, 2002), posterior ventral cortex in both hemispheres was activated more strongly by contralateral than ipsilateral words (and objects). VWFA activity was not significantly greater for RVF than LVF stimuli. Starrfelt and Gerlach (2007) suggest that rather than being specific for visual word recognition, the left mid fusiform gyrus may be specialized for the integration of shape elements into more elaborate shape descriptions representing whole visual objects, of which written words form one class.

We would argue that the lack of any significant difference in the VWFA response to LVF and RVF words in the studies of Cohen et al. (2000, 2002) and Ben-Shachar, Dougherty, Deutsch, et al. (2007) is problematic. If the VWFA acts as a gateway to higher semantic and phonological processing for written words wherever they occur in visual space, then one would expect a VWFA response to both LVF and RVF words. But right-handed participants show faster, more accurate responses to RVF than LVF words in a wide variety of lexical processing tasks, including reading aloud and semantic decision; that is, for tasks that require access to either phonological or semantic representations (Ellis, 2004). If behavioral responses to lateralized words originating within anterior left temporal and/or prefrontal regions show a consistent RVF advantage, and if the VWFA serves as an interface between retinotopic visual processing in occipital areas and non-retinotopic processing in those left hemisphere regions, then we would expect the VWFA to reflect the RVF advantage, in terms of the speed of its response to words presented in the two visual fields, the strength of its response, or both.

Issues of statistical power certainly arise when it comes to asking why the three studies that have analyzed fMRI responses to lateralized words at the VWFA have failed to find significant differences between responses to LVF and RVF words. There were only seven participants in the final analyses of Cohen et al. (2002) and Ben-Shachar, Dougherty, Deutsch, et al. (2007), and five participants in the analyses reported by Cohen et al. (2000). There are indications in the published data of trends in the direction of stronger VWFA responses to RVF than LVF words, even if those trends were not significant across a small number of participants. For example, Fig. 3 of Cohen et al.
(2002) shows the percentage signal change in the VWFA and its right hemisphere homologue in response to LVF and RVF words and consonant strings for one participant in that study. That participant showed a distinct trend towards stronger responses to RVF than LVF words and consonants at the VWFA. Fig. 5B of Ben-Shachar, Dougherty, Deutsch, et al. (2007) also shows trends in the direction of stronger BOLD responses at the VWFA to RVF than LVF words. There are therefore suggestions in the fMRI studies that the VWFA might indeed respond more strongly to RVF than LVF words, even though the group analyses of the VWFA response, based on small numbers of participants, did not show significant differences.
1.2. Investigating the neural basis of the RVF advantage using magnetoencephalography (MEG)

One problem with using fMRI to investigate the neural basis of the RVF advantage in word recognition may be that fMRI depends on measuring blood oxygen levels aggregated over intervals measured in seconds. The BOLD response therefore provides little useable information about the time course of neural events. ERPs of the sort analyzed by Cohen et al. (2000) have the temporal resolution required for detecting latency differences, although these authors did not localize the sources. Magnetoencephalography (MEG) exploits the fact that when populations of neurons oriented in the same direction fire simultaneously, the summed electric currents they generate create magnetic fields which can be detected using sensors positioned around the head. It is thought that the signals detected by MEG arise primarily from pyramidal cells in cerebral cortex, and that detecting a response outside the head requires simultaneous activity from 10⁴ to 10⁵ pyramidal cells coming from an area of around 40 mm² of cortex (Hämäläinen, Hari, Ilmoniemi, & Lounasmaa, 1993; Vrba & Robinson, 2001). MEG retains the same fine temporal resolution as EEG, down to millisecond accuracy (Hillebrand, Singh, Holliday, Furlong, & Barnes, 2005; Hämäläinen et al., 1993; Singh, 2006).

There are various ways that the magnetic signals detected by MEG sensors can be analyzed. One possibility is to combine the responses obtained at the different sensors in a way that is broadly analogous to the way that EEG responses are processed in order to produce event-related potentials (ERPs). Pylkkänen and Marantz (2003) reviewed MEG research of this nature from studies of visual word recognition. They noted that the presentation of a written word generates a response (the M170) which peaks between 150 and 200 ms, is bilateral and occipito-temporal, is associated with letter string processing (cf. Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999), and appears to correspond to the N1/N170 response seen in ERP studies (Bentin, Mouchetant-Rostaing, Giard, Echallier, & Pernier, 1999; Brem et al., 2009; Cohen et al., 2000). Pylkkänen and Marantz (2003) also noted reports of an M250 response which is left lateralized and an M350 response which is also left lateralized and shows a sensitivity to word frequency (cf. Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001).

MEG is not, however, limited to 'sensor space' analyses of this sort. A number of techniques are now available that allow the localization within the brain of neural events detected by surface sensors. The spatial resolution of neural sources by MEG is not as precise as with fMRI, but can be accurate to within 3–5 mm (Leahy, Mosher, Spencer, Huang, & Lewine, 1998) and, importantly, can be combined with fine temporal resolution. Three 'source space' techniques that have been employed in the study of visual word recognition and other cognitive tasks are equivalent current dipole modeling (e.g., Cornelissen, Tarkiainen, Helenius, & Salmelin, 2003; Salmelin, Schnitzler, Schmitz, & Freund, 2000; Tarkiainen et al., 1999; Wilson, Leuthold, Lewis, Georgopoulos, & Pardo, 2005), minimum norm estimation (e.g., Dhond, Buckner, Dale, Marinkovic, & Halgren, 2001) and beamforming (e.g., Cornelissen et al., 2009; Pammer, Hansen, Holliday, & Cornelissen, 2006; Pammer et al., 2004; Wheat, Cornelissen, Frost, & Hansen, 2010). The approach used in the present study is beamforming.
In a beamforming analysis, the neuronal signal at a location of interest in the brain is constructed as the weighted sum of the signals recorded by the MEG sensors. For a whole brain analysis, a cubic lattice of spatial filters is defined within the brain (here with 5 mm spacing), and an independent set of weights is computed for each location. The beamformer weights are determined by an optimization algorithm so that the signal from a location of interest contributes maximally to the beamformer output, while the signal from other locations is suppressed. The sensor weights are computed for each location to
create three spatial filters, one for each orthogonal current direction. The outputs of the three spatial filters at each location in the brain are then summed to generate the total power at each so-called 'virtual electrode' (VE) over a given temporal window and within a given frequency band. One advantage of beamforming is that it can image directly both sources that are phase-locked to the onset of stimuli ('evoked responses') and neural responses that are not tightly phase-locked to the onset of stimuli ('induced responses'). Dipole analyses and minimum norm estimation, like ERP analysis of EEG data, are limited to localizing sources based on evoked (phase-locked) responses only.

We used two approaches to analyzing the results produced by beamforming to investigate neural responses to words presented briefly in the LVF and RVF. The first was to aggregate responses to LVF and RVF words over time windows of 200 ms. We compared the strength of the neural responses observed in 'active' time windows (after stimulus onset) with the strength of the responses in 'passive' time windows when no word was present. When this is done, differences between active and passive windows may be reflected in either increases or decreases in oscillatory power. Increases in power (or amplitude) in active relative to passive time windows are commonly referred to as 'event-related synchronizations' while decreases are known as 'event-related desynchronizations' (Neuper & Pfurtscheller, 2001; Pfurtscheller & Lopes da Silva, 1999). These terms can be misleading in that a decrease in oscillatory power in an active window compared with a passive window does not necessarily imply a reduction in overall neural activity during the active window: if neuronal firing increases in a region of the brain, but the firing also becomes less synchronized (coherent), that will be detected as an event-related desynchronization. Recent discussions have suggested that event-related desynchronizations may reflect brain states characterized by neurons firing independently to maximize the operational capacity of particular brain regions (Kinsey et al., 2009; Lee et al., 2010; Yamagishi, Goda, Callan, Anderson, & Kawato, 2005). We will therefore avoid the terms 'event-related synchronization' and 'event-related desynchronization' and talk instead of increases and decreases in oscillatory power in the knowledge that decreases in oscillatory power relative to a defined passive period may actually reflect increases in neural activity. The positions of the passive windows in our study remained fixed while the active windows progressed in increments from 0–200 ms to 300–500 ms post-stimulus.

In the domain of visual word recognition, this approach was first used by Pammer et al. (2004) to analyze the neural responses generated when five-letter words and anagrams were presented centrally (at fixation) to right-handed participants whose task was to indicate whether they saw a word or not (lexical decision). In that study, significant changes in oscillatory power within the frequency range 10–20 Hz were identified by comparing a 200 ms passive time window (defined as the period from 700 to 500 ms before stimulus onset) with a sequence of active windows progressing from 0–200 ms to 300–500 ms post-stimulus. For word stimuli, significant changes were observed in posterior occipito-temporal areas. By 100–300 ms post-onset, the response had spread anteriorly along the left fusiform gyrus to include the VWFA.
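To make the beamforming logic described above concrete, the following minimal NumPy sketch illustrates how linearly-constrained minimum-variance weights can be computed for one grid point and how the outputs of the three orientation-specific spatial filters are summed into a 'virtual electrode' power estimate in an active versus a passive window. The array shapes, regularization value and function names are illustrative assumptions, not the toolchain actually used in the study.

```python
import numpy as np

def lcmv_weights(cov, leadfield, reg=0.05):
    """Unit-gain LCMV weights for one grid point and one current orientation.

    cov:       (n_sensors, n_sensors) data covariance over the analysis window
    leadfield: (n_sensors,) forward field for this location/orientation
    """
    # Diagonal regularization keeps the covariance invertible (illustrative value)
    cov_reg = cov + reg * np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])
    cov_inv = np.linalg.inv(cov_reg)
    return cov_inv @ leadfield / (leadfield @ cov_inv @ leadfield)

def virtual_electrode_power(data, cov, leadfields_3d):
    """Total power at one virtual electrode for one time window.

    data:          (n_sensors, n_times) band-passed MEG for that window
    leadfields_3d: (3, n_sensors) forward fields for three orthogonal orientations
    """
    power = 0.0
    for lf in leadfields_3d:                 # one spatial filter per orientation
        w = lcmv_weights(cov, lf)
        source_ts = w @ data                 # reconstructed source time series
        power += np.mean(source_ts ** 2)     # sum power across the three filters
    return power

# Schematic active-passive contrast at a single grid point:
# power_active  = virtual_electrode_power(epoch[:, act0:act1], cov, lf3)
# power_passive = virtual_electrode_power(epoch[:, pas0:pas1], cov, lf3)
```

Because total power is computed from the reconstructed time series rather than from a stimulus-locked average, the same quantity captures both evoked and induced activity, which is the property exploited in the analyses reported below.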
In later time windows, the signal extended further within the left hemisphere and across to the right hemisphere. The 100–300 ms time window also revealed activity in the left inferior frontal gyrus (BA44/46) for words that was hypothesized to reflect the activation of phonological (speech-motor) representations (see Wheat et al., 2010). The posterior occipito-temporal and left fusiform response to centrally-presented words identified by Pammer et al. (2004) provided a good match to responses obtained by studies using different MEG source space analysis methods; for example, studies by Cornelissen et al. (2003), Salmelin et al. (2000) and Tarkiainen
et al. (1999) using equivalent current dipole (ECD) modeling, Dhond et al. (2001) using minimum norm estimation (MNE), and Kujala et al. (2007) using dynamic imaging of coherent sources (DICS). Wilson et al. (2005) employed a single moving dipole approach and found that very similar locations were activated by words and pseudowords (pronounceable nonwords), mostly in left hemisphere lateral occipital, temporal, parietal, and inferior prefrontal regions. The difference between words and pseudowords lay in the time course rather than the location of the activation.

Whole-brain comparisons of moving active windows against stationary passive windows are helpful in providing an overview of the brain's evolving response to stimuli. In the case of the present study, they demonstrate the recruitment of the standardly-recognized 'reading network' to the processing of LVF and RVF words. They are of less use, however, when it comes to comparing the response at specific regions of interest in different conditions of an experiment because they fail to take advantage of the fine temporal resolution inherent in the MEG signals. For that purpose, MEG studies have taken the same spatial filter that is employed to generate whole-brain maps and used it to investigate how the neural response at particular points evolves over time. This is referred to as inserting virtual electrodes at particular points in the brain. It allows researchers to conduct region of interest analyses comparing the changing neural responses at specific locations (centres of regions of interest) in different conditions of an experiment. In the present study they are used to compare neural responses to LVF and RVF stimuli at left and right occipital, mid fusiform and precentral sites. Examples of the use of virtual electrode methodology in studying the perception of words, faces and other stimuli can be found in Brookes et al. (2005), Cornelissen et al. (2009), Hall et al. (2005), Maratos, Anderson, Hillebrand, Singh, and Barnes (2007), Millman, Prendergast, Kitterick, Woods, and Green (2010), Simpson et al. (2009) and Wheat et al. (2010).

The present study reports two experiments. Experiment 1 was a behavioral study designed to establish that right-handed participants would show the normal RVF advantage for the words selected for use in the MEG experiment, and to determine whether the RVF advantage would persist over repeated presentations of the same items (a requirement for the MEG study). In Experiment 2, the same stimulus words were briefly presented in the LVF or the RVF to a different group of right-handed participants seated in an MEG scanner. Participants were given prior training on the word set to minimize recognition errors in the scanner. The real words were interleaved with 'scrambled' words and presented in a random order in the LVF or RVF. Participants were instructed to name the real words silently, and to say "pattern" covertly when they saw a scrambled word. The motivation for including the scrambled word condition was to allow the processing of familiar words to be compared with the processing of non-alphabetic stimuli with similar visual properties. Each participant also underwent a structural MRI scan to allow co-registration of their individual MEG data onto their individual brain structure.
The co-registered MEG data was analyzed using beamforming to compare changes in power between passive time periods and 200 ms active time windows, and to analyze responses to LVF and RVF words and scrambled words at middle occipital, mid fusiform, and precentral sites. The primary objective was to discover when and where differences might be observed in the neural responses to LVF and RVF words that may form the basis of the RVF advantage in behavioral studies of lateralized word recognition. Following Pammer et al. (2004), Cornelissen et al. (2009) and others, we expected to see early increases in oscillatory activity in posterior extrastriate visual processing areas followed by later decreases in oscillatory activity in inferotemporal, parietal and prefrontal regions. Following Cohen et al. (2000, 2002) and Ben-Shachar, Dougherty, Deutsch, et al. (2007), we expected the posterior extrastriate
responses to be stronger to contralateral than to ipsilateral stimuli. We expected the left mid fusiform (VWFA) region to respond to both LVF and RVF words, but if our hypothesis that visual field differences should be reflected at the VWFA is correct, the response to RVF words should be faster and/or stronger than the response to LVF words. We expected that prefrontal activation would be concentrated in left hemisphere speech motor regions (e.g., Cohen et al., 2002; Cornelissen et al., 2009), but given that word naming responses are faster to RVF than LVF words (Bub & Lewine, 1988; Ellis et al., 2009; Lindell et al., 2002), we expected that a time-sensitive methodology like MEG should be capable of detecting an RVF advantage in the neural response in left prefrontal regions specialized for speech-motor processing.

2. Experiment 1: Naming words presented in the LVF and RVF with multiple repetitions

Experiment 1 was a behavioral study in which the same 20 words that would be used in the MEG study (Experiment 2) were presented to right-handed participants in a lateralized naming task. After familiarizing participants with the stimulus words, each word was presented six times in the LVF and six times in the RVF for rapid naming. The aims were to establish that accuracy levels would be high following pre-exposure to the stimulus set, that the words would yield a robust RVF advantage for word naming speed, and that the magnitude of the RVF advantage would be stable across multiple presentations (all requirements for the MEG study).

2.1. Methods

2.1.1. Participants

Sixteen participants (12 male, four female) with a mean age of 23 years (range 20–26) took part in Experiment 1. All were students at the University of York and had normal or corrected-to-normal vision. All had English as their first language and were right handed, with a score of at least 70/100 on the Edinburgh Handedness Inventory (Oldfield, 1971). All gave written consent to participate in the experiment which was approved by the Research Ethics and Governance Committee of the York Neuroimaging Centre and Department of Psychology at the University of York.

2.1.2. Stimuli

The experimental stimuli comprised 20 five-letter words with a mean frequency of 252 occurrences per million words of spoken and written English (range 111–533) according to the CELEX database (Baayen, Piepenbrock, & Van Rijn, 1993). To facilitate activation of the voice key, words beginning with voiceless fricatives or affricates such as "f" or "sh" were avoided. Examples of the words used are BASIC, DRINK and VALUE.

2.1.3. Design

The experiment included two factors: the visual field in which the words were presented (LVF or RVF), and the block of trials in which a stimulus word was presented (blocks 1 to 6). Each of the 20 stimulus words was presented once in the LVF in each block and once in the RVF. Participants were familiarized with the stimuli before the start of the experiment.

2.1.4. Procedure

Presentation of the stimuli and monitoring of the responses was controlled by E-prime experiment generator software (Schneider, Eschman, & Zuccolotto, 2002). Each trial began with the presentation of a white fixation cross on a grey background for 500 ms at the centre of a computer screen. A single word was then presented
for 150 ms in the LVF or RVF, with the fixation cross remaining on the screen during this interval. The fixation cross stayed on the screen for a further 820 ms before the screen went blank for 1000 ms. The subsequent re-appearance of the fixation cross signaled the start of the next trial. The words were presented in white upper case lettering on a grey background using 18 point FixedSystem font which spaces letters evenly so that all the stimulus words had the same physical length on the screen. The innermost edges of the stimuli were 1.5° to the left or right of fixation, and extended to 6°. Stimuli were presented in a single sequence containing within it six blocks in which each of the 20 stimulus words was presented once in the LVF and once in the RVF. Order of presentation was pseudo-random with the constraint that no more than three consecutive stimuli occurred in the same visual field. Participants were positioned so that their eyes were approximately 67 cm from the computer screen, with their chin on a chin rest so as to maintain a fixed distance from the screen. The participants were instructed to maintain their gaze on the central cross throughout the period of each trial when the fixation cross was on the screen, and to read each word aloud as quickly and as accurately as possible when it appeared on the screen. Reaction times (RTs) were measured from the presentation of a word to the initiation of each vocalization which was detected by a microphone connected to the computer via a voice key. Errors were noted by the experimenter and eliminated from the analysis of RTs. The experiment was preceded by a practice session consisting of a block of 10 trials in which five practice words were presented in the RVF and five in the LVF.

2.2. Results and discussion

Mispronunciations, omissions, hesitations, etc. were treated as errors. Error rates were generally low, averaging only 3.8 errors (1.6%) per participant across the six blocks of trials, with a range across participants from 0 to 14 errors (5.8%) in 240 trials. The resulting error rates were analyzed by subjects (F1) and by items (F2) using analysis of variance. There were significantly more errors to LVF words (mean = 2.6%) than to RVF words (mean = 0.57%) [F1(1, 15) = 12.74, MSe = 7.92, p < .01, partial η² = .46; F2(1, 19) = 10.55, MSe = 6.34, p < .01, partial η² = .36]. Error rates also showed a significant decline across blocks, from 3.6% in block 1 to 0.8% in block 6 [F1(5, 75) = 8.78, MSe = 1.69, p < .001, partial η² = .37; F2(5, 95) = 6.33, MSe = 1.35, p < .01, partial η² = .25]. A significant visual field × blocks interaction [F1(5, 75) = 5.35, MSe = 0.96, p < .05, partial η² = .26; F2(5, 95) = 3.34, MSe = 0.76, p < .05, partial η² = .15] resulted from the fact that the difference between visual fields in early blocks (Block 1: LVF–RVF = 4.7%) was greater than in later blocks when error rates in both visual fields were very low (Block 6: LVF–RVF = 0.9%).

Only correct responses were included in the RT analysis. The data were further trimmed by excluding RTs less than 250 ms and longer than 1000 ms. The total percentage of items excluded from this analysis as errors or RT outliers was 4.0% in the LVF and 1.8% in the RVF. The results are shown in Fig. 1. The RVF advantage for naming speed was highly significant [F1(1, 15) = 34.71, MSe = 48,295, p < .001, partial η² = .70; F2(1, 19) = 97.01, MSe = 59167.47, p < .001, partial η² = .84], with faster overall responses to RVF words (mean = 483 ms) than to LVF words (mean = 514 ms).
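For readers who wish to reproduce this style of analysis, the trimming and summary steps just described can be sketched in a few lines of Python. The file name, column names and the use of a paired t-test (standing in for the full visual field × blocks ANOVA reported here) are hypothetical illustrations, not the scripts used in the study.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format data: one row per naming trial with assumed columns
# participant, visual_field ('LVF'/'RVF'), block, rt_ms, error (bool).
trials = pd.read_csv("exp1_naming_trials.csv")   # placeholder file name

# Drop error trials, then trim RTs below 250 ms or above 1000 ms
correct = trials[~trials["error"]]
trimmed = correct[(correct["rt_ms"] >= 250) & (correct["rt_ms"] <= 1000)]

# By-subjects summary: mean RT per participant and visual field
subj_means = (trimmed.groupby(["participant", "visual_field"])["rt_ms"]
                     .mean()
                     .unstack("visual_field"))

# Paired comparison of LVF vs RVF means across participants
t_val, p_val = stats.ttest_rel(subj_means["LVF"], subj_means["RVF"])
print(subj_means.mean(), t_val, p_val)
```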
The trend towards faster responses in the later blocks resulted in a main effect of blocks that was significant in the by-items analysis though not in the by-subjects analysis [F1(5, 75) = 1.83, n.s.; F2(5, 95) = 11.26, MSe = 6143.15, p < .001, partial η² = .37]. Importantly, the visual field × blocks interaction was not significant in either analysis [F1(5, 75) = 1.34; F2(5, 95) = 0.97], reflecting the fact that the RVF advantage remained fairly constant at around 32 ms across the six blocks of trials (see Fig. 1). Only one of the 16 participants failed to show an RVF advantage for naming RTs across the six blocks (LVF: 450 ms; RVF: 450 ms): the remaining 15 participants showed RVF advantages ranging from 12 to 88 ms. Experiment 1 therefore established that the word set to be used in Experiment 2 generated an RVF advantage for word naming accuracy and RTs that was constant across participants and which remained constant across six presentations of the same items (cf. Weems & Zaidel, 2005). The stimulus set and the conditions of the experiment satisfied the requirements for the MEG study.

Fig. 1. Mean naming latencies in milliseconds (with standard error bars) for words presented in the LVF and RVF across six blocks of trials (Experiment 1).

3. Experiment 2: MEG analysis of covert naming responses to words and scrambled words presented in the LVF and RVF

Experiment 2 presented the same 20 stimulus words as Experiment 1 in six blocks, with each word occurring once in the LVF and once in the RVF in each block, giving 120 trials per condition. The requirement for the participant to remain as still as possible during MEG meant that overt naming responses were not possible. Participants were therefore asked to respond silently by reading the real words clearly 'in their heads' and by saying "pattern" silently in response to the scrambled versions of words that were used as a comparison condition.

3.1. Methods

3.1.1. Participants

Twenty participants (two male, 18 female) with a mean age of 24 years (range 20–27) took part in Experiment 2. All were students at the University of York and had normal vision or had their vision corrected to normal through the use of contact lenses. All had English as their first language and were right handed, with a score of at least 70/100 on the Edinburgh Handedness Inventory (Oldfield, 1971). All gave written consent to participate in the experiment which was approved by the Research Ethics and Governance Committee of the York Neuroimaging Centre and Department of Psychology at the University of York.

3.1.2. Stimuli

The experimental stimuli comprised the same 20 five-letter words that were used in Experiment 1 plus 'scrambled' versions of the 20 words. Scrambled stimuli were created by subdividing the images of each of the 20 words into 144 (12 × 12) equal-sized segments and randomly re-sorting the fragments. An example of a scrambled word stimulus can be seen in Fig. 3.
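The scrambling procedure itself is simple to express in code. The sketch below, assuming a greyscale word image stored as a 2-D NumPy array whose height and width are divisible by 12, cuts the image into 144 tiles and reassembles them in a random order; it is an illustration of the logic rather than the stimulus-preparation script actually used.

```python
import numpy as np

def scramble_word_image(img, grid=12, seed=None):
    """Divide a word image into grid x grid equal tiles and randomly re-sort them.

    img: 2-D greyscale array whose height and width are divisible by `grid`.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    th, tw = h // grid, w // grid
    # Cut the image into grid*grid equal-sized tiles (144 tiles for grid=12)
    tiles = [img[r*th:(r+1)*th, c*tw:(c+1)*tw]
             for r in range(grid) for c in range(grid)]
    order = rng.permutation(len(tiles))
    # Reassemble the shuffled tiles into an image with the original dimensions
    rows = [np.hstack([tiles[order[r*grid + c]] for c in range(grid)])
            for r in range(grid)]
    return np.vstack(rows)
```

Because the tiles are only re-sorted, the scrambled stimulus preserves the overall luminance, contrast and spatial extent of the word from which it was derived, which is what makes it a suitable non-alphabetic comparison condition.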
3.1.3. Design

The experiment included three factors – the visual field in which stimuli were presented (LVF or RVF), the block of trials in which a stimulus word was presented (blocks 1–6), and whether the stimulus was a word or a scrambled word. Each of the 20 words and 20 scrambled words was presented twice in each block of trials, once in the LVF and once in the RVF. Participants were familiarized with the stimuli before the start of the main experiment.

3.1.4. Procedure

Experiment 2 was run in the York Neuroimaging Centre (YNiC: https://www.ynic.york.ac.uk/). During MEG data acquisition, participants were seated comfortably with their head positioned within the helmet-shaped dewar of a 248-channel Magnes 3600 whole-head MEG scanner using SQUID magnetometers (4D Neuroimaging, San Diego, California) situated within a dimly-lit, electromagnetically shielded room. Participants were instructed to keep their heads and bodies still throughout the data acquisition but had free view of a translucent screen positioned 100 cm in front of their eyes. Stimuli were back-projected onto the screen using E-prime experiment generator software (Schneider et al., 2002) driving a Dukane 8942 ImagePro 4500 lumens LCD projector. Each experimental trial began with a central fixation cross presented in white on a grey background for 500 ms. The fixation point remained on the screen while a word or scrambled word was presented for 150 ms in the LVF or RVF. The fixation cross stayed on the screen for a further 550 ms after stimulus offset. The screen then went blank for 1300 ms before the presentation of a new fixation cross signaled the start of the next trial. Each experimental trial therefore lasted a total of 2500 ms. The stimulus words were presented in white, upper case, 18 point FixedSystem font on a grey background. Words and scrambled words had a physical length of 70 mm on the screen and a height of 15 mm. The innermost edge of each stimulus was positioned 1.5° from the centre (fixation point). Trigger codes were recorded in the MEG data at the onset of each visual stimulus.

The experiment contained six blocks of trials presented consecutively, without a break. Within each block, the 20 stimulus words were presented once in the LVF and once in the RVF. The 20 scrambled words were also presented once in the LVF and once in the RVF in each block. Words and scrambled words, and LVF and RVF trials, were interleaved and presented in a fixed, semi-random order with the constraint that if a word or scrambled word was presented in the LVF in the first half of a block it was presented in the RVF in the second half, and conversely. There were a further 10% 'catch' trials in each block in which five red hash marks (#####) appeared in the LVF or RVF with the same exposure times as for words and scrambled words. The entire experiment involved six presentations of each word and scrambled word in the LVF and a further six presentations in the RVF, plus 48 catch trials. Participants were instructed to maintain central fixation from the appearance of the fixation cross and during stimulus presentation. If the stimulus was a word, they were instructed to name the word silently 'in their head' as quickly and as accurately as possible. If the stimulus was a scrambled word, they were instructed to say 'pattern' silently. If the stimulus was a catch trial, participants were instructed to press a button on a response device held in their left hand (to minimize left hemisphere activation due to the motor response).
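The counterbalancing constraints described above (each item once per visual field per block, with the field swapped between the two halves of a block, plus roughly 10% catch trials) can be expressed as a short sequence-building routine. The sketch below is a hypothetical reconstruction under those stated constraints; the item labels and random-number handling are placeholders, not the E-prime scripts actually used.

```python
import random

WORDS = [f"word{i:02d}" for i in range(20)]        # placeholder item labels

def build_block(words, catch_rate=0.10, seed=None):
    """Assemble one pseudo-random block: each word and its scrambled version
    appears once per visual field, with the field counterbalanced across the
    two halves of the block, plus ~10% interspersed catch trials."""
    rng = random.Random(seed)
    first_half, second_half = [], []
    for word in words:
        for stim_type in ("word", "scrambled"):
            field_first = rng.choice(["LVF", "RVF"])
            field_second = "RVF" if field_first == "LVF" else "LVF"
            first_half.append((stim_type, word, field_first))
            second_half.append((stim_type, word, field_second))
    rng.shuffle(first_half)
    rng.shuffle(second_half)
    trials = first_half + second_half                  # 80 experimental trials
    n_catch = round(len(trials) * catch_rate)          # ~8 catch trials per block
    for _ in range(n_catch):
        pos = rng.randrange(len(trials) + 1)
        trials.insert(pos, ("catch", "#####", rng.choice(["LVF", "RVF"])))
    return trials
```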
The main experiment lasted approximately 18 min. To ensure that participants were thoroughly familiar with a rather complex set of requirements, and to minimize recognition errors in the MEG experiment, two practice sessions were given before the start of the experiment proper. The first practice session occurred outside the MEG room, and involved participants familiarizing themselves with the 20 target words by reading the list
of words aloud five times in different random orders. The second practice session occurred in the MEG room after the presentation of the instructions and before the start of data acquisition. This practice, which was designed to give participants experience of the conditions of the MEG experiment, involved 22 trials using five-letter filler words and scrambled words (five LVF words, five scrambled LVF words, five RVF words, five RVF scrambled words) plus two catch trials.

At some point after the MEG experiment, participants had a structural MRI scan so that the MEG data could be co-registered onto their individual brains (Kozinska, Carducci, & Nowinski, 2001). T1-weighted MR images were obtained with a GE 3.0-T Signa Excite HDx system (General Electric, Milwaukee, USA) using an 8-channel head coil and a 3-D fast spoiled gradient-recalled sequence (TR/TE/flip angle = 8.03 ms/3.07 ms/20°; spatial resolution of 1.13 mm × 1.13 mm × 1.0 mm; in-plane resolution of 256 × 256; 176 contiguous slices). For group analyses in source space, the individuals' data were spatially normalized to the Montreal Neurological Institute (MNI) standard brain, which is based on the average of 152 individual T1-weighted structural MR images (Evans, Collins, Mills, Brown, & Kelly, 1993).

3.1.5. MEG data analysis

Before data acquisition, a 3-D digitizer (Polhemus fast-track headshape digitization) was used to record the shape of the participant's head. MEG data were collected in continuous mode with a sampling rate of 678.17 Hz and band-pass filtered between 1 and 200 Hz. The MEG data were subjected to a global noise filter subtracting external, non-biological noise detected by the MEG reference channels. The data were converted into epochs lasting for 2000 ms from the initial presentation of the fixation point on each trial. Individual trials were checked for the presence of major artifacts caused by eye movements, eye blinks and external noise, and removed if affected. Six participants were eliminated from the experiment at this point because of excessive numbers of artifacts, low rates of detection on the catch trials, or movement within the MEG scanner. The 14 remaining participants contributed a minimum of 60 epochs to each of the four conditions (two types of stimulus × two visual fields) and averaged 88% correct detection responses on the catch trials. Epochs that survived artifact rejection were averaged for each participant in each condition.

3.1.5.1. Magnetic field patterns (isofield contour maps)

Fig. 2 shows the magnetic field patterns in sensor space that we observed for two participants in response to words presented in the RVF. For each participant, these field patterns are consistent with the first two time points in the evoked averaged signal for which a single dipolar source could be inferred. At time point 1 (around 100–125 ms), the dipolar field was detected by sensors overlying the midline occipital cortex. At time point 2 (around 160–175 ms), the dipolar field was detected by sensors overlying the left occipito-temporal region. Previous analyses of magnetic field patterns indicate that centrally-presented words generate an initial medial occipital response followed by an occipito-temporal response that is more left lateralized (e.g., Pylkkänen & Marantz, 2003; Tarkiainen et al., 1999). The direction of the inferred dipole source (current flow) changes between those responses, being reflected in a change in the relative position of positive and negative fields.
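The epoching and trial-screening steps outlined in Section 3.1.5 can be illustrated with the following NumPy sketch. The peak-to-peak amplitude criterion and the 3 pT threshold are illustrative assumptions; the study itself rejected trials on the basis of visual inspection for eye movements, blinks and external noise.

```python
import numpy as np

def epoch_and_reject(raw, trigger_samples, sfreq=678.17, epoch_s=2.0,
                     ptp_thresh=3e-12):
    """Cut 2000 ms epochs from trigger onsets and drop epochs whose
    peak-to-peak amplitude on any channel exceeds a threshold.

    raw: (n_channels, n_samples) continuous, band-passed MEG data in tesla.
    The 3 pT threshold stands in for the manual artifact screening used here.
    """
    n_samp = int(round(epoch_s * sfreq))
    epochs = []
    for onset in trigger_samples:
        epoch = raw[:, onset:onset + n_samp]
        if epoch.shape[1] < n_samp:
            continue                                   # trial runs past end of recording
        ptp = epoch.max(axis=1) - epoch.min(axis=1)    # peak-to-peak per channel
        if np.all(ptp < ptp_thresh):
            epochs.append(epoch)
    return (np.stack(epochs) if epochs
            else np.empty((0, raw.shape[0], n_samp)))
```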
We note that those responses have been observed for words displayed centrally. The conditions of the present experiment differed from those of previous MEG studies of visual word recognition in two ways. First, the stimulus words were presented laterally rather than centrally. Second, a fixation point appeared centrally on the screen for 500 ms prior to presentation of the lateralized stimuli and remained on screen during and after the experimental stimuli. One should not therefore expect a precise correspondence between the present data and results obtained with centrally-presented words. Nevertheless, Fig. 2 shows a pattern resembling that previously reported for central words, with a central occipital response followed by an occipito-temporal response lateralized to the left. The remaining analyses presented here are performed in source space.

Fig. 2. Sensor-space magnetic field patterns (isofield contour maps) showing the magnetic responses to words presented in the RVF for two participants at two points in time. Positive magnetic fields emerging from the brain are shown in red while negative fields re-entering the brain are shown in blue. For participant S1, Time 1 is 123 ms post-stimulus onset and Time 2 is 160 ms post-onset. For participant S2, Time 1 is 107 ms and Time 2 is 175 ms.

3.1.5.2. Comparison of active and passive windows using beamforming

A linearly-constrained minimum-variance beamformer (Barnes & Hillebrand, 2003; Barnes, Hillebrand, Fawcett, & Singh, 2004; Huang et al., 2004; Van Veen, van Drongelen, Yuchtman, & Suzuki, 1997; Vrba & Robinson, 2001) was used to analyze changes in oscillatory activity consequent upon stimulus processing. Such beamformers have been used extensively in cognitive neuroscience investigations of perceptual and cognitive processing (e.g., Cornelissen et al., 2009; Hagan et al., 2009; Lee et al., 2010; McNab, Rippon, Hillebrand, Singh, & Swithenby, 2007; Millman et al., 2010; Pammer et al., 2004, 2006; Simpson et al., 2009; Wheat et al., 2010). A grid of points with 5 × 5 × 5 mm spacing was applied to the whole brain. The neural activity index was computed at each grid point for each point in time by applying a spatial filter designed to be fully sensitive to the activity of the targeted grid point while minimizing the activity from other brain regions. Total power (evoked and induced) was compared between active and passive time
periods and significance maps were generated based on the difference. Data were band-pass filtered to obtain signals in frequency bands of interest. Source reconstruction was achieved by generating a multisphere head model based on multiple local spheres fitted to the curvature of the inner surface of the skull immediately beneath each sensor, to obtain estimates of source strength at different positions in the cortex (Kozinska et al., 2001).

The initial analyses measured the oscillatory power at each of the defined points within the grid that encompassed the brain during 200 ms post-stimulus active periods, and compared the power during those periods with the power observed during pre-stimulus passive periods of the same length. The active windows were progressed in increments of 100 ms from 0–200 ms to 400–600 ms post-stimulus onset (cf. Cornelissen et al., 2009; Pammer et al., 2004). This was done separately for four selected frequency bands (1–10 Hz, 10–20 Hz, 20–30 Hz and 30–60 Hz). (Note that with time windows any shorter than about 200 ms it would not be possible to detect changes in oscillatory power at lower frequencies.) T-statistic maps were generated for each point in source space using paired t-tests to compare oscillatory power between active and passive time periods. The statistical maps were then superimposed on the MNI template brain with the cerebellum removed, using a statistical threshold of t > 2, p < .05. The frequency bands showing the most relevant activity in our data (i.e. differences between
active and passive time periods) were the 1–10 and 10–20 Hz bands. Those frequency bands were selected for further analysis. Examples of changes in oscillatory power between active and passive windows are presented in Figs. 3 and 5, and discussed below.

3.1.5.3. Derivation of virtual electrode time series (event-related fields) from the beamforming data

Exactly the same beamformer algorithm that was employed to generate the active-passive comparisons was then employed to plot changes in power over time at selected points in the brain, with the contribution from other sources minimized by the beamformer. There is no comparison of active and passive conditions in these analyses. The choice of points for the generation of virtual electrodes was determined partly from the comparison of active and passive time intervals, and partly from the wider functional imaging literature. The event-related field plots were based on a measure of the strength of the evoked response in each condition of the experiment at each region of interest within individual brains across the full range of frequencies. To obtain these plots, virtual electrode time series were first generated for the first, second and third orthogonal components derived from a principal components analysis of the neural activity index. The output of the three components was then combined into a single time series by averaging each component across epochs, then squaring and summing them. The result is a measure of the total power of the neural response over time at each location. Because our interest was in differences in responses between the two cerebral hemispheres, each analysis of a left hemisphere location was accompanied by an analysis of the results obtained at the corresponding point in the right hemisphere (defined by reversing the value of the x co-ordinate in standardized MNI space before the transformation back to the individual's brain shape). The analysis concentrated on three pairs of sites: (1) the left and right middle occipital gyri at the points where Cohen et al. (2000, 2002) found hemifield-dependent responses; (2) the left mid fusiform gyrus at the site of the fMRI-defined VWFA, together with its right hemisphere homologue, and
(3) the left precentral gyrus at the peak of the MEG activation for RVF words and its right hemisphere homologue. In order to identify the location in each participant’s brain that corresponded to a particular location of interest, Talairach co-ordinates were converted to MNI co-ordinates using the transform described by Brett (http://imaging.mrc-cbu.cam.ac.uk/imaging/MniTalairach). The MNI coordinates were then mapped onto the individual brains using the BET tool in FSL (Jenkinson, Pechaud, & Smith, 2005; Smith, 2002). A point of interest defined by mapping from standardized coordinates to an individual brain may not represent the point of peak activity in that local area of the brain. In an attempt to find the strongest response from each brain at each location, seven virtual electrodes were positioned automatically around each point of interest, one at the location identified through the transformation of the standard coordinates and six more at ±4 mm in the x, y and z directions. This meant that the virtual electrodes sampled the central point of the region of interest plus six points on the surface of an imaginary sphere of 4 mm radius around the chosen point. The virtual electrode selected for further analysis was the one which showed the strongest overall response to contralateral words (RVF for left hemisphere sites and LVF for right hemisphere sites) in the period 0–700 ms following stimulus presentation. The final step involved averaging the time series derived from each participant at each location of interest to produce a group plot. Inspection of the virtual electrode outputs revealed considerable differences in the strength of responses between sites and between participants. The danger in such circumstances is that the results of the analysis will be dominated by participants showing the strongest responses at particular sites, or the largest absolute differences. This potential problem was addressed through a normalization procedure that corrects for up to an order of magnitude difference in mean response/baseline activity across participants, and makes for a fairer analysis of the responses at paired electrodes across experimental conditions.
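The virtual-electrode placement just described can be sketched as follows (again in Python, and again a reconstruction rather than the authors' software): the mirroring of a left hemisphere MNI coordinate into its right hemisphere homologue, the seven candidate locations at the centre point and ±4 mm along each axis, the combination of the three orthogonal components into a single total-power time series, and the selection of the candidate with the strongest response to contralateral words over 0–700 ms. The callable `epochs_at`, which would return the beamformed component epochs for a given coordinate, is a placeholder for the in-house beamformer and Talairach-to-MNI mapping steps.

```python
import numpy as np

FS = 678.17  # Hz, sampling rate of the recordings

def mirror_mni(coord):
    """Right hemisphere homologue of a left hemisphere MNI coordinate
    (or vice versa): reverse the sign of the x co-ordinate."""
    x, y, z = coord
    return (-x, y, z)

def candidate_sites(centre, offset=4.0):
    """The centre point plus six points at +/- 4 mm along x, y and z,
    i.e. on the surface of a 4 mm-radius sphere around the chosen point."""
    sites = [np.asarray(centre, dtype=float)]
    for axis in range(3):
        for sign in (+1.0, -1.0):
            shifted = np.asarray(centre, dtype=float)
            shifted[axis] += sign * offset
            sites.append(shifted)
    return sites

def evoked_power(epochs):
    """Combine the first three orthogonal components of the virtual-electrode
    output into one time series: average each component across epochs,
    square, and sum over components.  epochs: (n_trials, 3, n_samples)."""
    mean_erf = epochs.mean(axis=0)          # (3, n_samples) averaged components
    return (mean_erf ** 2).sum(axis=0)      # (n_samples,) total evoked power

def pick_electrode(centre, epochs_at, fs=FS):
    """Keep the candidate location whose evoked power to contralateral words
    is largest over 0-700 ms.  `epochs_at(site)` is a placeholder returning
    the beamformed component epochs for contralateral words at that site."""
    n700 = int(round(0.700 * fs))
    best_site, best_strength = None, -np.inf
    for site in candidate_sites(centre):
        strength = evoked_power(epochs_at(site))[:n700].mean()
        if strength > best_strength:
            best_site, best_strength = site, strength
    return best_site
```

For example, the fusiform electrode pair would be seeded at the VWFA coordinates and at `mirror_mni` of those coordinates, with `pick_electrode` then run separately for each hemisphere.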
Fig. 3. Group comparison using beamforming analysis of brain activity in the 1–10 Hz band to words in three active time windows (0–200, 100–300, 200–400 ms) compared with a passive window of −200 to 0 ms. The results are superimposed on a canonical brain with the cerebellum removed.
Fig. 4. Virtual electrode results for the left and right middle occipital gyri showing aggregated evoked responses (top) and normalised average time series (bottom) for words and scrambled words presented in the LVF and RVF. Examples of word and scrambled word stimuli are also shown.
For each participant, the mean response strength and the standard deviation were computed for each left hemisphere location of interest and its right hemisphere homologue across the four experimental conditions (words and scrambled words in the LVF and RVF) for the period 0–1200 ms post-stimulus presentation. The magnitude of the response at each virtual electrode site was then normalized to zero mean and unit standard deviation across that time period, retaining the original sampling rate of 678.17 Hz. The resulting normalized time series were averaged to produce group plots which were smoothed using the Savitzky–Golay (1964) filter with a smoothing window of 11 data points. Because the original time series are squared and summed, all the values before normalization are positive. Re-setting the mean for the group plots to zero across the four conditions and the period 0–1200 ms results in time series that can take either positive or negative values depending on whether the strength of the evoked response in a particular condition at a particular moment in time was above or below the global mean. The resulting normalized evoked time series are shown in Figs. 4, 6 and 7 for the period 0–700 ms, when the evoked responses were generally strongest (and hence were mostly positive, being above the global average for 0–1200 ms).
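A minimal sketch of this normalization and smoothing step is given below, assuming the evoked-power time series for one electrode are stacked in an array of shape (participants, 4 conditions, samples) and assuming a third-order polynomial for the Savitzky–Golay filter (the polynomial order is not stated above).

```python
import numpy as np
from scipy.signal import savgol_filter

FS = 678.17            # Hz; 0-1200 ms spans roughly 814 samples at this rate

def normalise_and_smooth(ve_power, window=11, polyorder=3):
    """ve_power: (n_subjects, 4, n_samples) evoked power for one electrode,
    the four conditions being words/scrambled words x LVF/RVF over 0-1200 ms.
    Each participant's data are z-scored across all four conditions and the
    whole period, averaged into a group time series per condition, and
    smoothed with an 11-point Savitzky-Golay filter."""
    flat = ve_power.reshape(ve_power.shape[0], -1)
    mean = flat.mean(axis=1, keepdims=True)          # per-participant mean
    sd = flat.std(axis=1, keepdims=True)             # per-participant SD
    z = (ve_power - mean[:, :, None]) / sd[:, :, None]
    group = z.mean(axis=0)                           # (4, n_samples) group plot
    return savgol_filter(group, window_length=window,
                         polyorder=polyorder, axis=-1)
```

The per-participant z-scoring is what prevents participants with unusually strong absolute responses from dominating the group plots.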
3.2. Results 3.2.1. Posterior, hemifield-dependent increases in oscillatory power Early, posterior, hemifield-dependent responses were revealed most clearly as increases in oscillatory power in the 1–10 Hz
Fig. 5. Group comparison using beamforming analysis of brain activity in the 10–20 Hz band to words in four active time windows (0–200, 100–300, 200–400, 300–500 ms) compared with a passive time window of −700 to −500 ms. The results are superimposed on a canonical brain with the cerebellum removed.
frequency band against a passive window of −200 to 0 ms (i.e., immediately preceding presentation of the words or scrambled words). Those responses are shown in Fig. 3 for the active time windows 0–200, 100–300 and 200–400 ms. In the first active window (0–200 ms), significant increases in power were present in the middle occipital gyri (BA 18) strictly contralateral to the hemifield stimulated, with the peak response at MNI co-ordinates x = −35, y = −85, z = 13 for the left occipital response to RVF words, and x = 27, y = −96, z = 9 for the right occipital response to LVF words. By 100–300 ms the activity was more widespread, involving the lingual gyrus, cuneus and supramarginal gyrus on each side. By 200–400 ms, the visual field-specific character of the response had largely disappeared, with activity concentrated in the cuneus, lingual gyri, and middle occipital gyri bilaterally. A feature of Fig. 3 to which we would draw attention is the indication in the 100–300 ms active time window of greater transfer of the response from the right to the left hemisphere for LVF presentations than from the left to the right hemisphere for RVF presentations. Virtual electrodes were generated at the peaks of the left and right occipital responses to contralateral word stimuli in the 0–200 ms time window (i.e., x = −35, y = −85, z = 13 and x = 27, y = −96, z = 9). Time series (event-related fields) were generated separately for each participant for the responses to words and scrambled words presented in the LVF and RVF, then averaged across participants. The column graph at the top of Fig. 4 shows the mean normalized amplitudes of the evoked responses in each condition for the time period 0–700 ms. Analysis of variance was carried out on the mean responses for each participant, with repeated measures factors of the hemisphere in which the virtual electrode was positioned (left vs right), the visual field in which the stimulus appeared (LVF vs RVF), and the stimulus type (words vs scrambled words). The main effect of hemisphere was significant [F(1, 13) = 11.39, MSe = 6.45, p < .01, partial η² = .467], with stronger overall responses in the left occipital site (mean = 0.569) than in the right occipital site (mean = 0.089). The main effects of visual field and stimulus type were not significant, but the interaction between hemisphere and visual field was significant
[F(1, 13) = 42.33, MSe = 4.63, p < .001, partial η² = .765]. Bonferroni-corrected t-tests (α = .05) found stronger responses to RVF than LVF stimuli for the left hemisphere virtual electrode [t(13) = 3.95, p < .01], and stronger responses to LVF than RVF stimuli for the right hemisphere virtual electrode [t(13) = 6.39, p < .001]. No other interactions approached significance. Thirteen of the 14 participants showed stronger responses to contralateral than ipsilateral words in the left occipital site. All 14 participants showed stronger responses to contralateral than ipsilateral words in the right occipital site. The averaged event-related field plots (Fig. 4, bottom) show the evoked responses that are phase-locked to the presentation of the stimulus. The left occipital response to RVF words began at around 80 ms and showed a first peak at 128 ms (normalized amplitude 3.1), followed by a second peak at 184 ms (normalized amplitude 2.6). The right occipital response to LVF words showed a similar course but with somewhat lower normalized amplitudes (1st peak: 122 ms, normalized amplitude 2.2; 2nd peak: 192 ms, normalized amplitude 1.9). The peaks produced by contralateral scrambled words showed similar time courses to the real words, and a similar difference in normalized amplitude between left hemisphere and right hemisphere sites (left hemisphere 1st peak: 126 ms, normalized amplitude 2.7; 2nd peak: 189 ms, normalized amplitude 2.5; right hemisphere 1st peak: 115 ms, normalized amplitude 2.3; 2nd peak: 195 ms, normalized amplitude 2.1). Responses to ipsilateral stimuli were delayed in comparison to the responses to contralateral stimuli, occurring between about 150 and 400 ms, reaching maximum values (around 1.4) between 200 and 350 ms, but with lower amplitudes than the responses to contralateral stimuli and less sharply defined peaks. The stronger left occipital response to LVF words, compared with the right occipital response to RVF words, that is apparent in the averaged data (Fig. 4, top) can also be seen in the event-related fields (Fig. 4, middle). The right middle occipital gyrus response to words arriving directly from the LVF is shown as the red line in the middle right panel of Fig. 4, with peaks at 122 ms and 192 ms. The transfer of that information across to the left hemisphere can be seen in the red line in
64
L. Barca et al. / Brain & Language 118 (2011) 53–71
Fig. 6. Virtual electrode results for the left and right mid fusiform gyri showing aggregated evoked responses (top) and normalised average time series for words (middle) and scrambled words (bottom) presented in the LVF and RVF.
the middle left panel of Fig. 4. The left occipital response to LVF words transferred across the corpus callosum is weaker than the right occipital response to the same LVF words, and weaker than the left occipital response to RVF words. The left occipital response to ipsilateral (LVF) words is, however, stronger than the corresponding right occipital response to ipsilateral (RVF) words (the blue line in the right middle panel of Fig. 4). This evidence of asymmetric transfer of information between the two hemispheres
matches the evidence for asymmetric transfer seen in the 100–300 ms time window in Fig. 3. In summary, activity in posterior extrastriate (middle occipital gyrus) sites appeared in the comparison of active and passive time windows as increases in oscillatory power at low frequencies (1–10 Hz). The evoked response was hemifield-dependent, with the left occipital site responding more strongly to RVF inputs while the right occipital site responded more strongly to LVF inputs.
Fig. 7. Virtual electrode results for the left and right precentral gyri showing aggregated evoked responses (top) and normalised average time series for words (middle) and scrambled words (bottom) presented in the LVF and RVF.
There was no detectable difference at these sites in their responses to real words compared with scrambled words. The responses to contralateral words began around 80 ms and showed a sequence of peaks between 120 and 200 ms. Responses to ipsilateral words were slower and of lower amplitude. There was evidence that the transfer of information from the right to the left hemisphere was greater than transfer in the opposite direction, with the left occipital
response to LVF words occurring from around 150 to 400 ms and being weaker than the same area’s response to RVF words. 3.2.2. Decreases in oscillatory power Other neural responses to lateralized words and scrambled words were revealed as decreases in oscillatory power. These appeared most clearly in the 10–20 Hz frequency band against a
passive window of −700 to −500 ms. Fig. 5 shows the responses seen in active windows from 0–200 to 300–500 ms. LVF and RVF inputs induced similar posterior occipito-temporal responses which included parts of the middle and inferior occipital gyri, the precuneus, lingual gyri and the fusiform gyri bilaterally. The significant left occipito-temporal response to RVF words in the 200–400 ms active window extended in the posterior–anterior direction from approximately y = −95 to y = −35, with a peak at x = −46, y = −76, z = −12. That peak is more posterior than the VWFA (x = −43, y = −54, z = −12), but we note that the significant activation also included the site of the VWFA. The comparable response in right occipitotemporal cortex was equally extensive. We also note the existence of a significant response in the left precentral gyrus (BA 6; x = −43, y = 3, z = 35) that was present in all four time windows in Fig. 5, but was only significant for RVF inputs, and only in the left hemisphere. 3.2.2.1. Mid fusiform gyri. The second pair of virtual electrode sites probed the responses of the two mid fusiform gyri. For this analysis a virtual electrode was positioned at the coordinates of the VWFA in the left hemisphere as reported by McCandliss et al. (2003); i.e. x = −43, y = −54, z = −12. An equivalent right hemisphere electrode site was defined by reversing the value of the x coordinate in MNI space; i.e. x = 43, y = −54, z = −12. Event-related fields were plotted for words and scrambled words in the LVF and RVF. The procedure for averaging, normalizing and smoothing the time series was the same as for the posterior sites. Fig. 6 shows the resulting time series (middle and bottom) along with the mean responses for the period 0–700 ms (top). Analysis of variance was carried out on the mean responses for the 0–700 ms time period (Fig. 6, top). The main effects of hemisphere, visual field and stimulus type were not significant [all Fs < 1.3]. The interaction between hemisphere and visual field was, however, significant [F(1, 13) = 18.95, MSe = 0.12, p < .02, partial η² = .593]. Bonferroni-corrected t-tests (α = .05) found stronger responses to RVF than LVF stimuli for the left hemisphere virtual electrode [t(13) = 3.12, p < .05] and stronger responses to LVF than RVF stimuli for the right hemisphere virtual electrode [t(13) = 3.86, p < .01]. No other interactions approached significance. Thirteen of the 14 participants showed stronger responses to contralateral than ipsilateral words in the left mid fusiform site. Twelve showed stronger responses to contralateral than ipsilateral words in the right mid fusiform site. The event-related field plots for the left mid fusiform (VWFA) response to RVF words (Fig. 6, left middle panel) showed a response which began at around 80 ms and continued to at least 375 ms, with peaks of similar normalized amplitudes (1.5) at 128, 237 and 307 ms. A considerably weaker left mid fusiform response to LVF words occurred from 120 to 400 ms, peaking at 240 ms (normalized amplitude 0.8). The left mid fusiform response to scrambled words was again stronger to RVF than to LVF stimuli. There were peaks for RVF scrambled words at 178, 243 and 284 ms (normalized amplitudes 1.6–1.2) but no clear peaks for LVF scrambled words. The right mid fusiform response to LVF words (Fig. 6, right middle panel) showed peaks at 110, 174, 221 and 299 ms, with normalized amplitudes from 1.2 to 1.4. The right mid fusiform response to RVF words showed a single peak at 223 ms (normalized amplitude 1.2).
The right mid fusiform response to scrambled words showed peaks to LVF stimuli at 172, 218 and 269 ms, with normalized amplitudes of 2.0, 1.5 and 1.4. The response to RVF scrambled words at the right mid fusiform site showed a single peak at 215 ms with a normalized amplitude of 1.7. In summary, activity in the mid fusiform gyri appeared in the comparison of active and passive time windows as decreases in neural activity in the 10–20 Hz frequency band. Virtual electrode analyses of the VWFA and its right hemisphere homologue showed
stronger responses to contralateral than ipsilateral stimuli. There was no overall difference in activity levels between the left and right hemisphere sites, and no detectable difference in their responses to words compared with scrambled words. The VWFA response to RVF words extended from around 80 ms to beyond 375 ms, with three distinct peaks between 125 and 310 ms. The VWFA response to LVF words was slower and weaker, peaking at 240 ms.
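Each pair of virtual-electrode sites was analysed in the same way: a 2 (hemisphere) × 2 (visual field) × 2 (stimulus type) repeated-measures ANOVA on the mean normalized responses over 0–700 ms, followed by Bonferroni-corrected paired t-tests. The sketch below shows how such an analysis could be set up in Python; the array layout, the statsmodels/scipy route, and the restriction of the follow-up tests to the RVF-versus-LVF contrast within each hemisphere are assumptions made for illustration, not a reconstruction of the authors' software.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

def anova_and_followups(mean_resp):
    """mean_resp: (n_subjects, 2, 2, 2) mean normalized responses over
    0-700 ms, dimensions ordered hemisphere (left/right) x visual field
    (LVF/RVF) x stimulus type (word/scrambled)."""
    n = mean_resp.shape[0]
    hemis, fields, stims = ['left', 'right'], ['LVF', 'RVF'], ['word', 'scrambled']
    rows = [{'participant': s, 'hemisphere': h, 'visual_field': f,
             'stimulus_type': k, 'response': mean_resp[s, i, j, m]}
            for s in range(n)
            for i, h in enumerate(hemis)
            for j, f in enumerate(fields)
            for m, k in enumerate(stims)]
    table = AnovaRM(pd.DataFrame(rows), depvar='response',
                    subject='participant',
                    within=['hemisphere', 'visual_field',
                            'stimulus_type']).fit()

    # Follow-up: RVF vs LVF within each hemisphere (collapsed over stimulus
    # type), Bonferroni-corrected across the two comparisons (alpha = .05).
    followups = {}
    for i, h in enumerate(hemis):
        rvf = mean_resp[:, i, 1, :].mean(axis=-1)
        lvf = mean_resp[:, i, 0, :].mean(axis=-1)
        t, p = ttest_rel(rvf, lvf)
        followups[h] = (t, min(p * 2, 1.0))
    return table, followups
```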
3.2.2.2. Precentral gyri. A third pair of virtual electrodes probed the evoked responses of the left and right precentral gyri. The left hemisphere virtual electrode was generated in the left precentral gyrus (BA 6) at the peak of the response to RVF words in the 0–200 ms time window (x = −40, y = 4, z = 22). An equivalent right hemisphere electrode site was identified by reversing the value of the x coordinate (x = 40, y = 4, z = 22). The results are shown in Fig. 7. Analysis of variance on the mean responses to words and scrambled words in the LVF or RVF (Fig. 7, top) found no significant main effects of hemisphere, visual field or stimulus type. There was a highly significant interaction between hemisphere and stimulus type [F(1, 13) = 26.16, MSe = 0.417, p < .001, partial η² = .668]. Bonferroni-corrected t-tests (α = .05) found that the left precentral site showed significantly stronger responses to words (mean = .041) than to scrambled words (mean = −.061) [t(13) = 2.72, p < .05], while the right precentral site showed significantly stronger responses to scrambled words (mean = .081) than to words (mean = .061) [t(13) = 5.56, p < .01]. The interaction between hemisphere and visual field was also significant [F(1, 13) = 11.42, MSe = 0.09, p < .01, partial η² = .486]. Bonferroni-corrected t-tests (α = .05) found that the left precentral site showed significantly stronger responses to RVF inputs (mean = .042) than to LVF inputs (mean = −.062) [t(13) = 2.27, p < .05], while the right precentral site showed no significant difference between LVF inputs (mean = .013) and RVF inputs (mean = .007) [t(13) = 0.13]. No other interactions approached significance. All 14 participants showed stronger responses to contralateral than ipsilateral words in the left precentral site. Twelve showed stronger responses to contralateral than ipsilateral words in the right precentral site. The event-related field plots to RVF words in the left precentral gyrus (Fig. 7, middle left panel) showed two periods when the response was elevated in comparison to the response to LVF words. The first period of difference ran from about 250 to 400 ms, with peaks at 279 and 320 ms. The second period of difference ran from about 450 to 650 ms, with peaks at 499 and 571 ms. The normalized amplitudes of these peaks varied from 1.3 to 1.7. The left precentral response to LVF words rose gradually, peaking at 507 and 569 ms, with normalized amplitudes of 1.2 and 1.0. The left precentral response to scrambled words failed to reach a value of 1.0 for either RVF or LVF inputs. The right precentral gyrus response to words (Fig. 7, middle right panel) showed a single, small peak for LVF words at 224 ms (normalized amplitude 1.1). The response to RVF words never exceeded 1.0. There was a slow response to scrambled words, with discernible peaks for LVF inputs at 501 and 528 ms (normalized amplitudes 1.2 and 1.5) but no peaks to RVF inputs with normalized amplitudes greater than 1.0. In summary, activity in the precentral gyri also appeared in the comparison of active and passive time windows as decreases in neural activity in the 10–20 Hz frequency band. In those analyses, only the left hemisphere showed a response, and only to RVF inputs. One virtual electrode was positioned at the site of the peak response in the left precentral gyrus to RVF words while another was positioned at its right hemisphere homologue. The left precentral site responded more strongly to words than to scrambled words, and more strongly to RVF than LVF stimuli.
Hence the left precentral site showed a relatively word-specific
evoked response which, like the left middle occipital and mid fusiform responses, also displayed a RVF advantage. The left precentral evoked response to RVF words was delayed relative to the responses at the more posterior sites, being maximal from 250 to 400 ms, and again from 450 to 650 ms. 4. General discussion Experiment 1 found a robust RVF advantage in word naming for the same words that were employed in the MEG study (Experiment 2). The RVF advantage in Experiment 1 was consistent across multiple presentations of the same items (Fig. 1). That stable latency difference is compatible with a structural account of the RVF advantage based on interhemispheric transfer of LVF information from the right hemisphere to the left (Ellis, 2004; Young & Ellis, 1985). Experiment 1 also showed that prior exposure to a small set of experimental words resulted in low error rates (average 1.6%). Participants in Experiment 2 were pre-trained on the stimulus words to maximize the probability that they would identify the target words correctly in the MEG scanner. Those words were presented six times in the LVF and six times in the RVF, randomly interleaved with an equal number of scrambled words. Participants were instructed to read the real words silently ‘in their heads’ and responded ‘‘pattern” covertly when they saw a scrambled word. The comparison of total (evoked and induced) power in moving active windows and fixed passive windows for different frequency bands revealed a pattern of neural responses to lateralized words that was similar to that reported for centrally-presented words in previous MEG studies that have used analyses based on beamforming (Cornelissen et al., 2009; Pammer, 2009; Pammer et al., 2004, 2006; Wheat et al., 2010), apart from the fact that, as one would expect, initial processing of lateralized words was confined to visual areas within the hemisphere to which the words were projected (Figs. 3 and 4). Over time, and in different frequency bands, responses could be seen in all components of the standardly-recognized ‘reading network’ (Dehaene et al., 2005; Jobard et al., 2003; Vigneau, Jobard, Mazoyer, & Tzourio-Mazoyer, 2005). Unlike in the fMRI studies of Cohen et al. (2000, 2002) and Ben-Shachar, Dougherty, Deutsch, et al. (2007), clear differences emerged here in the strength and timing of neural responses to LVF and RVF words – differences which, we believe, reflect the RVF advantages observed in a wide variety of lexical processing tasks. Those differences were consistent across participants at all of the sites that were probed using virtual electrodes. We will focus our discussion on the neural responses to real words. Responses to the scrambled word stimuli are discussed in Section 4.4. 4.1. Responses to words in speech-motor cortex The immediate cause of an RVF advantage in word naming like that seen in Experiment 1 is presumably the fact that the time interval between a word appearing on the screen and articulatory commands leaving speech-motor cortex in the left hemisphere is less for words that appear in the RVF than for words that appear in the LVF. We will begin our discussion of the MEG analyses by considering the results obtained for the left precentral gyrus, which responded more strongly to RVF words than to LVF words, and more strongly to words than to scrambled words. The comparison of neural activity in active and passive time windows (Fig. 5)
showed a significant response in the left precentral gyrus to RVF (contralateral) words that was visible in each active time window from 0–200 to 300–500 ms. The peak of that response (x = −40, y = 4, z = 22) lay squarely in that part of the motor strip that is concerned with the control of articulation
(Brown, Ngan, & Liotti, 2008; Price, 2000). It was close to the Rolandic site which Cohen et al. (2002) reported as showing a stronger fMRI response to words than consonant strings when those stimuli were presented in the RVF, but not when they were presented in the LVF, and close to the site in the left inferior frontal gyrus where Pammer et al. (2004), Cornelissen et al. (2009) and Wheat et al. (2010) found responses to centrally-presented words within the first 200 ms. The left precentral response to LVF (ipsilateral) words did not achieve significance in the comparison of active and passive windows. Activation of the left precentral gyrus has been reported in several PET and fMRI studies of central visual word processing (e.g., Carreiras, Mechelli, Estévez, & Price, 2007; Fiez & Petersen, 1998; Mechelli et al., 2003; Rumsey et al., 1997; Vigneau et al., 2005; Vinckier et al., 2007). Carreiras et al. (2007) and Rumsey et al. (1997) reported stronger left precentral gyrus responses in reading aloud than lexical decision, while Vigneau et al. (2005) found a stronger response to words than to nonwords. Price et al. (1996) observed similar left precentral gyrus activation in a PET study when participants repeated spoken words. Activation has been reported in covert reading of centrally-presented words (Dietz, Jones, Gareau, Zeffiro, & Eden, 2005) as well as in a task where production of the spoken name was delayed (Salmelin et al., 2000). Taken together, these observations indicate that speech-motor cortex is active during both overt and covert word production, whether that production is in response to visual or auditory input. The comparison between the power of the neuromagnetic responses in active and passive time windows is performed on unaveraged data, hence both evoked (phase-locked) and induced (non-phase-locked) activity contribute to the observed differences. The virtual electrode analyses, in contrast, focused on the evoked (phase-locked) responses. Event-related fields will only be detectable if different stimuli and different participants generate responses with similar time courses: responses with very different time courses will average out and be lost. We suspect that finding evoked responses with similar time courses across items and participants was helped by the fact that we used a small set of high-frequency words that were all of the same length, that participants were familiarized with the stimulus words before the start of the MEG experiment, and that the participants were all right-handed undergraduate students without any history of reading difficulties who were all native speakers of English. All of these factors are likely to have contributed to producing neural responses that followed similar time courses across both items and participants. Analysis of the evoked responses at the left precentral site (speech-motor cortex) supported the indications coming from the comparison of active and passive time windows: responses were stronger to RVF than LVF words, and stronger to real words than to scrambled words (Fig. 7). The event-related field plots showed elevated amplitudes in the response to RVF words from around 250 to 400 ms, and again from around 450 to 650 ms. In contrast, the left speech-motor response to LVF words rose more slowly and was weaker than the response to RVF words. The right precentral gyrus showed very little in the way of an evoked response to either LVF or RVF words.
The mean naming latencies for the same words in Experiment 1 were 483 ms for RVF presentations and 514 ms for LVF presentations. We suggest that the faster growth of a stronger left speech-motor cortex response to RVF than LVF words results in faster triggering of articulatory responses to RVF than LVF words, and therefore represents the proximal cause of the RVF advantage in word naming seen in Experiment 1 for the same word stimuli. The question that then needs to be addressed is at what point in processing the RVF advantage originates, and whether it is also present at the VWFA, as we have argued it should be.
4.2. Responses to words in the middle occipital gyri The speech-motor responses represent the final stage of processing before the production of a (covert) speech response. The earliest responses we analyzed were in the left and right middle occipital gyri (extrastriate visual cortex). Responses in those regions appeared in the first time window of the active–passive comparison (0–200 ms), taking the form of increases in oscillatory power in the 1–10 Hz frequency band (Fig. 3). Central presentation of words has been observed to produce responses in the same extrastriate areas in PET and fMRI studies (Carreiras et al., 2007; Fiez & Petersen, 1998; Jobard et al., 2003; Mechelli et al., 2003; Pugh et al., 1996), as well as in MEG studies (Cornelissen et al., 2003, 2009; Dhond et al., 2001; Kujala et al., 2007; Marinkovic et al., 2003; Pammer et al., 2004; Salmelin et al., 2000; Tarkiainen et al., 1999; see Pammer, 2009, for a review). One possibility widely considered is that these areas are involved in extracting visual features from letters and other complex visual stimuli, and are therefore involved in an early stage in the process of visual word recognition (Salmelin, 2007). Analysis of the event-related fields showed hemifield-dependent power changes in the middle occipital sites, with stronger responses to RVF than LVF words in the left hemisphere, and stronger responses to LVF than RVF words in the right (Fig. 4). Similar hemifield-dependent extrastriate responses were observed by Cohen et al. (2000, 2002) and Ben-Shachar, Dougherty, Deutsch, et al. (2007) in their fMRI studies of responses to laterally presented words. The present analyses tell us more, however, about the spatiotemporal evolution of these early visual responses. The comparison of active and passive time windows (Fig. 3) found the middle occipital responses to words to be largely contralateral in the 0–200 ms period. Two hundred milliseconds later, at 200–400 ms, the response was more or less independent of where the stimulus had first appeared. The intermediate 100–300 ms time window in Fig. 3 appears to show more spread of activation from the right to the left hemisphere in response to LVF words than from the left to the right hemisphere in response to RVF words (compare the two central panels of Fig. 3, especially the posterior views). Fig. 4 also shows a stronger evoked response to ipsilateral (LVF) words in the left middle occipital gyrus than to ipsilateral (RVF) words in the right middle occipital gyrus. Both analyses, which are based on the same beamforming data, suggest asymmetrical transfer of information between the two hemispheres in the early stages of visual word recognition, with greater transfer from the right to the left hemisphere than in the opposite direction over a period from about 100 to 350 ms (compare the red line in the middle left panel of Fig. 4 showing the left extrastriate response to LVF words with the blue line in the middle right panel showing the right extrastriate response to RVF words). The purpose of this asymmetrical transfer is presumably to gather visual information concerning written words into the left hemisphere for further analysis by left-lateralized language processes irrespective of where in space the words appeared (cf. Cohen et al., 2000, 2002). The use of beamforming and virtual electrode analyses has enabled us to visualize interhemispheric transfer of visual information for the first time and to demonstrate its asymmetric character. 4.3.
Responses to words in inferior occipitotemporal cortex, including the VWFA The left occipito-temporal response to RVF words in the 200–400 ms time window (Fig. 5) included the fMRI-defined peak for the VWFA while extending some 6 cm in the posterior–anterior direction. Similarly extensive inferior occipito-temporal responses
can be seen in the MEG studies of Cornelissen et al. (2009), Marinkovic et al. (2003) and Pammer et al. (2004). We note also that in Cohen et al.’s (2002) study, activation to RVF words extended 40 mm along the left fusiform gyrus in the posterior–anterior direction, and that Vinckier et al. (2007) observed changing patterns of responsivity along the length of the left fusiform gyrus to stimuli varying in wordlikeness (from false fonts through infrequent and frequent letters, bigrams and trigrams, to real words). The left and right occipito-temporal responses to words in Vinckier et al.’s (2007) fMRI study extended from y = −100 to y = −30. If, as proposed by Dehaene et al. (2005) and Vinckier et al. (2007), activation sweeps along a ventral occipitotemporal processing stream from extrastriate visual cortex to the VWFA and beyond, then the peak of any response will depend on the time period over which the moving response is aggregated as well as on the frequency band analyzed and the type of stimulus presented. The peak of the MEG response to RVF words in the 200–400 ms time window for 10–20 Hz (Fig. 5) was at x = −44, y = −76, z = −12, which lies within a region of the left fusiform gyrus which Vinckier et al. (2007) found to be maximally activated by words, but also activated to a degree by false fonts and nonwords. We note, though, that the location of such peaks depends on the frequency band and the length of the window chosen: when Cornelissen et al. (2009, Table 1) aggregated MEG responses to centrally-presented words in the 15–25 Hz frequency band over a long active window from 0 to 500 ms, they obtained a peak response in the left fusiform gyrus at x = −46, y = −57, z = −15, which coincides almost exactly with the fMRI-defined peak for the VWFA. The fusiform responses in Experiment 2 were more bilateral than in Cohen et al.’s (2000, 2002) fMRI studies. We note, however, that Ben-Shachar, Dougherty, Deutsch, et al. (2007) found no difference in the strength of the left and right mid fusiform responses to words, and that Vinckier et al. (2007) found extensive responses to centrally-presented words in both left and right occipitotemporal cortex. Bilateral activation of both left and right occipitotemporal cortex has also been observed in MEG studies of central word recognition (e.g., Cornelissen et al., 2003, 2009; Dhond et al., 2001; Pammer et al., 2004). The extent to which processing can be channeled selectively along either the left or the right ventral processing stream depending on the type of stimulus being analyzed remains to be determined. Virtual electrodes inserted at McCandliss et al.’s (2003) coordinates for the VWFA, and at its corresponding right hemisphere homologue, found a significant interaction between hemisphere and visual field over the period 0–700 ms. Importantly, VWFA responses were stronger to RVF words than to LVF words, beginning at around 80 ms and continuing to at least 375 ms, with peaks at 128, 237 and 307 ms. The response to LVF words was weaker, with a peak at 240 ms that was lower in amplitude than any of the peaks to RVF words. This evidence for stronger VWFA responses to RVF than LVF words stands in apparent contrast to the absence of visual field differences at the VWFA in the fMRI studies of Cohen et al. (2000, 2002) and Ben-Shachar, Dougherty, Deutsch, et al. (2007).
We noted in the Introduction, however, the existence of apparent trends in the direction of stronger responses to RVF than LVF words in the data from both of those studies, trends which may have failed to achieve significance because of the lack of statistical power arising from the use of small numbers of participants. We would not want to rule out the possibility that more sensitive fMRI studies with greater statistical power could find neural correlates of the RVF advantage at the VWFA, but we do feel that the much finer temporal resolution of MEG, combined with the ability provided by beamforming and virtual electrode methods to probe activity in specific cortical regions, means that MEG is particularly well suited to detecting such effects.
4.4. Responses to scrambled words Scrambled word stimuli (illustrated in Fig. 4) were incorporated in Experiment 2 in the hope of revealing differences in responses to lexical and non-alphabetic stimuli similar to those observed in fMRI studies (e.g., Ben-Shachar, Dougherty, Deutsch, et al., 2007; Cohen et al., 2000, 2002; see Jobard et al., 2003; Vigneau et al., 2005; Vinckier et al., 2007; for reviews). The results obtained for scrambled word stimuli were not entirely what we expected. The speech motor area in the left precentral gyrus showed a lexicality effect, being activated more strongly by words than by scrambled words. Thus, by the time processing within the left hemisphere reached speech-motor cortex, responses differentiated between familiar words (to which the required response was subvocal production of the spoken word-form) and scrambled words (to which the required response was the subvocal production of the word ‘pattern’). As noted above, the left speech-motor response was also stronger to RVF than LVF words. Given that participants were required to generate a verbal response to scrambled words as well as to real words, we are unsure regarding the interpretation that should be placed on the stronger response to scrambled words than real words that was observed in the right precentral gyrus between 400 and 700 ms (see Fig. 7). Posterior responses in the middle occipital gyri did not differentiate between words and scrambled words. As one would expect, both types of stimulus generated hemifield-dependent responses, with stronger activation in the hemisphere contralateral to a stimulus than in the hemisphere ipsilateral to it, particularly within the first 200–300 ms. The stronger overall responses within the left than the right middle occipital gyrus were, however, true of both real and scrambled words, as were the indications of asymmetric transfer of information between the hemispheres. Mid fusiform responses also failed to differentiate between words and scrambled words (while successfully differentiating between contralateral and ipsilateral stimuli). We can think of a number of reasons why the occipital and fusiform responses to scrambled words might have been more similar to the responses to real words than we had anticipated when we designed Experiment 2. In that experiment, participants were faced with a series of interleaved words and scrambled words which appeared in an apparently random order in left or right hemispace. The stimuli were positioned beyond the fovea and were presented for just 150 ms. Participants were under instructions to pronounce the words covertly as quickly and as accurately as possible. Under such circumstances, the initial response to every stimulus may have been to process it as a potential word, bringing the visual and orthographic components of the reading network to bear on that stimulus until it became clear that it was not a real word. The fact that the scrambled words contained fragments of lines and curves similar to those occurring in the real words may have exacerbated this tendency, which might have been less apparent if, for example, the comparison stimuli had been faces (cf. Cornelissen et al., 2009). It is also the case that we required covert naming responses (‘‘pattern”) to be made to the scrambled words.
In doing so, we may have created a short-lived class of visual objects which participants were required to categorize as exemplars of a new type of visual stimulus (scrambled words) in order to assign them all the same subvocal label. This may have induced a more ‘object-like’ pattern of neural responses than we had anticipated. We noted in the Introduction that Price and Devlin (2003), and Starrfelt and Gerlach (2007) obtained left fusiform responses in an object naming task which also peaked at the VWFA, and which led Starrfelt and Gerlach (2007) to propose that the left mid fusiform gyrus may not be a dedicated word recognition area but may have a wider responsibility for integrating shape elements into larger structures that would apply to object as well as to word recognition.
Tagamets et al. (2000) reported that left mid fusiform activation did not distinguish between false fonts, letter strings, pseudowords and words whereas right mid fusiform activation decreased progressively from false fonts to words. Numerically at least, our results follow a similar pattern for words and scrambled words (Fig. 6). Other studies have questioned the selectivity of the VWFA (e.g., Devlin et al., 2006; Reinke, Fernandes, Schwindt, O’Craven, & Grady, 2008; Xue & Poldrack, 2007) or have reported selectivity for written words in the left mid fusiform, but only for small areas whose precise location varies from participant to participant (e.g., Baker et al., 2007; Glezer et al., 2009). Such fine-grained resolution is beyond the scope of the present study because of the lower spatial resolution of MEG compared with fMRI, and because the virtual electrodes were positioned as far as possible in the same location in different participants. 4.5. Conclusions: the neural basis of the RVF advantage for visual word recognition The aim of this study was to gain a better understanding of the RVF advantage for visual word recognition seen in a wide range of different tasks. The earliest indications of processing that we examined were in occipital extrastriate visual cortex in the left and right middle occipital gyri. Both sites showed faster and stronger responses to contralateral than to ipsilateral stimuli. While the overall levels of responding to words and scrambled words, and LVF and RVF inputs, did not differ significantly, there were indications of asymmetric transfer of information between the hemispheres during the performance of this reading task. LVF words arrive first in right occipital cortex and must be transferred across the corpus callosum if they are to be registered by left extrastriate cortex and processed further within the left hemisphere. Although the response in the left middle occipital gyrus to LVF words was delayed in comparison with its response to RVF words, it was nevertheless stronger than the corresponding response of the right middle occipital gyrus to RVF words. It was as if knowledge that the stimuli required naming responses (reading the words covertly, and responding ‘‘pattern” to the scrambled words) caused visual information to be channeled into the left hemisphere more than the right. The faster, stronger response of the left middle occipital gyrus to RVF than LVF words continued within the fusiform gyrus where it was reflected in faster, stronger responses of the left mid fusiform gyrus (VWFA) to RVF than LVF words. On the basis of their fMRI results, Cohen et al. (2002, p. 1065) characterized the response of the VWFA as ‘‘invariant for the spatial location of stimuli” while McCandliss et al. (2003, p. 295) claimed that, ‘‘In the VWFA . . . fMRI responses to words vs control stimuli are equivalent across both visual fields”. We believe that those are over-statements. If written words are processed by the VWFA wherever they appear in space, then the VWFA should indeed respond to any word that is identified successfully, whether that word appears in the RVF or in the LVF. But if RVF words are responded to more quickly and more accurately than LVF words whether the task emphasizes visual, semantic or phonological processing (Bub & Lewine, 1988; Ellis et al., 1988, 2007a, 2009; Jordan et al., 2000; Lindell et al., 2002; Young & Ellis, 1985), then the RVF advantage should be reflected in the VWFA response, as demonstrated here.
The RVF superiority continued through to speech-motor cortex, and we take that superiority to be the neural reflection of the RVF advantage in word naming (as seen in Experiment 1). Acknowledgments Laura Barca and Andrew Ellis were members of the European Union Marie Curie Research Training Network on Language and
Brain (http://www.hull.ac.uk/RTN-LAB/). We thank Gary Green and the staff of the York Neuroimaging Centre for help and comments. References Baayen, R. H., Piepenbrock, R., & Van Rijn, H. (1993). The CELEX lexical database (CD-ROM). Philadelphia: University of Pennsylvania, Linguistic Data Consortium. Baker, C. I., Liu, J., Wald, L. L., Kwong, K. K., Benner, T., & Kanwisher, N. (2007). Visual word processing and experiential origins of functional selectivity in human extrastriate cortex. Proceedings of the National Academy of Sciences, 104, 9087–9092. Banich, M. T. (2003). The divided visual field technique in laterality and interhemispheric integration. In K. Hugdahl (Ed.), Experimental methods in neuropsychology (pp. 47–63). Boston: Kluwer. Barnes, G. R., & Hillebrand, A. (2003). Statistical flattening of MEG beamformer images. Human Brain Mapping, 18, 1–12. Barnes, G. R., Hillebrand, A., Fawcett, I. P., & Singh, K. D. (2004). Realistic spatial sampling for MEG beamformer images. Human Brain Mapping, 23, 120–127. Ben-Shachar, M., Dougherty, R. F., & Wandell, B. A. (2007). White matter pathways in reading. Current Opinion in Neurobiology, 17, 258–270. Ben-Shachar, M., Dougherty, R. F., Deutsch, G. K., & Wandell, B. A. (2007). Differential sensitivity to words and shapes in ventral occipito-temporal cortex. Cerebral Cortex, 17, 1604–1611. Bentin, S., Mouchetant-Rostaing, Y., Giard, M. H., Echallier, J. F., & Pernier, J. (1999). ERP manifestations of processing printed words at different psycholinguistic levels: Time course and scalp distribution. Journal of Cognitive Neuroscience, 11, 35–60. Binder, J. R., & Mohr, J. P. (1992). The topography of callosal reading pathways. Brain, 115, 1807–1826. Bourne, V. J. (2006). The divided visual field paradigm: Methodological considerations. Laterality, 11, 373–393. Bradshaw, G. J. (1980). Right hemisphere language: Familial and nonfamilial sinistrals, cognitive deficits and writing hand positions in sinistrals, and the concrete–abstract, imageable–nonimageable dimensions in word recognition: A review of interrelated issues. Brain and Language, 10, 172–188. Bradshaw, G. J., & Nettleton, N. C. (1983). Human cerebral asymmetry. Englewood Cliffs, NJ: Prentice-Hall. Brem, S., Halder, P., Bucher, K., Summers, P., Martin, E., & Brandeis, D. (2009). Tuning of the visual word processing system: Distinct developmental ERP and fMRI effects. Human Brain Mapping, 30, 1833–1844. Brookes, M. J., Gibson, A. M., Hall, S. D., Furlong, P. L., Barnes, G. R., & Hillebrand, A. (2005). GLM-beamformer method demonstrates stationary field, alpha ERD and gamma ERS co-localisation with fMRI BOLD response in visual cortex. Neuroimage, 26, 302–308. Brown, S., Ngan, E., & Liotti, M. (2008). A larynx area in the human motor cortex. Cerebral Cortex, 18, 837–845. Bryden, M. P., & Mondor, T. A. (1991). Attentional factors in visual field asymmetries. Canadian Journal of Psychology, 45, 427–447. Bub, D., & Lewine, J. (1988). Different modes of word recognition in the left and right visual fields. Brain and Language, 33, 161–188. Carreiras, M., Mechelli, A., Estévez, A., & Price, C. J. (2007). Brain activation for lexical decision and reading aloud: Two sides of the same coin? Journal of Cognitive Neuroscience, 19, 433–444. Cohen, L., Dehaene, S., Naccache, L., Lehéricy, S., Dehaene-Lambertz, G., & Hénaff, M. A. (2000). The visual word form area: Spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients.
Brain, 123, 291–307. Cohen, L., Lehéricy, S., Chochon, F., Lemer, C., Rivaud, S., & Dehaene, S. (2002). Language-specific tuning of visual cortex? Functional properties of the Visual Word Form Area. Brain, 125, 1054–1069. Cornelissen, P., Tarkiainen, A., Helenius, P., & Salmelin, R. (2003). Cortical effects of shifting letter position in letter strings of varying length. Journal of Cognitive Neuroscience, 15, 731–746. Cornelissen, P. L., Kringelbach, M. L., Ellis, A. W., Whitney, C., Holliday, I. A., & Hansen, P. C. (2009). Activation of the left inferior frontal gyrus in the first 200 msec of reading: Evidence from magnetoencephalography (MEG). PLoS ONE, 4(4), 1–13. e5359.
Dehaene, S., Cohen, L., Sigman, M., & Vinckier, F. (2005). The neural code for written words: A proposal. Trends in Cognitive Sciences, 9, 335–341. Dehaene, S., Jobert, A., Naccache, L., Ciuciu, P., Poline, J. B., Le Bihan, D., et al. (2004). Letter binding and invariant recognition of masked words. Psychological Science, 15, 307–313. Devlin, J. T., Jamison, H. L., Gonnerman, L. M., & Matthews, P. M. (2006). The role of the posterior fusiform gyrus in reading. Journal of Cognitive Neuroscience, 18, 911–922. Dhond, R. P., Buckner, R. L., Dale, A. M., Marinkovic, K., & Halgren, E. (2001). Spatiotemporal maps of brain activity underlying word generation and their modification during repetition priming. Journal of Neuroscience, 21, 3564–3571. Dietz, N. A., Jones, K. M., Gareau, L., Zeffiro, T. A., & Eden, G. F. (2005). Phonological decoding involves left fusiform gyrus. Human Brain Mapping, 26, 81–93. Ellis, A. W. (2004). Length, formats, neighbors, and the processing of words presented laterally or at fixation. Brain and Language, 88, 355–366.
Ellis, A. W., Ansorge, L., & Lavidor, M. (2007a). Words, hemispheres, and dissociable subsystems: Effects of exposure duration, case alternation, priming and continuity of form on word recognition in the left and right visual fields. Brain and Language, 101, 292–303. Ellis, A. W., Ansorge, L., & Lavidor, M. (2007b). Words, hemispheres, and processing mechanisms: A response to Marsolek and Deason. Brain and Language, 101, 308–312. Ellis, A. W., Burani, C., Izura, C., Bromiley, A., & Venneri, A. (2006). Traces of vocabulary acquisition in the brain: Evidence from covert object naming. Neuroimage, 33, 958–968. Ellis, A. W., Ferreira, R., Cathles-Hagen, P., Holt, K., Jarvis, L., & Barca, L. (2009). Word learning and the cerebral hemispheres: From serial to parallel processing of written words. Philosophical Transactions of the Royal Society of London Series B, 364, 3675–3696. Ellis, A. W., Young, A. W., & Anderson, C. (1988). Modes of visual word recognition in the left and right cerebral hemispheres. Brain and Language, 35, 254–273. Embick, D., Hackl, M., Schaeffer, J., Kelepir, M., & Marantz, A. (2001). A magnetoencephalographic component whose latency reflects lexical frequency. Cognitive Brain Research, 10, 345–348. Evans, A. C., Collins, D. L., Mills, S. R., Brown, E. D., & Kelly, R. L. (1993). 3D statistical neuroanatomical models from 305 MRI volumes. In Proceedings of the IEEE – Nuclear science symposium and medical imaging conference (pp. 1813–1817). Faust, M., Kravetz, S., & Babkoff, H. (1993). Hemisphere specialization or reading habits: Evidence from lexical decision research with Hebrew words and sentences. Brain and Language, 44, 254–263. Fiez, J. A., & Petersen, S. E. (1998). Neuroimaging studies of word reading. Proceedings of the National Academy of Sciences of the United States of America, 95, 914–921. Glezer, L. S., Jiang, X., & Riesenhuber, M. (2009). Evidence for highly selective neuronal tuning to whole words in the ‘‘visual word form area”. Neuron, 62, 199–204. Hagan, C. C., Woods, W., Johnson, S., Calder, A., Green, G. G. R., & Young, A. W. (2009). MEG demonstrates a supra-additive response to facial and vocal emotion in the right superior temporal sulcus. Proceedings of the National Academy of Sciences of the United States of America, 106, 20010–20015. Hall, S. D., Holliday, I. E., Hillebrand, A., Furlong, P. L., Singh, K. D., & Barnes, G. R. (2005). Distinct cortical response functions in striate and extra-striate regions of visual cortex revealed with magnetoencephalography. Clinical Neurophysiology, 116, 1716–1722. Hämäläinen, M. S., Hari, R., Ilmoniemi, R. J., & Lounasmaa, O. V. (1993). Magnetoencephalography – Theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics, 65, 413–497. Hellige, J. B. (1993). Hemispheric asymmetry: What’s right and what’s left? Cambridge, Mass: Harvard University Press. Hillebrand, A., Singh, K. D., Holliday, I. E., Furlong, P. L., & Barnes, G. R. (2005). A new approach to neuroimaging with magnetoencephalography. Human Brain Mapping, 25, 199–211. Huang, M.-X., Mosher, J. C., & Leahy, R. M. (1999). A sensor-weighted overlapping-sphere head model and exhaustive head model comparison for MEG. Physics in Medicine and Biology, 44, 423–440. Huang, M.-X., Shih, J. J., Lee, R. R., Harrington, D. L., Thoma, R. J., Weisend, M. P., et al. (2004). Commonalities and differences among vectorised beamformers in electromagnetic source imaging. Brain Topography, 16, 139–158. Hunter, Z. R., & Brysbaert, M. (2008).
Visual half field experiments are a good measure of cerebral language dominance if used properly. Neuropsychologia, 46, 316–325. Jenkinson, M., Pechaud, M., & Smith, S. (2005). BET2: MR-based estimation of brain, skull and scalp surfaces. In Paper presented at eleventh annual meeting of the organization for human brain mapping, Toronto, Canada, June 2005. Jobard, G., Crivello, F., & Tzourio-Mazoyer, N. (2003). Evaluation of the dual-route theory of reading: A metanalysis of 35 neuroimaging studies. Neuroimage, 20, 693–712. Jordan, T. R., Patching, G. R., & Milner, A. D. (2000). Lateralized word recognition: Assessing the role of hemispheric specialization, modes of lexical access, and perceptual asymmetry. Journal of Experimental Psychology: Human Perception and Performance, 26, 1192–1208. Kinsey, K., Anderson, S., Hadjipapas, A., Nevado, A., Hillebrand, A., & Holliday, I. (2009). Cortical oscillatory activity associated with the perception of illusory and real visual contours. International Journal of Psychophysiology, 73, 265–272. Knecht, S., Dräger, B., Deppe, M., Bobe, L., Lohmann, H., Floel, A., et al. (2000). Handedness and hemispheric language dominance in healthy humans. Brain, 123, 2512–2518. Kozinska, D., Carducci, F., & Nowinski, K. (2001). Automatic alignment of EEG/MEG data set. Clinical Neurophysiology, 112, 1553–1561. Kujala, J., Pammer, K., Cornelissen, P. L., Roebroeck, P., Formisano, E., & Salmelin, R. (2007). Phase coupling in a cerebro-cerebellar network at 8–13 Hz during reading. Cerebral Cortex, 17, 1476–1485. Lavidor, M., Ellis, A. W., & Pansky, A. (2002). Case alternation and length effects in the two cerebral hemispheres: A study of English and Hebrew. Brain and Cognition, 50, 257–271. Leahy, R. M., Mosher, J. C., Spencer, M. E., Huang, M. X., & Lewine, J. D. (1998). A study of dipole localization accuracy for MEG and EEG using a human skull phantom. Electroencephalography and Clinical Neurophysiology, 107, 159–173. Lee, L. C., Andrews, T. J., Johnson, S. J., Woods, W., Gouws, A., Green, G. G. R., et al. (2010). Neural responses to rigidly moving faces displaying shifts in social
attention investigated with fMRI and MEG. Neuropsychologia, 48, 477–490. Lindell, A. K., Nicholls, M. E. R., & Castles, A. (2002). The effect of word length on hemispheric word recognition: Evidence from unilateral and bilateral-redundant presentations. Brain and Cognition, 48, 447–452. Mainy, N., Jung, J., Baciu, M., Kahane, P., Schoendorff, B., Minotti, L., et al. (2008). Cortical dynamics of word recognition. Human Brain Mapping, 29, 1215–1230. Maratos, F. A., Anderson, S. J., Hillebrand, A., Singh, K. D., & Barnes, G. R. (2007). The spatial distribution and temporal dynamics of brain regions activated during the perception of object and non-object patterns. Neuroimage, 34, 371–383. Marinkovic, K., Dhond, R. P., Dale, A. M., Glessner, M., Carr, V., & Halgren, E. (2003). Spatiotemporal dynamics of modality-specific and supramodal word processing. Neuron, 38, 487–497. Marsolek, C. J. (2004). Abstractionist versus exemplar-based theories of visual word priming: A subsystems resolution. Quarterly Journal of Experimental Psychology Section A – Human Experimental Psychology, 57, 1233–1259. Marsolek, C. J., & Deason, R. G. (2007). Hemispheric asymmetries in visual word-form processing: Progress, conflict, and evaluating theories. Brain and Language, 103, 304–307. McCandliss, B. D., Cohen, L., & Dehaene, S. (2003). The visual word form area: Expertise for reading in the fusiform gyrus. Trends in Cognitive Sciences, 7, 293–299. McNab, F., Rippon, G., Hillebrand, A., Singh, K. D., & Swithenby, S. J. (2007). Semantic and phonological task-set priming and stimulus processing using magnetoencephalography. Neuropsychologia, 45, 1041–1054. Mechelli, A., Gorno-Tempini, M. L., & Price, C. (2003). Neuroimaging studies of word and pseudoword reading: Consistencies, inconsistencies, and limitations. Journal of Cognitive Neuroscience, 15, 260–271. Melamed, F., & Zaidel, E. (1993). Language and task effects on lateralized word recognition. Brain and Language, 45, 70–85. Millman, R. E., Prendergast, G., Kitterick, P. T., Woods, W. P., & Green, G. G. R. (2010). Spatiotemporal reconstruction of the auditory steady-state response to frequency modulation using magnetoencephalography. Neuroimage, 49, 745–758. Nazir, T. A. (2000). Traces of print along the visual pathway. In A. Kennedy, R. Radach, D. Heller, & J. Pynte (Eds.), Reading as a perceptual process (pp. 2–33). Amsterdam: Elsevier. Nazir, T. A. (2004). Reading habits, perceptual learning, and recognition of printed words. Brain and Language, 88, 294–311. Neuper, C., & Pfurtscheller, G. (2001). Event-related dynamics of cortical rhythms: Frequency-specific features and functional correlates. International Journal of Psychophysiology, 43, 41–58. Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 93–113. Pammer, K. (2009). What can MEG neuroimaging tell us about reading? Journal of Neurolinguistics, 22, 266–280. Pammer, K., Hansen, P. C., Holliday, I., & Cornelissen, P. (2006). Attention shifting and the role of the dorsal pathway in visual word recognition. Neuropsychologia, 44, 2926–2936. Pammer, K., Hansen, P. C., Kringelbach, M. L., Holliday, I., Barnes, G., Hillebrand, A., et al. (2004). Visual word recognition: The first half second. Neuroimage, 22, 1819–1825. Pfurtscheller, G., & Lopes da Silva, F. H. (1999). Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clinical Neurophysiology, 110, 1842–1857. Price, C. J.
(2000). The anatomy of language: Contributions from functional neuroimaging. Journal of Anatomy, 197, 335–359. Price, C. J., & Devlin, J. T. (2003). The myth of the visual word form area. Neuroimage, 19, 473–481. Price, C. J., Wise, R. J. S., Warburton, E., Moore, C. J., Howard, D., Patterson, K., et al. (1996). Hearing and saying: The functional anatomy of auditory word processing. Brain, 119, 919–931. Pugh, K. R., Shaywitz, B. A., Shaywitz, S. E., Constable, R. T., Skudlarski, P., Fulbright, R. K., et al. (1996). Cerebral organization of component processes in reading. Brain, 119, 1221–1238. Pylkkänen, L., & Marantz, A. (2003). Tracking the time course of word recognition with MEG. Trends in Cognitive Sciences, 7, 187–189. Reinke, K., Fernandes, M., Schwindt, G., O’Craven, K., & Grady, C. L. (2008). Functional specificity of the visual word form area: General activation for words and symbols but specific network activation for words. Brain and Language, 104, 180–189. Rumsey, J. M., Horwitz, B., Donohue, B. C., Nace, K., Masog, J. M., & Andreason, P. (1997). Phonological and orthographic components of word recognition. Brain, 120, 739–759. Salmelin, R. (2007). Clinical neurophysiology of language. Clinical Neurophysiology, 118, 237–254.
Salmelin, R., Schnitzler, A., Schmitz, F., & Freund, H.-J. (2000). Single word reading in developmental stutterers and fluent speakers. Brain, 123, 1184–1202.
Savitzky, A., & Golay, M. J. E. (1964). Smoothing and differentiation of data by simplified least squares procedures. Analytical Chemistry, 36, 1627–1639.
Schmuller, J., & Goodman, R. (1979). Bilateral tachistoscopic perception, handedness, and laterality. Brain and Language, 8, 81–91.
Schneider, W., Eschman, A., & Zuccolotto, A. (2002). E-Prime user’s guide. Pittsburgh: Psychology Software Tools.
Shillcock, R., Ellison, T. M., & Monaghan, P. (2000). Eye-fixation behavior, lexical storage and visual word recognition in a split processing model. Psychological Review, 107, 824–851.
Simpson, M. I. G., Barnes, G. R., Johnson, S. R., Hillebrand, A., Singh, K. D., & Green, G. G. R. (2009). MEG evidence that the central auditory system encodes multiple temporal cues. European Journal of Neuroscience, 30, 1183–1191.
Singh, K. D. (2006). Magnetoencephalography. In C. Senior, T. Russell, & M. S. Gazzaniga (Eds.), Methods in mind (pp. 291–326). Cambridge, MA: MIT Press.
Smith, S. M. (2002). Fast robust automated brain extraction. Human Brain Mapping, 17, 143–155.
Springer, S., & Deutsch, G. (1997). Left brain, right brain: Perspectives from cognitive neuroscience (5th ed.). New York: W.H. Freeman.
Starrfelt, R., & Gerlach, C. (2007). The visual what for area: Words and pictures in the left fusiform gyrus. Neuroimage, 35, 334–342.
Tagamets, M.-A., Novick, J. M., Chalmers, M. L., & Friedman, R. B. (2000). A parametric approach to orthographic processing in the brain: An fMRI study. Journal of Cognitive Neuroscience, 12, 281–297.
Tarkiainen, A., Helenius, P., Hansen, P. C., Cornelissen, P. L., & Salmelin, R. (1999). Dynamics of letter string perception in the human occipitotemporal cortex. Brain, 122, 2119–2131.
Van Veen, B. D., van Drongelen, W., Yuchtman, M., & Suzuki, A. (1997). Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Transactions on Biomedical Engineering, 44, 867–880.
Vigneau, M., Jobard, G., Mazoyer, B., & Tzourio-Mazoyer, N. (2005). Word and nonword reading: What role for the visual word form area? Neuroimage, 27, 694–705.
Vinckier, F., Dehaene, S., Jobert, A., Dubus, J. P., Sigman, M., & Cohen, L. (2007). Hierarchical coding of letter strings in the ventral stream: Dissecting the inner organization of the visual word-form system. Neuron, 55, 143–156.
Vrba, J., & Robinson, S. E. (2001). Signal processing in magnetoencephalography. Methods, 25, 249–271.
Weems, S. A., & Zaidel, E. (2005). Repetition priming within and between the two cerebral hemispheres. Brain and Language, 93, 298–307.
Wheat, K. L., Cornelissen, P., Frost, S. J., & Hansen, P. C. (2010). During visual word recognition phonology is accessed by 100 ms and may be mediated by a speech production code: Evidence from magnetoencephalography. Journal of Neuroscience, 30, 5229–5233.
White, M. J. (1969). Laterality differences in perception: A review. Psychological Bulletin, 72, 387–405.
Whitney, C. (2001). How the brain encodes the order of letters in a printed word: The SERIOL model and selective literature review. Psychonomic Bulletin and Review, 8, 221–243.
Whitney, C., & Cornelissen, P. (2008). SERIOL reading. Language and Cognitive Processes, 23, 143–164.
Wilson, T. W., Leuthold, A. C., Lewis, S. M., Georgopoulos, A. P., & Pardo, P. J. (2005). The time and space of lexicality: The neuromagnetic view. Experimental Brain Research, 162, 1–13.
Xue, G., & Poldrack, R. A. (2007). The neural substrates of visual perceptual learning of words: Implications for the visual word form area hypothesis. Journal of Cognitive Neuroscience, 19, 1643–1655.
Yamagishi, N., Goda, N., Callan, D. E., Anderson, S. J., & Kawato, M. (2005). Attentional shifts towards an expected visual target alter the level of alpha-band oscillatory activity in the human calcarine cortex. Cognitive Brain Research, 25, 799–809.
Young, A. W., Bion, P. J., & Ellis, A. W. (1980). Studies toward a model of laterality effects for picture and word naming. Brain and Language, 11, 54–65.
Young, A. W., & Ellis, A. W. (1985). Different methods of lexical access for words presented in the left and right visual fields. Brain and Language, 24, 326–358.
Young, A. W., Ellis, A. W., & Bion, P. J. (1984). Left hemisphere superiority for pronounceable nonwords, but not for unpronounceable letter strings. Brain and Language, 22, 14–25.
Zaidel, E., Clarke, J. M., & Suyenobu, B. (1990). Hemispheric independence: A paradigm case for cognitive neuroscience. In A. B. Scheibel & A. F. Wechsler (Eds.), Neurobiology of higher cognitive function (pp. 297–355). New York: Guilford Press.