Global precedence effect in audition and vision: Evidence for similar cognitive styles across modalities

Acta Psychologica 138 (2011) 329–335
Lucie Bouvet a,b,⁎, Stéphane Rousset a,b, Sylviane Valdois a,b,d, Sophie Donnadieu a,c

a Laboratoire de Psychologie et Neurocognition (UMR CNRS 5105), France
b Université Pierre Mendès France, BP 47, 38040 Grenoble Cedex 9, France
c Université de Savoie, BP 1104, 73011 Chambéry cedex, France
d Centre National de la Recherche Scientifique, France

Article history: Received 6 June 2011; Received in revised form 12 August 2011; Accepted 30 August 2011; Available online 23 September 2011

PsychInfo codes: 2323; 2326; 2340

Abstract

This study aimed to provide evidence for a Global Precedence Effect (GPE) in both the visual and auditory modalities. To parallel Navon's paradigm, a novel auditory task was designed in which hierarchical auditory stimuli were used to engage local and global processing. Participants were asked to process auditory and visual hierarchical patterns at the local or global level. In both modalities, a global-over-local advantage and a global interference on local processing were found. A further compelling result is a significant correlation between these effects across modalities. Evidence that the same participants exhibit a similar processing style across modalities strongly supports the idea of a cognitive style for processing information and of common processing principles in perception. © 2011 Elsevier B.V. All rights reserved.

Keywords: Global/local processing; Cognitive style; Vision; Audition; Hierarchically organized stimuli

1. Introduction

The comprehension of perceptual relations between the whole and its parts has always been challenging. A striking example of the complexity of whole–part (or global–local) relations is the painting "Autumn" by Giuseppe Arcimboldo (1573), which represents the portrait of a man composed of vegetables and fruits. As the portrait is recognizable at first sight, it takes longer to become aware that it is not composed of regular facial features. A large body of research has focused on the relations between global and local features. It has been suggested that this distinction may define a general perceptual function (Ivry & Robertson, 1998) that would account for the global and local processing distinction in all modalities. However, the question of global–local processing has mainly been addressed in the visual modality, and evidence is lacking that this dichotomy applies to other modalities as well. This paper directly addresses this issue by investigating whether the global–local distinction applies to both the visual and the auditory modalities and whether there is a common perceptual mechanism across modalities.

⁎ Corresponding author at: Laboratoire de Psychologie et Neurocognition (UMR CNRS 5105), Université Pierre Mendès France, BP 47, 38040 Grenoble Cedex 9, France. Tel.: +33 47650823564. E-mail address: [email protected] (L. Bouvet). doi:10.1016/j.actpsy.2011.08.004

When we perceive natural scenes, global and local information are semantically dependent: perceiving the forest can help one perceive the trees, and vice versa. To circumvent this methodological issue, Navon (1977) used hierarchical shapes in the visual modality to assess global and local processing independently and to investigate how these two processing levels interact. The global and local levels of processing were dissociated by arranging local elements to construct a global shape. The local elements and the global shape could be congruent or not (e.g., a big "S" composed of small "Ss" or small "Hs"). Navon reported, first, that participants were faster at identifying global shapes than local elements and, second, that they were disturbed by the identity of the global shape when asked to identify local elements in the incongruent condition. These findings illustrate the Global Precedence Effect (GPE). The observation of this effect leads to two conclusions: that global information is available sooner than local information, and that global processing is automatic: it cannot be avoided despite explicit instruction to focus attention at the local level. Although it is a robust effect, the GPE can be modulated or reversed by experimental conditions or stimulus characteristics (for a review, see Kimchi, 1992). For example, many features, such as the visual angle of the global form and local elements, the sparsity of elements, the duration of exposure, spatial uncertainty (Lamb & Robertson, 1988) and the nature of the stimuli (Poirel, Pineau & Mellet, 2006), are known to affect the GPE so


much so that an opposite effect, a local precedence, can sometimes be obtained. Parallel processing of global/local information, in which information is simultaneously processed at the two levels but global processing may be completed before local processing, has been proposed to explain the GPE (Navon, 1981). This conception of a temporal overlap between the two levels of analysis converges toward a more general model of the visual system, the iteration model, in which global information is re-injected into the visual system via a top-down process (Bullier, 2001). The iteration model is supported by neuroimaging and EEG data (Beaucousin et al., 2011; Peyrin et al., 2010). The GPE might then be due to the coarse-to-fine integration of global and local information, driven by visual spatial frequencies (Hubner, 1997; Hughes, Nozawa & Kitterle, 1996). Low spatial frequencies, providing global information, are processed by the dorsal visual pathway, whereas the ventral visual pathway processes high spatial frequencies, providing local information (Badcock, Whitworth, Badcock & Lovegrove, 1990). Because the dorsal visual pathway is faster than the ventral one, "global" information (the outcome of low and medium spatial frequencies) is available sooner than "local" information (the outcome of high spatial frequencies). This initial low-pass visual analysis can serve to refine subsequent processing of the high spatial frequencies conveyed by the parvocellular channel of the ventral visual pathway (Peyrin et al., 2010). Beyond these explanations of the GPE, it is noteworthy that there is a hemispheric specialization for global and local processing. Indeed, left brain-damaged patients primarily exhibit a deficit in identifying local elements, whereas right brain-damaged patients are impaired in identifying global forms (Lamb, Robertson & Knight, 1990).
Furthermore, neuroimaging studies on healthy participants demonstrated that the right hemisphere is more activated during global processing whereas the left hemisphere is more activated during local processing (Fink et al., 1996). More recently, it has been demonstrated that visual spatial frequencies (Hubner, 1997) are differentially processed by each hemisphere. Low spatial frequencies, providing global information, mainly rely on the right hemisphere, whereas high spatial frequencies, providing local information, mainly rely on the left hemisphere (Peyrin, Chauvin, Chokron & Marendaz, 2003). Thus, local and global visual elements seem to be processed by the left and right hemispheres respectively, owing to hemispheric specialization for spatial frequencies. The question has been raised whether the same global–local processing differentiation could be found in other modalities (Ivry & Robertson, 1998), with a focus on audition, which appears to be the direct counterpart of vision (List, Justus, Robertson & Bentin, 2007; Sanders & Poeppel, 2007). The global–local auditory assumption has typically been assessed by asking participants whether two unfamiliar melodies are identical or not. The different pairs are characterized by differences at either the global or the local level. The local level is defined by the intervals (the pitch distance between two notes), whereas the global level corresponds to the melodic contour, defined by the pitch direction between notes independently of the pitch values. The reasons why interval changes can be attributed to local processing while contour changes rely on global processing are threefold. First, intervals are small units embedded hierarchically in a larger unit, the contour. Second, it has been demonstrated that non-musicians are more prone to use the contour than the interval cue to discriminate melodies, which implies that for non-musicians melodic contour is a more salient cue for processing melodies than intervals (Peretz & Morais, 1987).
Third, these two musical features involve hemispheric specialization. Studies of brain-damaged patients revealed that patients with a left temporal lesion can perceive contour changes but not interval changes, whereas right temporal brain-damaged patients are impaired in processing both the contour and the intervals (Liegeois-Chauvel, Peretz, Babai, Laguitton, & Chauvel, 1998; Peretz, 1990). This observation provides evidence that the contour and interval features are hierarchically organized: interval information cannot be processed if contour information is not processed. The study of Liegeois-Chauvel et al. (1998) also revealed that the superior temporal

gyrus, and more particularly its posterior part, is necessary for melody processing. Taken together, these findings provide converging evidence that contour change detection involves global processing (right hemisphere) whereas interval change detection relies on local processing (left hemisphere). Furthermore, the hemispheric specialization in audition seems analogous to that in vision, the global structure being preferentially processed by the right hemisphere and the local elements by the left. However, the only fMRI study carried out on healthy participants failed to replicate the global–local processing distinction at the hemispheric level (Stewart, Overath, Warren, Foxton, & Griffiths, 2008). Contour processing was found to activate the left posterior superior temporal sulcus selectively, while interval processing activated the same region bilaterally. Although activation was enhanced for interval processing (thus supporting the hierarchical view of contour–interval information), the expected lateralization of processing for these elements was not observed. Furthermore, the global–local processing assumption in audition raises some methodological issues. Indeed, even if intervals and contour form a hierarchical structure, these two features cannot be manipulated independently (Justus & List, 2005). A melodic contour change always involves a modification of an interval; therefore, a global modification cannot be made without a concomitant local change. Hence, addressing global–local processing in audition through the manipulation of contour–interval features is questionable. To compensate for these limitations, recent studies have developed new sets of hierarchical auditory stimuli that parallel global–local stimuli in vision.
For example, based on evidence that the two fundamental features of auditory objects are frequency and time, Justus and List (2005) used high–low and slow–fast stimuli in which the two global–local dimensions can be manipulated independently. Using a divided-attention auditory task, they observed facilitation of target perception at one level (global or local) when the same level of processing was required on the previous trial: a priming effect very similar to that observed in vision (Robertson, 1996). A late mismatch negativity, which provides evidence for automatic discrimination of auditory object properties, was also observed for slow (global) stimuli (List et al., 2007). Sanders and Poeppel (2007), using an identification task with slow–fast stimuli (which inspired the auditory task we designed for the current study), reported an auditory GPE. Responses were more accurate and faster for global (slow) than for local (fast) stimuli, and perception of local elements was disturbed by the global form for incongruent stimuli. Overall, these auditory paradigms were designed to parallel the original paradigm of Navon (1977) in vision. However, to our knowledge, no study has ever directly compared global and local processing in vision and in audition within the same participants. To support a unified theory of perceptual processing, it is crucial to demonstrate that the same mechanisms are involved in global–local processing in both modalities. It is also necessary to demonstrate that the individuals who show a GPE in vision also show a GPE in audition. In the current study, two tasks were designed using the same identification paradigm, which manipulated global and local processing in vision and in audition. The visual task used the classical global–local hierarchical stimuli of Navon (1977).
The auditory task used the fast–slow stimuli designed by Justus and List (2005) within the identification task proposed by Sanders and Poeppel (2007). The choice of these stimuli was motivated by evidence that time is a better auditory counterpart to visual spatial frequency than auditory frequency is. If the same mechanisms underlie auditory and visual processing, a GPE should be observed in both audition and vision, and a correlation between the two effects was expected.

2. Method

2.1. Participants

Twenty right-handed non-musician young adults (seven males) from the Grenoble urban community participated in this study.


They were 26 years old on average (SD = 3.5) and had normal or corrected-to-normal vision and normal audition.

2.2. Material

2.2.1. Visual stimuli

Four hierarchical figures were designed. They were composed of a large geometrical shape (circle or square) made of twenty smaller shapes (circles or squares) (see Fig. 1). Each global shape was 49.5 mm by 49.5 mm (4.67 degrees of visual angle at 60 cm); the smaller shapes were 5 mm wide/high (0.47 degrees), spaced by 3 to 5 mm (.28 to .47 degrees). The global and local forms could be either congruent [Global square/Local square (GsLs) or Global circle/Local circle (GcLc)] or incongruent [Global square/Local circle (GsLc) or Global circle/Local square (GcLs)].

2.2.2. Auditory stimuli

Four melodic sequences of nine harmonic complex tones ranging in fundamental frequency from 262 Hz to 659 Hz (C4-D4-E4-F#4-G4-A#4-C5-D5-E5) were created using the Finale software with a piano timbre (see Fig. 1). Each nine-tone sequence (melody) was composed of one three-tone sequence (triplet) repeated three times. Each note lasted 210 ms, so that the local item (triplet) was 630 ms long and the global item (melody) 1900 ms long. The direction across the three triplets gave the global direction, and the direction of the notes within each triplet gave the local one; each could either rise or fall. The global and local directions were either congruent [Global rising/Local rising (GrLr) or Global falling/Local falling (GfLf)] or incongruent [Global falling/Local rising (GfLr) or Global rising/Local falling (GrLf)].

2.3. General procedure

Each participant was administered four experimental blocks: global vision, local vision, global audition, local audition. Each block held 60 trials in which the four combinations of global–local stimuli were repeated 15 times. Trials were pseudo-randomly mixed so that the same stimulus was never presented twice in a row. Before each block, there was a training session of 8 trials (2 trials per stimulus) for which feedback was given. The order of blocks was counter-balanced across participants. Participants were asked to focus on either the global or the local form, which was either a circle or a square in the vision blocks and a rising or falling sequence in the audition blocks. Each trial began with a central fixation cross or the word "listen" presented for 1000 ms. Then, the visual stimulus was presented centrally for 150 ms, or the auditory melody was presented via headphones. The interval between two trials was 1500 ms. Participants were told to respond by pushing a red button if the target was a circle or a rising sequence and a green button if it was a square or a falling sequence. The color of the response buttons was counter-balanced across participants. Stimuli were presented on a Dell 1901FP screen; the experiment was controlled by E-Prime software running on a Dell GX280 PC. Auditory stimuli were presented via Sennheiser HD 212 Pro headphones.

3. Results

Two participants, who responded randomly during the auditory task, were excluded from the analyses. For each modality, a 2 × 2 analysis of variance was carried out with processing level (global, local) and congruency (congruent, incongruent) as within-subjects variables. Analyses were performed on reaction times (RTs) and accuracy (%). Errors and training trials were excluded from the RT analysis, as were trials exceeding three standard deviations of each participant's mean. Results are presented in Fig. 2.

3.1. Visual modality

Analysis of accuracy revealed a ceiling effect. Performance was equivalent whether targets were the large (M = 97.31%, SD = 3.7%) or small shapes (M = 96.85%, SD = 5.2%), F(1,17) = 0.25, p = .62. A main congruency effect was observed, with targets better identified in the congruent than in the incongruent condition, F(1,17) = 11.69, p < .005, ηp² = .41. No significant processing level by congruency interaction was found, F(1,17) = 1.31, p = .26. The analysis of RTs revealed a global processing advantage in vision, F(1,17) = 50.69, p < .001, ηp² = .73: the large shapes (M = 290 ms, SD = 95 ms) were identified faster than the small shapes (M = 377 ms, SD = 115 ms). A significant congruency effect was also observed, F(1,17) = 34.25, p < .001, ηp² = .66: shapes were processed faster in the congruent condition (M = 316 ms, SD = 106 ms) than in the incongruent condition (M = 351 ms, SD = 120 ms). The congruency by processing level interaction was significant, F(1,17) = 28.11, p < .001, ηp² = .62. Small shape processing was slowed when local shapes were embedded in incongruent configurations, F(1,17) = 39.77, p < .001, ηp² = .70, thus revealing a global interference. In contrast, processing at the global level was not affected by congruency (p > .15).
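As a concrete illustration, the hierarchical auditory stimuli described in Section 2.2.2 can be reconstructed schematically: a nine-note melody whose local (within-triplet) and global (across-triplet) directions are set independently. This is a minimal sketch of the stimulus logic, not the authors' stimulus-generation code; only the scale and durations are taken from the paper.

```python
# Schematic reconstruction of the hierarchical auditory stimuli:
# nine-tone melodies built from a three-tone triplet repeated three times,
# with independent local (within-triplet) and global (across-triplet) directions.
# Each note lasted 210 ms (triplet: 630 ms) in the original study.
SCALE = ["C4", "D4", "E4", "F#4", "G4", "A#4", "C5", "D5", "E5"]  # 262-659 Hz

def melody(global_rising, local_rising):
    """Return the nine-note sequence for one congruency condition."""
    triplets = [SCALE[0:3], SCALE[3:6], SCALE[6:9]]   # low, mid, high register
    if not global_rising:
        triplets = triplets[::-1]                     # falling across triplets
    notes = []
    for triplet in triplets:
        notes += triplet if local_rising else triplet[::-1]
    return notes

GrLr = melody(True, True)    # congruent: rising melody of rising triplets
GfLr = melody(False, True)   # incongruent: falling melody of rising triplets
```

A congruent rising stimulus is simply the scale in order, while the incongruent conditions cross the two directions, paralleling the congruent/incongruent shape combinations of the visual task.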

3.2. Auditory modality

The analysis revealed that participants identified the direction of the global melody (M = 95.15%, SD = 8.78%) more accurately than that of the local triplets (M = 74.91%, SD = 30.4%), F(1,17) = 19.86, p < .001, ηp² = .53. They also performed better in the congruent (M = 96%, SD = 6.8%) than in the incongruent condition (M = 55%, SD = 30.1%), F(1,17) = 46.16, p < .001, ηp² = .73. Triplet and melody directions were processed differently according to congruency, F(1,17) = 16.31, p < .001, ηp² = .48. As evidence of global interference, participants were worse at identifying triplet directions when these were embedded in incongruent melodies, F(1,17) = 29.24, p < .005, ηp² = .63. However, the identification of global melody directions also decreased when the melody was made of incongruent triplets, F(1,17) = 12.16, p < .005, ηp² = .41, thus revealing a local interference. One participant was excluded from the RT analysis, as his reaction times in the local condition were more than 3 SD slower than the group's average performance. Participants were faster at identifying global melodies (M = 419 ms, SD = 163 ms) than local triplets (M = 975 ms, SD = 593 ms), F(1,16) = 24.91, p < .001, ηp² = .61. They responded faster in the congruent condition (M = 453 ms, SD = 198 ms) than in the incongruent condition (M = 940 ms, SD = 575 ms), F(1,16) = 25.58, p < .001, ηp² = .61. There was a significant interaction between congruency and level of processing, F(1,16) = 11.92, p < .001, ηp² = .42. Global interference was found: participants were slower at identifying triplet directions when these were embedded in incongruent melodies, F(1,16) = 21.43, p < .005, ηp² = .57. Local interference was also observed: incongruent triplet directions interfered with global melody identification, F(1,16) = 10.64, p < .005, ηp² = .4.

Fig. 1. Visual and auditory stimuli in congruent (A) and incongruent (B) conditions.

Fig. 2. Mean reaction times and accuracy performance in vision and in audition for congruent–incongruent stimuli in global and local blocks. Error bars represent standard errors; * = p < .01.

3.3. Inter-modality correlation analyses

To investigate whether similar processing was involved in vision and in audition, Pearson's correlations were carried out. We investigated correlations between (a) the global advantage, (b) the congruency by processing level interaction and (c) the interference effects. For each type of correlation, two results are provided: one for correlations between RTs in vision and in audition, the other for correlations between RTs in vision and accuracy in audition. In vision, only correlations with RTs are provided because of the ceiling effect on accuracy. In audition, correlations on accuracy are provided as a more meaningful measure of performance than RTs.
Indeed, contrary to vision, auditory information is processed sequentially, and participants had to wait until the end of the sequence before providing an answer. RTs in audition are therefore not a relevant measure of when participants perceived the rising or falling patterns. Nevertheless, results on RTs in audition are also provided, as they parallel those observed on accuracy. Correlations are presented in Fig. 3. First, as a global advantage was observed in audition and in vision at the group level, we explored whether the same participants showed this global advantage in both modalities. For this purpose, we computed new measures by subtracting RTs and accuracy scores between the global and local conditions, in vision and in audition. Results indicate that the individuals who were faster at identifying a global form were also faster, r(15) = .549, p < .03, and more accurate, r(16) = −.464, p = .052, at identifying a global melody. Second, we investigated the stability of the processing level by congruency interaction effect across participants and across modalities. The interaction effect was computed for each modality and each dependent measure as follows: global incongruent − global congruent − local incongruent + local congruent. We observed that the participants who were more sensitive to the interference effect in local than in global conditions in vision showed the same pattern in audition. The correlation between RTs in audition and in vision was marginally significant, r(15) = .41, p = .096, and a significant correlation was observed between accuracy in audition and RTs in vision, r(16) = −.55, p < .05. Third, global-to-local interference (local congruent − local incongruent) and local-to-global interference (global congruent − global incongruent) were further computed to investigate whether the same individuals were sensitive to these specific interference effects in both modalities. Correlations between global-to-local interference in audition and in vision were not significant (RT vision and RT audition: r(15) = −.17, p = .51; RT vision and accuracy audition: r(16) = −.376, p = .11). These non-significant correlations are not surprising given the variability of performance in the local auditory condition, probably due to the at-chance performance of some participants.
However, we observed that the individuals who showed an interference of the global shapes on the identification of local elements (global-to-local interference) in vision showed no interference of local triplets on the identification of global melodies (local-to-global interference) in audition.
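The per-participant indices entering these correlation analyses (global-advantage difference scores, the interaction term, and the Pearson coefficient) can be expressed compactly. This is an illustrative sketch of the formulas stated above, with toy numbers; it is not the authors' analysis code, and the sign convention for the difference scores is an assumption.

```python
import numpy as np

def global_advantage(local, glob):
    """Global precedence index per participant: global minus local score.
    For RTs, negative values indicate faster global responses (assumed sign)."""
    return np.asarray(glob) - np.asarray(local)

def interaction(gc, gi, lc, li):
    """Processing level x congruency interaction, as defined in the text:
    global incongruent - global congruent - local incongruent + local congruent."""
    return (np.asarray(gi) - np.asarray(gc)) - (np.asarray(li) - np.asarray(lc))

def pearson_r(x, y):
    """Pearson correlation between two per-participant indices."""
    return float(np.corrcoef(x, y)[0, 1])

# Toy example: correlate visual and auditory global-advantage indices (RTs, ms)
vis = global_advantage([377, 360, 400], [290, 300, 310])
aud = global_advantage([975, 900, 1100], [419, 430, 450])
r = pearson_r(vis, aud)
```

A positive r here would mean, as in the study, that participants with a larger global RT advantage in one modality also tend to show one in the other.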

Fig. 3. Correlations between vision and audition for (A) global precedence, (B) interaction effect and (C) interference effect. Correlations are presented between RTs in vision and audition (left) and between RTs in vision and accuracy in audition (right). * = p < .05.

Spearman correlations were performed, showing that accuracy in audition and RTs in vision were significantly correlated, rs(16) = .624, p < .01. The correlation between RTs in vision and in audition was not significant, rs(15) = −.287, p = .26.

4. Discussion

4.1. Evidence for a similar GPE in vision and audition

In this paper, we aimed to demonstrate a GPE in both vision and audition that relies on a common mechanism. To do so, we used an original set of auditory hierarchical stimuli (Justus & List, 2005) to parallel Navon's (1977) paradigm in vision. These stimuli enabled us to assess local and global processing independently in the auditory modality, in contrast to previous studies using the classic contour–interval manipulation (Peretz, 1990). We administered the auditory and visual identification tasks to the same participants. Results showed a

global-over-local advantage in the auditory modality as in the visual modality. Melodic directions were identified more accurately and faster than triplet directions in audition, and large visual configurations were identified more quickly than their constitutive small shapes in vision. Global interference effects were also observed in both modalities: triplet directions were detected less effectively when embedded in an incongruent melody, and small visual shapes were identified more slowly when belonging to an incongruent configuration. Taken together, these findings provide evidence for a GPE in both the visual and the auditory modality. Moreover, the GPE was found in audition on both dependent measures, reaction time and accuracy. The second main finding of our study was that significant correlations were found between modalities. Participants who showed a global precedence effect in one modality demonstrated a similar global precedence effect in the other modality. Furthermore, the level of processing (global vs. local) by congruency (congruent vs. incongruent) interaction effect


was found to be stable across modalities. Lastly, and well in line with our hypothesis of similar mechanisms in vision and audition, the individuals who were sensitive to the global-to-local interference effect in vision (the incongruent global form interferes with the identification of the local one) were less sensitive to the local-to-global interference in audition (the incongruent local triplets interfere with the identification of the global melody). This latter correlation is quite noteworthy, considering that it is observed within the global auditory conditions, where mean accuracy is above chance level. The fact that an opposite correlation is observed between the interference of local triplets on the identification of the global melody in audition and the interference of the global form on the identification of local elements in vision strongly suggests that participants process information the same way in audition and in vision. Indeed, despite the significant local interference reported in audition, this finding indicates that some participants process information globally whatever the modality. As a consequence, these participants are disturbed by the global shape when asked to identify local shapes, but they are not disturbed by local triplets when asked to identify global melodies. Conversely, other participants process information locally: they are not disturbed by the global form when asked to identify local elements in vision, but they are disturbed by local triplets when asked to identify global melodies. These findings strongly suggest that similar processes are involved in vision and audition for the identification of hierarchical stimuli. They further suggest that each individual is more or less sensitive to global or local interference and that there are inter-individual differences in the way perceptual information is processed. Thus, these findings argue for the existence of distinct cognitive styles among people.

4.2. Similar cognitive styles across modalities

While all human minds share common cognitive abilities, a cognitive style is defined as a stable individual approach to solving simple cognitive tasks, related to both intelligence and personality (Kozhevnikov, 2007). One classical way to classify cognitive styles is along the analytical–holistic dimension (Allinson & Hayes, 1996). This is typically what we observe in this study: some participants are more dependent upon global information (holistic style), while others are more independent from global information (analytical style). These different cognitive styles can also be understood as different degrees of dependence on top-down processing. As stated in the Introduction, the iteration model postulates that global information is re-injected via top-down processing during the construction of the visual percept (Peyrin et al., 2010). Some authors argue that common neural mechanisms control attention and influence perception in audition and in vision (Shinn-Cunningham, 2008). Thus, people who show global interference in both modalities would be more dependent upon this top-down process. Accordingly, their perception would be more under the influence of the global percept. Conversely, people who show local interference in both modalities (sensitivity to local information) would be less dependent upon the top-down process; their perception would be more sensitive to the properties of the stimuli, thus to the local percept. This question of cognitive styles is quite relevant considering that some populations with a developmental disorder, such as autism or Williams syndrome, seem to process visual information with a local bias (Rondan et al., 2008; Wang et al., 2007). Moreover, the independence of top-down and bottom-up processes has been proposed as a key characteristic of perception in autism (Mottron et al., 2006). The question has been raised whether these populations also show a local bias when processing auditory information, using a contour–interval paradigm (Deruelle et al., 2005; Foxton et al., 2003; Mottron et al., 2000). As argued in the Introduction, in light of the methodological issues of this paradigm, the conclusions drawn from the contour–interval paradigm are questionable. Thus, the paradigm developed here seems better suited to address the question of cognitive style in pathological populations.

4.3. Auditory local interference

A local interference was found in the auditory but not in the visual modality. One possible account for the absence of local interference in vision is methodological. As mentioned in the Introduction, the global precedence effect is an experimental effect that depends on stimulus characteristics and experimental conditions. In the current study, the visual experiment was designed a priori to induce a global interference, so that the size and presentation of the stimuli, as well as the sparsity of the elements, were selected for this purpose. Conversely, little is known about the mechanisms that can create global and local interference in audition, as only a few studies have addressed this question (Justus & List, 2005; List et al., 2007; Sanders & Poeppel, 2007).

The local precedence effect reported here in the auditory modality might likewise have followed from some a priori methodological choice, as for the global precedence effect in vision. For example, stimulus length or the amount of global pitch change might have induced local processing. Indeed, Sanders and Poeppel (2007), whose paradigm inspired the present study, used different types of stimuli from ours and reported an auditory local interference only under specific conditions. In their study, three local FM sweeps of 40 ms composed a global sequence of 500 ms. They observed an auditory local interference in RTs when the amount of global pitch change was 1/2 octave, but not when it was 1 octave. In our study, the amount of global pitch change was a little more than 1 octave (do4 to E5). The apparent dichotomy of results can also be explained by the stimuli themselves: in our study, the local elements were composed of 3 tones, whereas in Sanders and Poeppel's study, a single FM pure tone composed each local element.
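As a quick sanity check on the "a little more than 1 octave" figure, the interval can be recomputed from standard equal-tempered frequencies. This sketch assumes do4 denotes C4 and uses the conventional A4 = 440 Hz tuning; the frequency values are standard references, not taken from the article's stimulus files:

```python
import math

# Equal-tempered frequencies relative to A4 = 440 Hz (standard values,
# assumed here rather than taken from the study's stimuli).
C4 = 440.0 * 2 ** (-9 / 12)   # do4, ~261.63 Hz (9 semitones below A4)
E5 = 440.0 * 2 ** (7 / 12)    # ~659.26 Hz (7 semitones above A4)

# Interval size in octaves is log2 of the frequency ratio.
octaves = math.log2(E5 / C4)
print(round(octaves, 2))  # 1.33
```

Since C4 to E5 spans 16 semitones, the interval is 16/12 ≈ 1.33 octaves, consistent with the "a little more than 1 octave" pitch change reported above.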
The silence between the tones of our local stimuli might have made them more salient and thus induced the local interference we found in the auditory task.

Another methodological account of the local interference could be novelty. Indeed, some studies have reported a reduction of the global bias when hierarchical forms were made of non-objects or novel objects (Harrison & Stiles, 2009; Poirel et al., 2006). In the present auditory task, participants were asked to identify whether the global melody or the local triplets were going up or down, which is a less familiar task than identifying a familiar form (a circle or a square) as in the visual modality.

Another possible explanation of this auditory local interference is attentional. Since the auditory task was more difficult than the visual task (as reflected by the accuracy scores in audition), more attention was needed to perform it. Indeed, some studies have shown that an extra attentional load can induce either a global or a local interference in vision (Shedden & Reid, 2001). Such a phenomenon might have been at play here and explain our results in the auditory modality. To conclude, methodological and attentional issues might account for the local interference observed in audition; further work is needed to clarify this point.

4.4. The GPE: A general perceptual process?

Within the framework of the Double Filtering by Frequency theory, Ivry and Robertson (1998) argued that global and local processing could reflect a general perceptual mode of information processing, which relies on hemispheric asymmetry. They argued that the same asymmetry as observed in vision for spatial frequency should also be observed in audition (i.e., global information processed by the right hemisphere and local information by the left hemisphere).
However, the studies that used fast–slow stimuli as local–global stimuli in audition failed to show the expected lateralization in MMN measures (List et al., 2007; Sanders & Poeppel, 2007), although this apparent lack of lateralization could itself be explained by methodological choices (Sanders & Poeppel, 2007). There is nevertheless some behavioral support for common perceptual mechanisms across sensory modalities. For example, some studies have explored global and local processing in the haptic system, showing higher
sensitivity to detect differences in the global form and a lower ability to detect local features such as depth or curvature (Norman, Norman, Clayton, Lianekhammy, & Zielke, 2004; Phillips, Egan, & Perry, 2009). Moreover, it appears that during object exploration, the haptic system relies more on local information at the beginning of the exploration but later relies on both local and global features (Lakatos & Marks, 1999). Also, Förster (2011) reported cross-modal influences of global and local processing between all sensory modalities. In addition, Pressnitzer and Hupé (2006) found that the auditory and visual systems perceive ambiguous stimuli in a similar way. However, while there is some evidence of common perceptual mechanisms across modalities, more evidence is needed to establish the stability of perception across modalities. To our knowledge, this study provides the first evidence of intra-personal stability in the processing of information across modalities.

Acknowledgments

We warmly thank Nicolas Poirel and Lorenza S. Colzato, whose comments largely improved the quality of this manuscript. We further thank Adeline Paignon for her help in the design of the experiments.

References

Allison, J., & Hayes, C. (1996). The Cognitive Style Index, a measure of intuition-analysis for organizational research. Journal of Management Studies, 33, 119–135.
Badcock, J. C., Whitworth, F. A., Badcock, D. R., & Lovegrove, W. J. (1990). Low-frequency filtering and the processing of local–global stimuli. Perception, 19(5), 617–629.
Beaucousin, V., Cassotti, M., Simon, G., Pineau, A., Kostova, M., & Houdé, O. (2011). ERP evidence of a meaningfulness impact on visual global/local processing: When meaning captures attention. Neuropsychologia, 49(5), 1258–1266, doi:10.1016/j.neuropsychologia.2011.01.039.
Bullier, J. (2001). Integrated model of visual processing. Brain Research Reviews, 36(2–3), 96–107.
Deruelle, C., Schon, D., Rondan, C., & Mancini, J. (2005). Global and local music perception in children with Williams syndrome. Neuroreport, 16(6), 631–634.
Fink, G. R., Halligan, P. W., Marshall, J. C., Frith, C. D., Frackowiak, R. S., & Dolan, R. J. (1996). Where in the brain does visual attention select the forest and the trees? Nature, 382(6592), 626–628, doi:10.1038/382626a0.
Förster, J. (2011). Local and global cross-modal influences between vision and hearing, tasting, smelling, or touching. Journal of Experimental Psychology: General, doi:10.1037/a0023175.
Foxton, J. M., Stewart, M. E., Barnard, L., Rodgers, J., Young, A. H., O'Brien, G., & Griffiths, T. D. (2003). Absence of auditory “global interference” in autism. Brain, 126(Pt 12), 2703–2709.
Harrison, T. B., & Stiles, J. (2009). Hierarchical forms processing in adults and children. Journal of Experimental Child Psychology, 103(2), 222–240, doi:10.1016/j.jecp.2008.09.004.
Hübner, R. (1997). The effect of spatial frequency on global precedence and hemispheric differences. Perception & Psychophysics, 59(2), 187–201.
Hughes, H. C., Nozawa, G., & Kitterle, F. (1996). Global precedence, spatial frequency channels, and the statistics of natural images. Journal of Cognitive Neuroscience, 8(3), 197–230.
Ivry, R. B., & Robertson, L. C. (1998). The two sides of perception. Cambridge: MIT Press.
Justus, T., & List, A. (2005). Auditory attention to frequency and time: An analogy to visual local–global stimuli. Cognition, 98(1), 31–51.
Kimchi, R. (1992). Primacy of wholistic processing and global/local paradigm: A critical review. Psychological Bulletin, 112(1), 24–38, doi:10.1037/0033-2909.112.1.24.
Kozhevnikov, M. (2007). Cognitive styles in the context of modern psychology: Toward an integrated framework of cognitive style. Psychological Bulletin, 133(3), 464–481, doi:10.1037/0033-2909.133.3.464.
Lakatos, S., & Marks, L. E. (1999). Haptic form perception: Relative salience of local and global features. Perception & Psychophysics, 61(5), 895–908.


Lamb, M. R., & Robertson, L. C. (1988). The processing of hierarchical stimuli: Effects of retinal locus, locational uncertainty, and stimulus identity. Perception & Psychophysics, 44(2), 172–181.
Lamb, M. R., Robertson, L. C., & Knight, R. T. (1990). Component mechanisms underlying the processing of hierarchically organized patterns: Inferences from patients with unilateral cortical lesions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(3), 471–483.
Liegeois-Chauvel, C., Peretz, I., Babai, M., Laguitton, V., & Chauvel, P. (1998). Contribution of different cortical areas in the temporal lobes to music processing. Brain, 121(10), 1853–1867.
List, A., Justus, T., Robertson, L. C., & Bentin, S. (2007). A mismatch negativity study of local–global auditory processing. Brain Research, 1153, 122–133.
Mottron, L., Peretz, I., & Menard, E. (2000). Local and global processing of music in high-functioning persons with autism: Beyond central coherence? Journal of Child Psychology and Psychiatry, 41(8), 1057–1065.
Mottron, L., Dawson, M., Soulieres, I., Hubert, B., & Burack, J. (2006). Enhanced perceptual functioning in autism: An update, and eight principles of autistic perception. Journal of Autism and Developmental Disorders, 36(1), 27–43.
Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353–383.
Navon, D. (1981). The forest revisited: More on global precedence. Psychological Research, 43(1), 1–32, doi:10.1007/BF00309635.
Norman, J. F., Norman, H. F., Clayton, A. M., Lianekhammy, J., & Zielke, G. (2004). The visual and haptic perception of natural object shape. Perception & Psychophysics, 66(2), 342–351.
Peretz, I. (1990). Processing of local and global musical information by unilateral brain-damaged patients. Brain, 113(4), 1185–1205.
Peretz, I., & Morais, J. (1987). Analytic processing in the classification of melodies as same or different. Neuropsychologia, 25, 645–652.
Peyrin, C., Chauvin, A., Chokron, S., & Marendaz, C. (2003). Hemispheric specialization for spatial frequency processing in the analysis of natural scenes. Brain and Cognition, 53(2), 278–282.
Peyrin, C., Michel, C. M., Schwartz, S., Thut, G., Seghier, M., Landis, T., Marendaz, C., et al. (2010). The neural substrates and timing of top-down processes during coarse-to-fine categorization of visual scenes: A combined fMRI and ERP study. Journal of Cognitive Neuroscience, 22(12), 2768–2780, doi:10.1162/jocn.2010.21424.
Phillips, F., Egan, E. J. L., & Perry, B. N. (2009). Perceptual equivalence between vision and touch is complexity dependent. Acta Psychologica, 132(3), 259–266, doi:10.1016/j.actpsy.2009.07.010.
Poirel, N., Pineau, A., & Mellet, E. (2006). Implicit identification of irrelevant local objects interacts with global/local processing of hierarchical stimuli. Acta Psychologica, 122(3), 321–336, doi:10.1016/j.actpsy.2005.12.010.
Pressnitzer, D., & Hupé, J.-M. (2006). Temporal dynamics of auditory and visual bistability reveal common principles of perceptual organization. Current Biology, 16(13), 1351–1357, doi:10.1016/j.cub.2006.05.054.
Robertson, L. C. (1996). Attentional persistence for features of hierarchical patterns. Journal of Experimental Psychology: General, 125(3), 227–249, doi:10.1037/0096-3445.125.3.227.
Rondan, C., Santos, A., Mancini, J., Livet, M. O., & Deruelle, C. (2008). Global and local processing in Williams syndrome: Drawing versus perceiving. Child Neuropsychology, 14(3), 237–248, doi:10.1080/09297040701346321.
Sanders, L. D., & Poeppel, D. (2007). Local and global auditory processing: Behavioral and ERP evidence. Neuropsychologia, 45(6), 1172–1186.
Shedden, J. M., & Reid, G. S. (2001). A variable mapping task produces symmetrical interference between global information and local information. Perception & Psychophysics, 63(2), 241–252.
Shinn-Cunningham, B. G. (2008). Object-based auditory and visual attention. Trends in Cognitive Sciences, 12(5), 182–186, doi:10.1016/j.tics.2008.02.003.
Stewart, L., Overath, T., Warren, J. D., Foxton, J. M., & Griffiths, T. D. (2008). fMRI evidence for a cortical hierarchy of pitch pattern processing. PLoS One, 3(1), e1470 (PMCID: PMC2198945).
Wang, L., Mottron, L., Peng, D., Berthiaume, C., & Dawson, M. (2007). Local bias and local-to-global interference without global deficit: A robust finding in autism under various conditions of attention, exposure time, and visual angle. Cognitive Neuropsychology, 24(5), 550–574, doi:10.1080/13546800701417096.