Short communication
Audiovisual congruency and incongruency effects on auditory intensity discrimination
Xiaoli Guo a, Xuan Li a, Xiaoli Ge a, Shanbao Tong a,b,∗
a School of Biomedical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
b Med-X Research Institute, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China

∗ Corresponding author at: School of Biomedical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China. Tel.: +86 21 34205138. E-mail addresses: [email protected], [email protected] (S. Tong).
Highlights

• We use an S1–S2 matching paradigm to investigate audiovisual interaction.
• We examine behavior and event-related potential difference waves.
• We find significant audiovisual congruency and incongruency effects.
• Change of visual size affects auditory intensity discrimination.
Article history: Received 15 August 2014; Received in revised form 20 October 2014; Accepted 26 October 2014; Available online xxx.
Keywords: Audiovisual interaction; Congruency and incongruency effects; Auditory intensity discrimination; ERP difference waves
Abstract

This study used an S1–S2 matching paradigm to investigate the influence of a visual (size) change on auditory intensity discrimination. Behavioral results showed that subjects made more errors and took longer to discriminate a change in auditory intensity when it was accompanied by an incongruent visual change, whereas performance for congruent audiovisual stimuli was better, especially when the auditory intensity changed. Event-related potential difference waves revealed that audiovisual interactions underlying multimodal mismatched information processing activated the right frontal and left centro-parietal cortices around 300–400 ms post S1-onset.

© 2014 Elsevier Ireland Ltd. All rights reserved.

1. Introduction
In everyday life, human beings receive information from different sensory modalities, and the brain integrates these multisensory inputs to create awareness of the environment [1]. Among the sensory modalities, vision and audition play the most important roles, and their congruency and incongruency effects have been demonstrated as audiovisual integrations or interactions [2–4]. Congruent audiovisual stimuli can increase the accuracy [3] and speed [4] of perception. On the other hand, when auditory and visual stimuli are incongruent, perception in one modality can be biased by the other, e.g., the McGurk effect [5–10]. Besides behavioral evidence, the underlying neural substrates of audiovisual interaction phenomena have also been extensively studied with neuroimaging [11–15] and electrophysiological [16] techniques. However, previous audiovisual studies mainly focused on cross-modal interactions
within single, synchronously presented audiovisual stimuli, regardless of the influence of preceding stimulation. In the human brain, sensory perception is not discrete but continuous, allowing calibration to environmental variability; this is a strategy for the brain to optimize energy consumption and improve perceptual performance [17,18]. Perceptual discrimination is therefore important, and information from multiple modalities should be integrated to complete this process. However, the effect of cross-modal interactions on perceptual discrimination remains unknown. In this study, an S1–S2 matching paradigm was designed to study audiovisual congruency and incongruency effects during auditory intensity discrimination. That is, two successive auditory stimuli at the same or different intensities were presented synchronously with two visual stimuli of the same or different sizes. Besides the behavioral tests, the underlying neural mechanism is also of interest. Generally, neural activities comprise two types of audiovisual interactions during the S1–S2 matching process. One is elicited by the audiovisual S1 or S2, which has been extensively studied with synchronously presented auditory and visual stimuli [16,19,20], whereas the temporal interaction between two successive
audiovisual stimuli has been less explored. Partially overlapping brain networks have been suggested to be involved in multimodal change detection: transitions in the visual, auditory, and tactile modalities all activated the right temporoparietal junction, inferior frontal gyrus, insula, and left cingulate and supplementary motor areas [21]. In this study, we also sought to identify the audiovisual components for processing the mismatch between changes of visual and auditory information, using event-related potentials (ERPs) and ERP difference waves. These electrophysiological analyses help to reveal the time course and brain networks of cross-modal interactions in perceptual discrimination.
2. Material and methods
2.1. Subjects
Twelve volunteers (19–24 years old, male/female = 7/5) were recruited from Shanghai Jiao Tong University. All participants were right-handed with normal hearing and normal or corrected-to-normal vision, and reported no history of neurological or psychological disorders. Each subject provided written informed consent and was paid for participation regardless of his/her performance in the experiment. The experimental protocols were in compliance with the Declaration of Helsinki, and the study was approved by the institutional ethics committee of Shanghai Jiao Tong University.
2.2. Experimental procedure
Fig. 1. Stimulation protocols. The diagram shows the timing of the auditory, visual, and audiovisual stimuli. Each trial consisted of one or two auditory (A1, A1–A1, or A1–A2), two visual (V1–V1 or V1–V2), or two audiovisual (V1A1–V1A1, V1A1–V1A2, V1A1–V2A1, or V1A1–V2A2) stimuli with 150 ms in between. The visual stimulus was a white-filled circle with a view angle of either 2◦ (V1) or 4◦ (V2), and the auditory stimulus was a 3.5 kHz tone at an intensity of either 65 dB (A1) or 70 dB (A2). Each auditory or visual stimulus was presented for 17 ms. The inter-trial interval ranged randomly from 1800 to 2000 ms.
All experiments were conducted in an acoustically and electrically shielded room (3 × 3.5 m, Union Brother, Guangzhou, China). Subjects were seated 120 cm in front of an LCD display (Model: KLV-32J400A, SONY, Shanghai, China) in a comfortable posture. Two speakers (Model: R1600T08, Edifier, China) were placed on either side of the display (horizontal visual angle 10◦) for auditory stimulation. During the experiments, subjects were requested to keep fixating on the display center and to attend to the speakers. Stimuli were delivered by the E-Prime program (version 2.0, Psychology Software Tools Inc., Pittsburgh, PA). Each experiment consisted of eight blocks separated by 5 min inter-block breaks for resting; each block consisted of 114 trials. During the inter-trial interval (randomly 1800–2000 ms), a black screen was displayed. Each trial used an "S1–S2" paradigm, in which S1 and S2 were presented successively at an interval of 150 ms (Fig. 1). S1 and S2 were visual, auditory, or audiovisual stimuli presented for 17 ms each. The two auditory (or visual) stimuli within the same trial could be of the same or different intensities (or sizes). The visual stimulus was a white-filled circle of either 2◦ (V1) or 4◦ (V2), and the auditory stimulus was a 3.5 kHz tone at an intensity of either 65 dB (A1) or 70 dB (A2). In total, there were ten different stimulus conditions: congruent audiovisual trials (V1A1–V1A1 and V1A1–V2A2), incongruent audiovisual trials (V1A1–V1A2 and V1A1–V2A1), unimodal auditory trials (A1, A1–A1, and A1–A2), unimodal visual trials (V1–V1 and V1–V2), and blank trials (No-stim) (Fig. 1). The No-stim trials were used to balance the pre-stimulus activity when calculating cross-modal difference waves [3]. In each block, the two unimodal visual, three unimodal auditory, and four audiovisual conditions were randomly presented with equal probabilities (i.e., 10.53% each), while the blank trials accounted for 5.23% in total. During the experiment, subjects were requested to discriminate the intensities of the two successive auditory stimuli, pressing the key "J" with the right index finger if the stimuli were the same and "F" with the left index finger if they were different.
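For concreteness, the block structure above can be rendered as a minimal sketch. This is not the authors' code (stimuli were delivered with E-Prime); the counts of 12 trials per condition and 6 No-stim trials per 114-trial block are inferred from the stated probabilities (10.53% each and ~5%), and all names are illustrative.

```python
import random

# Nine stimulus conditions described in the text, plus blank (No-stim) trials.
CONDITIONS = [
    "A1", "A1-A1", "A1-A2",        # unimodal auditory
    "V1-V1", "V1-V2",              # unimodal visual
    "V1A1-V1A1", "V1A1-V2A2",      # congruent audiovisual
    "V1A1-V1A2", "V1A1-V2A1",      # incongruent audiovisual
]

def make_block(n_per_condition=12, n_blank=6, seed=None):
    """Build one randomized block.

    12 trials per condition (10.53% of 114) and 6 No-stim trials (~5%)
    are inferred from the probabilities reported in the text.
    """
    trials = CONDITIONS * n_per_condition + ["No-stim"] * n_blank
    random.Random(seed).shuffle(trials)
    return trials

def trial_timing(rng=random):
    """Within-trial timing from Fig. 1: 17 ms S1, 150 ms gap, 17 ms S2,
    then a random 1800-2000 ms inter-trial interval with a black screen."""
    return {"s1_ms": 17, "gap_ms": 150, "s2_ms": 17,
            "iti_ms": rng.randint(1800, 2000)}

block = make_block(seed=0)
assert len(block) == 114
```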
2.3. Data acquisition and analysis

Behavioral responses were automatically recorded with the E-Prime software. Continuous EEG signals were recorded with BrainAmp (BrainAmp DC, Brain Products GmbH, Munich, Germany) at a sampling rate of 1000 Hz using a 32-channel EEG cap with Ag–AgCl electrodes. All channels were referenced to the FCz site. Electrode impedances were kept below 5 kΩ during recording. In addition, horizontal and vertical electrooculograms (EOGs) were recorded. One subject was excluded from the neurophysiological analysis due to an extremely low signal-to-noise ratio of the EEG signals. EEG preprocessing was performed offline using Analyzer software (version 2.0, Brain Products GmbH, Munich, Germany). EEG signals were re-referenced to the average of all channels. Trials with evident eye movements were rejected off-line, and an artifact criterion of ±100 μV was used to exclude trials with excessive electromyogram. The EEG data were then filtered with a digital zero-phase-shift band-pass filter (0.016–30 Hz). EEG epochs for correct-response trials were averaged across all subjects for the different stimulus conditions over a 1000 ms period (200 ms pre-S1 baseline plus 800 ms post S1-onset) to obtain the event-related potentials (ERPs).
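A minimal NumPy/SciPy sketch of this preprocessing chain follows. The authors used BrainVision Analyzer, not this code; the Butterworth order, the exact processing order, and the array layout are our assumptions, while the stated parameters (average reference, zero-phase 0.016–30 Hz filtering, ±100 μV rejection, −200 to +800 ms epochs) come from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # Hz, sampling rate reported in the text

def preprocess(eeg, events):
    """eeg: (n_channels, n_samples) in microvolts; events: S1-onset sample indices.
    Returns the average ERP over -200..+800 ms epochs."""
    # Re-reference to the average of all channels.
    eeg = eeg - eeg.mean(axis=0, keepdims=True)

    # Zero-phase band-pass 0.016-30 Hz (filtfilt introduces no phase shift).
    b, a = butter(2, [0.016, 30.0], btype="bandpass", fs=FS)
    eeg = filtfilt(b, a, eeg, axis=1)

    # Epoch: 200 ms pre-S1 baseline plus 800 ms post S1-onset.
    pre, post = int(0.2 * FS), int(0.8 * FS)
    epochs = np.stack([eeg[:, s - pre:s + post] for s in events])

    # Baseline-correct each epoch to its pre-stimulus mean.
    epochs -= epochs[:, :, :pre].mean(axis=2, keepdims=True)

    # Reject epochs exceeding +/-100 microvolts (artifact criterion).
    keep = np.abs(epochs).max(axis=(1, 2)) <= 100.0
    return epochs[keep].mean(axis=0)  # ERP, (n_channels, pre + post)
```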
3. Results
3.1. Behavioral data
The average reaction time (RT) over all trials was 600 ± 60 ms; therefore, trials with extremely short (<250 ms) or long (>1000 ms) RTs (5% of correct responses) were excluded from the following analysis. The accuracy (ACC) and RT under the different stimulus conditions were compared with two-tailed paired-sample t-tests followed by false discovery rate (FDR) correction (q < 0.05) (Fig. 2). The overall tendency of RT was in accordance with that of ACC in all conditions, i.e., the higher the ACC, the lower the RT. The unimodal A1–A1 trials (ACC = 0.92 ± 0.09, RT = 562 ± 56 ms)
were significantly easier than the A1–A2 trials (ACC = 0.77 ± 0.11, RT = 626 ± 21 ms) in terms of both ACC (p = 0.003, FDR corrected) and RT (p = 0.007, FDR corrected), indicating the difficulty of discriminating between 65 dB and 70 dB sounds. The audiovisual congruency and incongruency effects on behavioral performance were clear. The discrimination of auditory stimuli at different intensities could be affected by both congruent and incongruent visual stimuli. Performance was better with a congruent visual change (V1A1–V2A2 vs. A1–A2: p = 0.003 for ACC and p = 0.008 for RT, FDR corrected), and incongruent visual information also biased the auditory discrimination (V1A1–V1A2 vs. A1–A2: FDR-corrected p = 0.014 for ACC, though p = 0.526 for RT). In contrast, for the trials with auditory stimuli at the same intensity, congruent visual stimuli did not make the task easier (V1A1–V1A1 vs. A1–A1: p = 0.079 for ACC and p = 0.312 for RT), which might be due to the already good performance in the unisensory condition. However, incongruent visual stimuli could confuse the subjects and bias the judgment of auditory change (V1A1–V2A1 vs. A1–A1: p = 0.012 for ACC and p < 0.001 for RT, FDR corrected).
Fig. 2. Behavioral performance. Average response accuracy (A) and reaction time (B) for the 12 participants. *: FDR-corrected p < 0.05; **: FDR-corrected p < 0.01.
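As a sketch of the statistical procedure used above (two-tailed paired t-tests across subjects with Benjamini–Hochberg FDR control at q < 0.05): this is not the authors' code, and the data layout (`acc[cond]` as a per-subject array) and function names are our assumptions.

```python
import numpy as np
from scipy.stats import ttest_rel

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg procedure; returns a boolean mask of rejected nulls."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m          # step-up thresholds k/m * q
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        # Reject every hypothesis up to the largest rank passing its threshold.
        reject[order[: below.nonzero()[0].max() + 1]] = True
    return reject

def compare(acc, comparisons, q=0.05):
    """acc: dict condition -> length-12 array (one value per subject).
    comparisons: list of (condition_a, condition_b) pairs to test."""
    pvals = [ttest_rel(acc[a], acc[b]).pvalue for a, b in comparisons]
    return dict(zip(comparisons, fdr_bh(pvals, q)))
```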
3.2. ERPs and ERP difference waves

Since the audiovisual incongruency effect was most significant in the V1A1–V2A1 condition as measured by ACC and RT, we extracted its ERPs and ERP difference waves to investigate the neural activity underlying the cross-modal interaction. Generally, the ERP for audiovisual trials resembles the combination of the unimodal auditory and visual ERPs (Fig. 3). The ERP elicited by A1–A1 showed typical auditory components, i.e., P1 at ∼60 ms, N1 at ∼110 ms, and P2 at ∼200 ms, which were prominent over the fronto-central scalp and inverted at the mastoid electrodes (TP9/TP10). Under unimodal visual stimulation (V1–V2), the ERP showed peaks at ∼150 ms (P1) and ∼200 ms (N1), and the largest P1 peak
Fig. 4. t-tests and topographies of the difference waves, i.e., (V1A1–V2A1) + (No-stim) − (V1–V2) − (A1–A1). (A) t-test analyses of ERP difference waves for all channels (green: p < 0.05; red: p < 0.01). The three windows highlighted in yellow boxes showed significant audiovisual integration: (1) Window I: 160–210 ms; (2) Window II: 215–275 ms; (3) Window III: 300–400 ms. (B) Topographical maps of ERP difference waves in Window I, II, and III. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
was located over the occipital scalp, while the largest N1 trough was over the parieto-occipital scalp. The N1 component was followed by a positive wave at the occipital scalp around 250 ms, referred to as P250. At a later stage (∼320 ms), both the auditory and visual ERPs showed positive waveforms (i.e., P3) over the occipital scalp.

Fig. 3. Group-averaged ERPs at frontal, mastoid, and occipital sites. ERPs evoked by audiovisual (V1A1–V2A1), unimodal auditory (A1–A1), unimodal visual (V1–V2), and blank (No-stim) stimuli are colored black, blue, red, and green, respectively. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

ERP difference waves were extracted to analyze the neural activities of audiovisual interactions [3,19] by subtracting the ERPs of the unimodal auditory stimulus (A1–A1) and the unimodal visual stimulus (V1–V2) from the sum of the ERPs of the bimodal stimulus (V1A1–V2A1) and the blank trials (No-stim), i.e., (V1A1–V2A1) + (No-stim) − (V1–V2) − (A1–A1). The No-stim ERPs were included in the calculation of the audiovisual difference waves to balance any pre-stimulus activity present on all trials [3]. Otherwise, such activity would be counted once but subtracted twice in the difference waves, possibly introducing an early deflection that could be mistaken for a true cross-modal interaction [22,23].
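The difference-wave formula translates directly into array arithmetic; a minimal sketch, assuming `erp` maps each condition name to its grand-average (channels × time) array:

```python
import numpy as np

def audiovisual_difference_wave(erp):
    """(V1A1-V2A1) + (No-stim) - (V1-V2) - (A1-A1).

    Including the No-stim ERP balances pre-stimulus activity that would
    otherwise be added once but subtracted twice [3,22,23].
    erp: dict mapping condition name -> (n_channels, n_samples) array.
    """
    return (erp["V1A1-V2A1"] + erp["No-stim"]
            - erp["V1-V2"] - erp["A1-A1"])
```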
The ERP difference waves were statistically tested by comparing their amplitudes against zero with t-tests at each time point of the first 400 ms post-S1 at all 30 recording electrodes (Fig. 4). To control the type I error inflated by the large number of comparisons, we considered the difference waves significant only if the point-by-point t-tests were significant (p < 0.05) for at least fifteen consecutive time points in two adjacent channels [24]. Three windows with significant evidence of audiovisual integration or interaction were found (a sketch of this criterion follows the list): (1) Window I: 160–210 ms; (2) Window II: 215–275 ms; (3) Window III: 300–400 ms.
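Before the window-by-window results, a sketch of the criterion just described; the channel-adjacency map and the single-subject input layout are our assumptions, not from the paper.

```python
import numpy as np
from scipy.stats import ttest_1samp

def significant_windows(diff_waves, adjacency, n_consec=15, alpha=0.05):
    """diff_waves: (n_subjects, n_channels, n_times) single-subject difference
    waves over the first 400 ms post-S1. adjacency: dict channel -> neighbor ids.
    Returns a (n_channels, n_times) boolean mask of accepted significance."""
    # Point-by-point t-test of the difference wave against zero.
    p = ttest_1samp(diff_waves, 0.0, axis=0).pvalue   # (n_channels, n_times)
    sig = p < alpha

    # Keep only runs of >= n_consec consecutive significant time points [24].
    runs = np.zeros_like(sig)
    for ch in range(sig.shape[0]):
        count = 0
        for t in range(sig.shape[1]):
            count = count + 1 if sig[ch, t] else 0
            if count >= n_consec:
                runs[ch, t - n_consec + 1 : t + 1] = True

    # Require the run to co-occur in at least two adjacent channels.
    out = np.zeros_like(runs)
    for ch, neigh in adjacency.items():
        out[ch] = runs[ch] & np.any(runs[list(neigh)], axis=0)
    return out
```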
• Window I: 160–210 ms

This window includes both the unimodal auditory P2 and visual N1 components, which we attribute to the audiovisual interaction stage of S1. The amplitude of the difference waves reached the significance level at 12 electrodes (Fp1, Fp2, F3, F4, FC2, FC5, FC6, Pz, P4, O1, Oz, O2), covering almost the whole frontal, fronto-central, and occipital regions. Evident frontal positivity and occipital negativity could be detected in the topography of the difference waves. This period has been reported as a common stage of audiovisual integration for illusory and physical perception in previous studies [25–27].
• Window II: 215–275 ms
In this period, corresponding to the visual P250, both the t-tests and the topography of the difference waves showed significant negativity at fronto-centro-parietal electrodes (FC1, FC5, C3, Cz, CP1, CP2). The audiovisual P250 component was also reported in previous audiovisual studies using an S1 paradigm [3,28,29], suggesting that the cross-modal activities in this period were mainly elicited by the first stimulation. This component reflects the delayed effect of the auditory P2 on the visual P250 [29] and is relevant to high-level cognition (e.g., mismatch processing and attention distribution) [28,29].

• Window III: 300–400 ms
This window includes both the unimodal auditory and visual P3 components. It corresponds to 133–233 ms post S2-onset (S2 began 17 + 150 = 167 ms after S1 onset) and accounts for multimodal mismatched information processing between the two successive audiovisual stimuli. The amplitude of the difference waves reached the significance level at 13 electrodes (Fp2, F4, F8, FC6, C3, Cz, T8, CP5, CP1, P3, P7, Pz, O1). The topography also showed frontal positivity and occipital negativity in this time window. However, unlike the symmetric pattern at 160–210 ms, the activations at 300–400 ms were asymmetric, with a right frontal positivity and a left centro-parietal negativity.
4. Discussion
This study demonstrated that intensity discrimination between two auditory stimuli can be affected by synchronously presented visual information: the discrimination of auditory intensity could be promoted by a congruent visual size change or biased by an incongruent one. It should be noted that the audiovisual congruency effect was more prominent when the auditory stimuli were at different intensities, which might be due to the different difficulties of discriminating intensity-changed and unchanged auditory stimuli in this experiment.

The effect of a change in visual size on auditory intensity discrimination can be explained by the weighted linear model of sensory information integration [30], in which the weight is related to the signal reliability within each modality. In humans, vision has high spatial reliability [31]; therefore, the spatial size change in the visual stimuli could play a relatively dominant role in multisensory perception. An impressive demonstration of the dominant role of visual spatial information is the ventriloquism effect [6] and its aftereffect [7], which describes an enduring sound shift toward the visual input after exposure to spatially disparate audiovisual stimuli. The strong cross-modal interplay of visual size and auditory intensity in this study was also demonstrated by the ERP difference waves, which showed that almost the whole frontal, fronto-central, and occipital areas were involved in early audiovisual integration (160–210 ms).

During the audiovisual S1–S2 discrimination, cross-modal interactions were present at 160–210 ms, 215–275 ms, and 300–400 ms post S1-onset. Their spatiotemporal characteristics suggested that the neural activities in the first two windows were mainly elicited by the first stimulation, while those in the third window were responsible for multimodal mismatched information processing during perceptual discrimination. Studies on unimodal S1–S2 tasks suggested that activations at 300–400 ms probably belong to the contingent negative variation (CNV) family, which is related to working memory before the perceptual decision and the discriminative response [32,33]. The right frontal and left centro-parietal cortices, which were involved in audiovisual interactions during 300–400 ms in this study, have been reported to be responsible for change detection in the visual, auditory, and tactile modalities [21]. The mismatch between changes of visual and auditory information could be integrated in these areas, thereby inducing an incongruency effect of visual size change on auditory intensity discrimination.

In summary, our study provided evidence of cross-modal interaction during perceptual discrimination. However, only the influence of visual size change on auditory intensity discrimination was manipulated. Interactions under other conditions, e.g., at different levels of incongruence, between other audiovisual characteristics (such as visual luminance, color, shape, and auditory frequency), or even between other modalities, are also interesting and worth further investigation. A more comprehensive study with a larger number of subjects and more experimental trials is needed. Nevertheless, our results suggest that the S1–S2 matching paradigm is suitable for studying cross-modal interaction in perceptual discrimination.
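For reference, the weighted linear model invoked in the discussion above [30] is not spelled out in the original text; one common formulation combines the unimodal estimates in proportion to their reliabilities:

```latex
\hat{S} = w_V \hat{S}_V + w_A \hat{S}_A, \qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2}, \qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_V^2 + 1/\sigma_A^2},
```

where $\hat{S}_V$ and $\hat{S}_A$ are the visual and auditory estimates and $\sigma_V^2$, $\sigma_A^2$ their variances; the more reliable (lower-variance) modality, here vision for spatial size, receives the larger weight.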
Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61001015), the National Basic Research Program of China (973 Program) (No. 2011CB013304), and the China Scholarship Council. We are grateful to Yu Sun and Hong Zhang for their help with signal processing and statistical analysis.

References

[1] M.O. Ernst, H.H. Bulthoff, Merging the senses into a robust percept, Trends Cognit. Sci. 8 (2004) 162–169.
[2] G.A. Calvert, Crossmodal processing in the human brain: insights from functional neuroimaging studies, Cereb. Cortex 11 (2001) 1110–1123.
[3] J. Mishra, A. Martinez, T.J. Sejnowski, S.A. Hillyard, Early cross-modal interactions in auditory and visual cortex underlie a sound-induced visual illusion, J. Neurosci. 27 (2007) 4120–4131.
[4] S. Molholm, W. Ritter, M.M. Murray, D.C. Javitt, C.E. Schroeder, J.J. Foxe, Multisensory auditory-visual interactions during early sensory processing in humans: a high-density electrical mapping study, Brain Res. Cognit. Brain Res. 14 (2002) 115–128.
[5] H. McGurk, J. MacDonald, Hearing lips and seeing voices, Nature 264 (1976) 746–748.
[6] I.P. Howard, W.B. Templeton, Human Spatial Orientation, Wiley, London, 1966.
[7] G.H. Recanzone, Rapidly induced auditory plasticity: the ventriloquism aftereffect, Proc. Natl. Acad. Sci. U. S. A. 95 (1998) 869–875.
[8] J. Gebhard, G. Mowbray, On discriminating the rate of visual flicker and auditory flutter, Am. J. Psychol. 72 (1959) 521–529.
[9] R. Sekuler, A.B. Sekuler, R. Lau, Sound alters visual motion perception, Nature 385 (1997) 308.
[10] L. Shams, Y. Kamitani, S. Shimojo, What you see is what you hear, Nature 408 (2000) 788.
[11] T. Noesselt, J.W. Rieger, M.A. Schoenfeld, M. Kanowski, H. Hinrichs, H.J. Heinze, J. Driver, Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices, J. Neurosci. 27 (2007) 11431–11441.
[12] R.A. Stevenson, T.W. James, Audiovisual integration in human superior temporal sulcus: inverse effectiveness and the neural processing of speech and object recognition, Neuroimage 44 (2009) 1210–1223.
[13] A.R. Nath, M.S. Beauchamp, A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion, Neuroimage 59 (2012) 781–787.
[14] T. Raij, K. Uutela, R. Hari, Audiovisual integration of letters in the human brain, Neuron 28 (2000) 617–625.
[15] M.S. Beauchamp, See me, hear me, touch me: multisensory integration in lateral occipital-temporal cortex, Curr. Opin. Neurobiol. 15 (2005) 145–153.
[16] J. Vidal, M.H. Giard, S. Roux, N. Bruneau, Cross-modal processing of auditory–visual stimuli in a no-task paradigm: a topographic event-related potential study, Clin. Neurophysiol. 119 (2008) 763–771.
[17] J.C. Dahmen, P. Keating, F.R. Nodal, A.L. Schulz, A.J. King, Adaptation to stimulus statistics in the perception and neural representation of auditory space, Neuron 66 (2010) 937–948.
[18] N.S. Price, D.L. Prescott, Adaptation to direction statistics modulates perceptual discrimination, J. Vis. 12 (2012).
[19] M. Giard, F. Peronnet, Auditory–visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study, J. Cognit. Neurosci. 11 (1999) 473–490.
[20] W.A. Teder-Salejarvi, J.J. McDonald, F. Di Russo, S.A. Hillyard, An analysis of audio-visual crossmodal integration by means of event-related potential (ERP) recordings, Brain Res. Cognit. Brain Res. 14 (2002) 106–114.
[21] J. Downar, A.P. Crawley, D.J. Mikulis, K.D. Davis, A multimodal cortical network for the detection of changes in the sensory environment, Nat. Neurosci. 3 (2000) 277–283.
[22] D. Talsma, M.G. Woldorff, Selective attention and multisensory integration: multiple phases of effects on the evoked brain activity, J. Cognit. Neurosci. 17 (2005) 1098–1114.
[23] M. Gondan, B. Roder, A new method for detecting interactions between the senses in event-related potentials, Brain Res. 1073–1074 (2006) 389–397.
[24] D. Guthrie, J.S. Buchwald, Significance testing of difference potentials, Psychophysiology 28 (1991) 240–244.
[25] A. Fort, C. Delpuech, J. Pernier, M.H. Giard, Early auditory–visual interactions in human cortex during nonredundant target identification, Brain Res. Cognit. Brain Res. 14 (2002) 20–30.
[26] L. Shams, Y. Kamitani, S. Thompson, S. Shimojo, Sound alters visual evoked potentials in humans, Neuroreport 12 (2001) 3849–3852.
[27] J. Mishra, A. Martinez, S.A. Hillyard, Cortical processes underlying sound-induced flash fusion, Brain Res. 1242 (2008) 102–115.
[28] C.R. Brown, A.R. Clarke, R.J. Barry, Auditory processing in an inter-modal oddball task: effects of a combined auditory/visual standard on auditory target ERPs, Int. J. Psychophysiol. 65 (2007) 122–131.
[29] B. Liu, Z. Wang, Z. Jin, The integration processing of the visual and auditory information in videos of real-world events: an ERP study, Neurosci. Lett. 461 (2009) 7–11.
[30] M.O. Ernst, H.H. Bulthoff, Merging the senses into a robust percept, Trends Cognit. Sci. 8 (2004) 162–169.
[31] D.A. Bulkin, J.M. Groh, Seeing sounds: visual and auditory interactions in the brain, Curr. Opin. Neurobiol. 16 (2006) 415–419.
[32] Y. Chen, X. Huang, B. Yang, T. Jackson, C. Peng, H. Yuan, C. Liu, An event-related potential study of temporal information encoding and decision making, Neuroreport 21 (2010) 152–155.
[33] X. Zhang, L. Ma, S. Li, Y. Wang, X. Weng, L. Wang, A mismatch process in brief delayed matching-to-sample task: an fMRI study, Exp. Brain Res. 186 (2008) 335–341.