International Journal of Psychophysiology 124 (2018) 1–11
Comparison for younger and older adults: Stimulus temporal asynchrony modulates audiovisual integration
Yanna Ren a,b, Yanling Ren c, Weiping Yang d,⁎⁎, Xiaoyu Tang e,a, Fengxia Wu b, Qiong Wu b, Satoshi Takahashi b, Yoshimichi Ejima b, Jinglong Wu b,f,g,⁎
a Department of Psychology, Medical Humanities College, Guiyang University of Chinese Medicine, Guiyang 550025, China
b Cognitive Neuroscience Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama 700-8530, Japan
c Department of Light and Chemical Engineering, Guizhou Light Industry Technical College, Guiyang 550025, China
d Department of Psychology, Faculty of Education, Hubei University, Wuhan 430062, China
e School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian 116029, China
f Intelligent Robotics Institute, Beijing Institute of Technology, Beijing 100081, China
g Shenzhen Institute of Neuroscience, Shenzhen 518057, China
Keywords: Multisensory; Audiovisual integration; Temporal asynchrony; Event-related potentials (ERP); Older adults; Ageing effect

Abstract
Recent research has shown that the magnitudes of responses to multisensory information are highly dependent on the stimulus structure. The temporal proximity of multiple signal inputs is a critical determinant for crossmodal integration. Here, we investigated the influence that temporal asynchrony has on audiovisual integration in both younger and older adults using event-related potentials (ERP). Our results showed that in the simultaneous audiovisual condition, early integration was similar for the younger and older groups, except that the earliest integration (80–110 ms), which occurred in the occipital region for older adults, was absent for younger adults. Additionally, late integration was delayed in older adults (280–300 ms) compared to younger adults (210–240 ms). In the audition-leading vision conditions, the earliest integration (80–110 ms) was absent in younger adults but did occur in older adults. Additionally, after the temporal disparity was increased from 50 ms to 100 ms, late integration was delayed in both younger (from 230–290 ms to 280–300 ms) and older (from 210–240 ms to 280–300 ms) adults. In the audition-lagging vision conditions, integration occurred only in the A100V condition for younger adults and only in the A50V condition for older adults. The current results suggest that the audiovisual temporal integration pattern differs between the audition-leading and audition-lagging vision conditions and further reveal the varying effect of temporal asynchrony on audiovisual integration in younger and older adults.
1. Introduction

In everyday life, people obtain dynamic, informative signals from a complex environment through multiple senses. Merging these multiple inputs, a process called multisensory integration, helps us make identifications and decisions more quickly and accurately (Laurienti et al., 2006; Meredith et al., 1987; Spence, 2011; Stein and Meredith, 1993; Stein, 2012). Imagine that a firecracker is set off: we integrate the visual sparkle and the sound of the burst in our brain, which makes it difficult to perceive any difference in their arrival times. However, on a stormy day, we generally see the lightning first and then hear the thunderclap. Although the lightning and the thunderclap come from a
common cause and occur simultaneously, a temporal asynchrony between the visual flash and the sound is perceived in our brain. These everyday experiences indicate that the integration of information from multiple senses obeys the temporal principle, which states that under slight temporal asynchrony, the maximum facilitation effect is induced by the greatest overlap of the response trains evoked by the unisensory component stimuli (Stein, 2012). Meredith and Stein, in their representative neurophysiological studies, measured the responses of auditory-visual superior colliculus neurons in the cat to temporally offset combined stimulation (Meredith et al., 1987; Stein and Meredith, 1993). They found a
⁎ Correspondence to: J. Wu, Cognitive Neuroscience Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama 700-8530, Japan.
⁎⁎ Corresponding author.
E-mail addresses: [email protected] (W. Yang), [email protected] (J. Wu).
https://doi.org/10.1016/j.ijpsycho.2017.12.004
Received 3 March 2017; Received in revised form 5 December 2017; Accepted 11 December 2017; Available online 15 December 2017
0167-8760/ © 2017 Elsevier B.V. All rights reserved.
dramatic increase in the magnitude of response enhancement when the temporal asynchrony between the auditory and visual stimuli was decreased. Frassinetti et al. (2002) first reported that the temporal rules governing multisensory integration at the neuronal level also hold in humans (Frassinetti et al., 2002). In their study, visual enhancement was evaluated using a signal detection measure (perceptual sensitivity, d′), and they found that visual enhancement existed when the visual stimulus was presented simultaneously with the auditory stimulus but disappeared if the auditory stimulus preceded the visual stimulus by 500 ms. To characterize the temporal window of multisensory interaction more precisely, Bolognini et al. (2005) instructed participants to perform a visual detection task under visual selective attention (Bolognini et al., 2005), systematically investigating the effect of stimulus onset asynchrony (SOA) on audiovisual integration. Using the same signal detection measure to evaluate visual enhancement, their results indicated that visual enhancement occurred when the auditory and visual stimuli were presented simultaneously but disappeared at larger temporal disparities between the stimuli (100 ms, 200 ms, 300 ms, 400 ms, or 500 ms). Additionally, Yang et al. (2014) recently examined temporal audiovisual integration by comparing responses to audiovisual stimuli with a predicted model (race model) based on unimodal auditory and visual stimuli, and their results revealed alterations of audiovisual integration ranging from enhancement (temporal disparities of 0 ms and 50 ms) to suppression (150 ms) (Laurienti et al., 2006; Yang et al., 2014). Recently, neuroimaging research has also been conducted to clarify the temporal effect on audiovisual integration, and this work has further confirmed that audiovisual integration is sensitive to the temporal asynchrony between auditory and visual stimuli.

After analysing oscillatory gamma-band responses (GBRs) using electroencephalography (EEG), Senkowski et al. (2007) reported robust multisensory interactions in simultaneous audiovisual conditions; an integration effect was found in the occipital areas in auditory-preceding visual stimulus conditions but was absent in visual-preceding auditory conditions (Senkowski et al., 2007). The robust integration effect elicited by simultaneous audiovisual stimuli was further confirmed by Van Atteveldt et al. (2007) using functional magnetic resonance imaging (fMRI) (Van Atteveldt et al., 2007). Liu et al. (2011) provided event-related potential (ERP) evidence for temporal audiovisual integration (Liu et al., 2011). In their study, adapted video frames of naturalistic motion stimuli were used, and the multisensory stimuli had SOA values of −300 ms, 0 ms, or 300 ms. Their results revealed that multisensory integration occurred regardless of temporal asynchrony but was influenced by it. Studies on the perception of synchrony between the auditory and visual modalities have indicated that there is a range of temporal disparities within which humans are unable to discern the asynchrony; this range is known as the temporal binding window (Dixon and Spitz, 1980; King, 2005; Munhall et al., 1996; Van Wassenhove et al., 2007). The study by Liu et al. (2011) focused on a larger temporal disparity (300 ms), at which participants perceived the temporal asynchrony clearly. Therefore, it is reasonable to postulate that there was integration diversity between the synchrony and asynchrony conditions. However, when the audiovisual temporal disparity falls within the temporal binding window, it remains unclear whether and in what way audiovisual integration is altered as a function of the relative timing between auditory and visual stimuli. Although Bolognini et al. (2005) and Yang et al. (2014) investigated the effect of temporal asynchrony on audiovisual integration systematically, to date, no systematic study has been performed using event-related potentials (ERPs) (Bolognini et al., 2005; Yang et al., 2014).

Additionally, age-effect studies have revealed that the auditory threshold tends to increase and visual acuity tends to decrease with ageing (Diederich et al., 2008; Laurienti et al., 2006), and this deterioration can be attributed to the poorer health status and decline of cognitive function in older adults (Freiherr et al., 2013). However, despite the ongoing deterioration of the sensory systems during ageing, there is still a large body of evidence for an increase in, or maintenance of, multisensory integration processing in older adults, which can help older people compensate for the often-destructive consequences of unisensory dysfunction (Freiherr et al., 2013; Laurienti et al., 2006; Peiffer et al., 2007). Using magnetoencephalography, Diaconescu et al. (2013) investigated the disparities in multisensory integration between younger and older adults and reported that, despite common sensory-specific activations in both groups, preferential activity in the posterior parietal and medial prefrontal areas between 150 ms and 300 ms after audiovisual stimulus onset was observed in older adults (Diaconescu et al., 2013). The authors proposed that the activity of these two brain regions was the basis for the integrated response in older adults (Diaconescu et al., 2013; Freiherr et al., 2013). However, with normal ageing, it becomes more difficult to discriminate simultaneity, temporal order and causal relationships among stimuli, leading to a wider temporal binding window than in younger adults (Bedard and Barnett-Cowan, 2016; Diederich and Colonius, 2015; Poliakoff et al., 2006; Setti et al., 2011).

A particular interest of the current study was how audiovisual temporal integration processing varies with ageing. Based on prior studies, we predicted that audiovisual integration in older adults would differ from that in younger adults in all SOA conditions and that the audiovisual interaction pattern would differ between the auditory-leading and auditory-lagging visual conditions. Following the 'additive model' for multisensory integration introduced by Giard and Peronnet, we first added the ERPs evoked by unimodal auditory stimuli and unimodal visual stimuli together. The audiovisual integration was expressed as the difference between the additive ERPs and the ERPs evoked by bimodal audiovisual stimuli (Giard and Peronnet, 1999). To understand the differences in integration among the varying conditions, the spatiotemporal topographical differences are presented. The primary goal of the present study was to clarify the mechanism of audiovisual temporal integration and the effect of ageing on it by recording EEG signals in response to unisensory stimuli and audiovisual stimuli (in synchrony or asynchrony).
2. Materials and methods

A behavioural pre-study was conducted (Ren et al., 2017). The results showed that the temporal asynchrony between auditory and visual stimuli significantly modulated audiovisual integration and that the pattern of this modulation differed between the younger and older groups. The focus of the present study was to obtain EEG evidence for how temporal asynchrony modulates audiovisual integration in both younger and older groups.

2.1. Participants

Fifteen healthy younger volunteers (22–25 years, mean age ± SD, 23.00 ± 0.93) and 15 healthy older volunteers (61–76 years, mean age ± SD, 68.20 ± 4.60) were recruited as paid volunteers. All younger adults were undergraduate students of Okayama University, and the older adults were recruited randomly from the general community of Okayama City. All participants had normal hearing and normal or corrected-to-normal vision and were naive to the purpose of the experiment. Vision was examined with a Japanese eye chart, and audition was examined with a RION AUDIOMETER AA-71 (Rion Service Center, Japan). Participants were excluded if their mini-mental state examination (MMSE) scores were > 2.5 standard deviations (SD) from the mean for their age and education level (Bravo and Hébert, 1997). Moreover, participants who reported a history of cognitive disorder were also excluded. All participants provided written informed consent for the procedure, which had been approved in advance by the ethics committee of Okayama
Fig. 1. Schematic depiction of the experimental design. (A) An example of a possible trial sequence containing target stimuli. After fixation for 3000 ms at the beginning of each session, the auditory, visual, and audiovisual stimuli were presented randomly with a random inter-stimulus interval (ISI) of 1300–1800 ms. After the presentation of each stimulus, the subject was instructed to identify whether it was a target and in which hemispace the target was presented by pressing the right or left button of a mouse as rapidly and accurately as possible. (B) Decomposition of the relative timing of the auditory and visual stimuli within each subtype of audiovisual stimuli. A0V: auditory and visual stimuli presented simultaneously. A50V and A100V indicate that auditory stimuli led visual stimuli by 50 ms and 100 ms, respectively. V50A and V100A indicate that visual stimuli led auditory stimuli by 50 ms and 100 ms, respectively.
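The relative-timing scheme in Fig. 1B can be sketched in code. This is an illustrative reconstruction, not material from the paper: the dictionary layout, constant, and function names are mine; only the condition names, the 150-ms stimulus duration, and the 150–250 ms range of trial durations come from the text.

```python
# Sketch of the five audiovisual SOA conditions (names from the paper;
# the data structures are my own). Onsets are in ms from trial start.
STIM_DURATION_MS = 150  # each auditory/visual stimulus lasts 150 ms

# condition -> (auditory onset, visual onset)
CONDITIONS = {
    "A100V": (0, 100),  # audition leads vision by 100 ms
    "A50V":  (0, 50),   # audition leads vision by 50 ms
    "A0V":   (0, 0),    # simultaneous presentation
    "V50A":  (50, 0),   # vision leads audition by 50 ms
    "V100A": (100, 0),  # vision leads audition by 100 ms
}

def trial_duration_ms(condition: str) -> int:
    """Trial length: from the first stimulus onset to the last stimulus offset."""
    a_on, v_on = CONDITIONS[condition]
    return max(a_on, v_on) + STIM_DURATION_MS

for name in CONDITIONS:
    print(name, trial_duration_ms(name))
```

Printing the durations reproduces the 150–250 ms per-trial range stated in Section 2.2, which is a quick consistency check on the design.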
University. No participant was excluded from the study.

2.2. Stimuli and tasks

The visual stimulus was a checkerboard image containing 0 or 2 dots, and the participant was instructed to detect the image with 2 dots (visual target, Fig. 1A) while ignoring the visual standard stimulus (without dots). The visual stimuli (52 × 52 mm) were presented in the lower left or lower right quadrant of a 21-inch computer monitor with a black background for 150 ms (at a 12° visual angle to the left or right of centre and 5° below the central fixation). The distance between the computer screen and the participant's eyes was 60 cm. The auditory standard stimulus was a 1000 Hz sinusoidal tone, and the auditory target stimulus was white noise. The auditory stimuli (A) were presented to the left or right ear randomly through earphones at approximately 60 dB SPL for a 150-ms duration (including 10-ms rise/fall cosine gates). The audiovisual target stimulus was the combination of the visual target stimulus and the auditory target stimulus, and the audiovisual standard stimulus was the combination of the visual standard stimulus and the auditory standard stimulus. The combination of the visual target stimulus with the auditory standard stimulus and the combination of the auditory target stimulus with the visual standard stimulus were not presented. Both the audiovisual target and standard stimuli (AV) were presented as combinations of the visual and auditory stimuli with different periods of stimulus onset asynchrony (SOA): 0 ms, ±50 ms, or ±100 ms (Fig. 1B). Auditory and visual stimuli presented simultaneously were defined as A0V. An auditory stimulus leading the visual stimulus by 50 ms or 100 ms was defined as A50V or A100V, respectively. An auditory stimulus lagging the visual stimulus by 50 ms or 100 ms was defined as V50A or V100A, respectively. The SOA values were chosen according to our previous behavioural studies and other previously published work (Bolognini et al., 2005; Bushara et al., 2001; Liu et al., 2011; Meredith et al., 1987; Ren et al., 2017; Senkowski et al., 2007; Yang et al., 2014). The target stimuli constituted 20% of the total stimuli, and the duration of each trial varied between 150 ms and 250 ms, depending on the SOA.
2.3. Procedure

The subjects performed an auditory/visual stimulus discrimination task in a dimly lit, electrically shielded and sound-attenuated room (laboratory room, Okayama University, Japan) with their head positioned on a chin rest. Eight sessions were conducted, each lasting approximately 5 min. At the beginning of each session, the subjects were presented with a fixation cross for 3000 ms; then 25 unimodal visual stimuli, 25 unimodal auditory stimuli and 125 audiovisual stimuli (25 A0V, 25 V50A, 25 V100A, 25 A50V, 25 A100V) were presented randomly with an ISI that varied from 1300 to 1800 ms. All participants were instructed to identify whether the targets
appeared in the left hemispace (by pressing the left button of the mouse) or the right hemispace (by pressing the right button) as rapidly and accurately as possible. In the experiment, each subject completed 1400 trials (280 target trials and 1120 task-irrelevant trials).

2.4. Apparatus

The EEG and behavioural data were recorded simultaneously. Stimulus presentation was controlled using Presentation software (Neurobehavioral Systems, Albany, CA, USA). An EEG system (BrainAmp MR plus, Gilching, Germany) was used to record EEG signals through 32 electrodes mounted on an electrode cap (Easy-cap, Herrsching-Breitbrunn, Germany). All signals were referenced to the left and right earlobes. Horizontal eye movements were measured by deriving the electrooculogram (EOG) from an electrode placed approximately 1 cm from the outer canthus of the left eye. Vertical eye movements and eye blinks were detected by deriving an EOG from an electrode placed approximately 1 cm below the subject's left eye. Impedance was maintained below 5 kΩ. Raw signals were digitized at a sampling frequency of 500 Hz, and all data were stored digitally for off-line analysis. The ERP analysis was carried out using Brain Vision Analyzer software (version 1.05, Brain Products GmbH, Munich, Bavaria, Germany).

2.5. Data analysis

2.5.1. Behavioural data

The mean response times (RTs) were calculated from the responses that fell within the mean ± 2.5 SD. The hit rate (HR) was the percentage of correct responses relative to the total number of target stimuli, and the false alarm rate (FA) was the percentage of responses relative to the total number of task-irrelevant stimuli. Additionally, sensitivity (d′) and response criterion (c) measures were computed separately for the different conditions (Macmillan, 1993; Stanislaw and Todorov, 1999). The behavioural results for all measures (RTs, HR, FA, d′, c) were then analysed using repeated-measures analyses of variance (ANOVA, Greenhouse-Geisser corrections with corrected degrees of freedom), and the statistical significance level was set at p ≤ 0.05 (Mauchly's sphericity test). Effect size estimates (ηp²) are reported.

2.5.2. ERP data analysis

The ERPs elicited by the task-irrelevant stimuli were analysed. The data were bandpass filtered from 0.1 to 60 Hz during recording at a sampling rate of 500 Hz. The data were divided into epochs from 200 ms before to 500 ms after stimulus onset, and baseline corrections were made against the interval from 200 ms to 0 ms before stimulus onset. Trials with horizontal eye movements (horizontal EOG amplitudes exceeding ±25 μV), vertical eye movements or eye blinks (vertical EOG amplitudes exceeding ±100 μV), or other artifacts (a voltage exceeding ±80 μV relative to baseline) were rejected automatically from the analysis. In addition, data associated with a false alarm were excluded. The data were then averaged for each stimulus type, following digital filtering with a 0.01–30 Hz bandpass, and grand-averaged data were obtained across all participants for each stimulus type at each electrode.

Previous studies have shown that audiovisual integration can be assessed with the difference wave [AV − (A + V)], obtained by subtracting the sum of the ERP waves of the unimodal stimuli from the ERP waves of the bimodal stimuli (Giard and Peronnet, 1999; Talsma et al., 2007). The logic of this additive model is that the ERPs for bimodal (AV) stimuli equal the sum of the ERPs for the unimodal (A + V) stimuli plus the putative neural activities specifically related to the bimodal nature of the stimuli. Before calculating the audiovisual interaction, we shifted the ERPs evoked by A or V horizontally based on the onset times within each bimodal condition. For example, under the A0V condition, we added the ERPs evoked by the A and V stimuli, with their onsets matched, in the time window from 100 ms before to 400 ms after stimulus onset to create the combined A_0_V ERPs. In the A50V condition, we added the ERPs evoked by A in the time window from 100 ms before to 400 ms after A stimulus onset to the ERPs evoked by V in the time window from 150 ms before to 350 ms after V stimulus onset to create the combined A_50_V ERPs. Then, we computed the differences between the bimodal AV ERPs and the combined [A + V] ERPs (A0V vs A_0_V, A50V vs A_50_V) (Liu et al., 2011; Talsma et al., 2007).

To establish the presence of audiovisual interaction, the statistical analysis was conducted in three steps, as in previous studies (Senkowski et al., 2011; Senkowski et al., 2007; Yang et al., 2015). First, the ERPs for bimodal trials were compared with the summed ERPs for unimodal A and V trials using a point-wise running t-test (two-tailed) at each scalp electrode under each bimodal stimulus condition (A100V, A50V, A0V, V50A, V100A) for each group. Differences were considered significant only when at least 12 consecutive data points (24 ms at a 500 Hz digitization rate) met the alpha criterion of p < 0.05 (Guthrie and Buchwald, 1991; Liu et al., 2011; Yang et al., 2015). Based on the t-test results, we chose the regions of interest (ROIs) and time intervals in which there was significant audiovisual integration. Second, repeated-measures ANOVAs (Greenhouse-Geisser corrections with corrected degrees of freedom) were conducted for the bimodal stimulus types (A100V, A50V, A0V, V50A, V100A) and each time interval selected from an overview of the significant differences found in the first step; the mean amplitude data were analysed with the factors group, bimodal condition, electrode and time interval. If a significant interaction among group, bimodal condition, or electrode and integration time interval was observed in the mean ERP amplitudes, a third step was performed: ANOVAs (Greenhouse-Geisser corrections with corrected degrees of freedom) were conducted separately for each of the five bimodal conditions (A100V, A50V, A0V, V50A, V100A) with the factors group, stimulus type, and electrode for each audiovisual integration time interval. The SPSS version 16.0 software package (SPSS, Tokyo, Japan) was used for all statistical analyses.

3. Results

3.1. Behavioural results
The 7 stimulus type (A, V, A0V, A50V, A100V, V50A, V100A) ∗ 2 hemispace (left, right) ANOVA showed no significant lateralization effect on the responses to left and right hemispace stimuli in either the younger [F(1, 14) = 6.131, p = 0.64, ηp² = 0.031] or the older [F(1, 14) = 1.24, p = 0.18, ηp² = 0.068] group. Therefore, the responses to left and right stimuli were collapsed for further analysis (Table 1). The 2 group (younger, older) ∗ 7 stimulus type (A, V, A0V, A50V, A100V, V50A, V100A) repeated-measures ANOVA on the mean response times revealed no significant interaction between group and stimulus type [F(6, 168) = 1.78, p = 0.17, ηp² = 0.06]. However, a main effect of stimulus type [F(6, 168) = 52.54, p < 0.001, ηp² = 0.625] was found, indicating a faster response when the auditory and visual stimuli were presented synchronously than when they were presented separately or asynchronously (see Table 1). Additionally, a main effect of group [F(1, 28) = 12.42, p = 0.001, ηp² = 0.307] was also observed, indicating that younger adults responded markedly faster than older adults. The analysis of the hit rates and false alarm rates showed no significant main effect of stimulus type or group and no stimulus type ∗ group interaction (all p > 0.05). However, there was a significant main effect of stimulus type on perceptual sensitivity (d′) [F(1, 28) = 4.44, p = 0.008, ηp² = 0.317]. Further analysis showed that, for older adults, the perceptual sensitivity to the V stimuli was significantly lower
Table 1
Mean behavioural data for all participants in the experiment.

                              A              V              A0V            A50V           A100V          V50A           V100A
Response time (ms)
  Younger                     635 (27.69)⁎   658 (15.76)⁎   568 (16.72)⁎   579 (18.42)⁎   593 (15.79)⁎   594 (15.79)⁎   627 (17.02)⁎
  Older                       745 (27.32)⁎   755 (25.55)⁎   692 (25.75)⁎   703 (25.75)⁎   698 (24.82)⁎   707 (27.12)⁎   727 (25.23)⁎
Hit rate (%)
  Younger                     97.0 (0.44)    95.3 (1.40)    97.0 (0.27)    98.2 (0.45)    96.7 (0.72)    96.7 (0.40)    96.0 (0.64)
  Older                       95.7 (0.59)    93.8 (2.3)     96.8 (0.82)    97.5 (0.67)    97.5 (0.57)    97.1 (0.61)    96.3 (0.55)
False alarm (%)
  Younger                     0.13 (0.09)    0.29 (0.15)    0.13 (0.09)    0.29 (0.08)    0.17 (0.13)    0.17 (0.009)   0.25 (0.18)
  Older                       0.25 (0.08)    2.50 (0.50)    0.38 (0.18)    0.33 (0.23)    0.58 (0.27)    0.38 (0.17)    0.25 (0.08)
Perceptual sensitivity (d′)
  Younger                     4.58 (0.056)   4.40 (0.014)   4.58 (0.051)   4.65 (0.066)   4.54 (0.083)   4.52 (0.065)   4.45 (0.092)
  Older                       4.39 (0.047)   4.03 (0.017)⁎  4.52 (0.097)   4.57 (0.093)   4.51 (0.11)    4.53 (0.075)   4.43 (0.068)
Response criterion (c)
  Younger                     0.39 (0.031)   0.42 (0.046)   0.39 (0.023)   0.30 (0.029)   0.40 (0.049)   0.41 (0.029)   0.43 (0.44)
  Older                       0.44 (0.040)   0.29 (0.081)   0.34 (0.049)   0.32 (0.044)   0.30 (0.048)   0.33 (0.054)   0.42 (0.035)

Data are presented as mean (standard error of the mean, SEM). A = auditory-only stimulus; V = visual-only stimulus; A0V = simultaneous audiovisual stimulus; A50V = audition leading vision by 50 ms; A100V = audition leading vision by 100 ms; V50A = audition lagging vision by 50 ms; V100A = audition lagging vision by 100 ms.
⁎ p < 0.05.
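The sensitivity (d′) and response criterion (c) values in Table 1, and the trimmed mean RTs, follow the standard signal-detection definitions cited in Section 2.5.1 (Stanislaw and Todorov, 1999). A minimal sketch of those computations; the function names and the log-linear correction for extreme rates are my own choices, since the paper does not state how hit or false-alarm rates of exactly 0 or 1 were handled:

```python
# Behavioural measures sketch: d' = z(HR) - z(FA), c = -(z(HR) + z(FA)) / 2,
# and the mean RT over responses within mean +/- 2.5 SD (Section 2.5.1).
from statistics import NormalDist, mean, stdev

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) from raw response counts.

    A log-linear correction (+0.5 to each count, +1 to each total) keeps
    z() finite when a rate would otherwise be exactly 0 or 1. This
    correction choice is an assumption, not taken from the paper.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hr) - z(fa), -(z(hr) + z(fa)) / 2.0

def trimmed_mean_rt(rts):
    """Mean response time over responses within mean +/- 2.5 SD."""
    m, s = mean(rts), stdev(rts)
    return mean(r for r in rts if abs(r - m) <= 2.5 * s)
```

For a participant with symmetric performance (e.g. 99 hits out of 100 targets and 1 false alarm out of 100 distractors), `dprime_and_c` yields a high d′ with c near zero, matching the pattern of values in Table 1.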
than that to the other stimulus conditions (all p < 0.05); for younger adults, there were no significant differences between stimulus conditions (all p > 0.05) (Table 1). There was no significant main effect of group [F(6, 168) = 1.54, p = 0.23, ηp² = 0.05] and no interaction between stimulus type and group [F(1, 28) = 1.25, p = 0.30, ηp² = 0.043]. Regarding the response criterion (c), the main effect of stimulus type, the main effect of group, and the stimulus type ∗ group interaction were not significant (all p > 0.05).

3.2. ERP results

Based on the t-test statistical analysis and the topographical response pattern, five ROIs (frontal: F7, F3, Fz, F4, F8; fronto-central: FC5, FC1, FC2, FC6; central: C3, Cz, C4; centro-parietal: CP5, CP1, CP2, CP6; and occipital: O1, Oz, O2) and six integration time intervals (80–110 ms, 140–160 ms, 190–220 ms, 210–240 ms, 230–290 ms, and 280–300 ms) were selected. Because there was no significant lateralization effect for any ROI (see the topography map for each temporal condition), we chose one electrode from each ROI for further analyses (Fz, FC1, Cz, CP1 and Oz). Analysis of the mean amplitudes using the 2 group (younger, older) ∗ 10 stimulus type (A0V, A_0_V, A50V, A_50_V, A100V, A_100_V, V50A, V_50_A, V100A, V_100_A) ∗ 5 electrode (Fz, FC1, Cz, CP1, Oz) ∗ 6 time interval (80–110 ms, 140–160 ms, 190–220 ms, 210–240 ms, 230–290 ms, and 280–300 ms) repeated-measures ANOVA revealed a significant group ∗ stimulus type ∗ electrode ∗ time interval interaction [F(180, 5040) = 2.46, p = 0.013, ηp² = 0.681]. This result suggested that the audiovisual integration patterns differed across SOA conditions and between the younger and older groups. Therefore, we analysed these differences in detail as follows.

3.2.1. Synchronous audiovisual condition (A0V)

Fig. 2A displays the A0V and A_0_V ERPs at the five ROIs for both the younger and older groups. For the younger group, audiovisual integration was observed in the 80–110 ms, 140–160 ms and 210–240 ms time intervals. For the older group, audiovisual integration was found in the 80–110 ms, 140–160 ms, 190–220 ms, and 280–300 ms time intervals (topography shown in Fig. 2B). These results suggest that early audiovisual integration occurs in similar time intervals in younger and older adults (80–110 ms, 140–160 ms). However, its distribution differed between the two groups: although audiovisual integration was observed in the frontal, fronto-central, central, and centro-parietal regions in both groups, the integration found in the occipital region for older adults was absent for younger adults in the 80–110 ms time interval. Additionally, late integration was markedly delayed (p < 0.05) in older adults (280–300 ms) compared with younger adults (210–240 ms). The 2 group (younger, older) ∗ 2 stimulus type (A0V, A_0_V) ∗ 5 electrode (Fz, FC1, Cz, CP1, Oz) ANOVA was conducted for the two early audiovisual integration time intervals, and there was a significant main effect of stimulus type for both intervals: 80–110 ms [F(1, 28) = 130.92, p < 0.001, ηp² = 0.824] and 140–160 ms [F(1, 28) = 42.72, p < 0.001, ηp² = 0.604]. These results indicate that significant early audiovisual integration occurs in both younger and older adults. Interestingly, in the 80–110 ms interval, the ERPs of A0V were more positive than those of A_0_V, whereas in the 140–160 ms interval, the ERPs of A0V were more negative than those of A_0_V.

3.2.2. Audition-leading vision conditions (A50V and A100V)

Fig. 3 displays the A50V, A_50_V, A100V, and A_100_V ERPs at the five ROIs for the younger and older groups. For the younger group, audiovisual integration was observed in the 230–290 ms interval after auditory stimulus onset in A50V (Fig. 3A) and in the 280–300 ms interval after auditory stimulus onset in A100V (Fig. 3B). For the older group, audiovisual integration was observed in the 80–110 ms, 190–220 ms, and 210–240 ms intervals after auditory stimulus onset in A50V and in the 80–110 ms, 190–220 ms, and 280–300 ms intervals after auditory stimulus onset in A100V. These results show that early integration was absent in younger adults but occurred in older adults. Moreover, late integration was delayed in both groups when the temporal disparity was increased from 50 ms to 100 ms. These results further revealed different audiovisual integration patterns between the A50V and A100V conditions for both younger and older adults. Therefore, ANOVAs were conducted separately for A50V and A100V in both the younger and older groups for each audiovisual integration time interval, using the factors stimulus type and electrode.
3.2.2.1. Audition-leading vision by 50 ms (A50V, Fig. 3A). For the average ERPs of younger adults at the three electrodes Fz, FC1, and Cz in the 230–290 ms time interval, the 2 stimulus type (A50V, A_50_V) ∗ 3 electrode (Fz, FC1, Cz) ANOVA revealed a significant main effect of stimulus type [F(1, 14) = 5.75, p = 0.031, ηp² = 0.291]. Further post hoc analysis revealed significant audiovisual integration at Fz (p = 0.004), FC1 (p = 0.032) and Cz (p = 0.028). There was also a significant main effect of electrode [F(2, 28) = 7.51, p = 0.004, ηp² = 0.349]. Further post hoc analysis revealed that, in the A50V condition, there was a significant difference between the FC1 and Fz
International Journal of Psychophysiology 124 (2018) 1–11
Y. Ren et al.
Fig. 2. Grand-averaged event-related potentials and topographic map of audiovisual integration elicited by simultaneous audiovisual stimuli. (A) Event-related potentials of the sum of the unimodal stimuli (A_0_V) and the bimodal (A0V) stimuli at a subset of electrodes, shown from 100 ms before stimulus onset to 400 ms after stimulus onset. (B) Topographic map of the difference between A0V and A_0_V, with significant audiovisual integration marked by the grey background.
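The additive-model contrast underlying these comparisons (bimodal AV response versus the point-wise sum of the unimodal responses) can be sketched in a few lines; the sampling rate, epoch length, and arrays below are hypothetical stand-ins, not the study's recordings:

```python
import numpy as np

FS = 500           # Hz, assumed sampling rate (2 ms per sample)
BASELINE_MS = 100  # epoch runs from -100 ms to +400 ms around stimulus onset

def integration_effect(erp_av, erp_a, erp_v):
    """Additive-model difference wave: AV - (A + V). Deviations from
    zero indicate super- or sub-additive audiovisual interaction."""
    return erp_av - (erp_a + erp_v)

def mean_amplitude(wave, start_ms, end_ms):
    """Mean amplitude of a wave in a post-stimulus window, e.g. 80-110 ms."""
    i0 = int((BASELINE_MS + start_ms) * FS / 1000)
    i1 = int((BASELINE_MS + end_ms) * FS / 1000)
    return float(wave[i0:i1].mean())
```

Mean amplitudes extracted this way per subject and electrode are the kind of values that enter the stimulus type ∗ electrode ANOVAs reported in the text.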
revealed a significant main effect of stimulus type [F (1, 28) = 29.37, p < 0.001, ηp2 = 0.521], electrode [F (3, 84) = 3.61, p = 0.040, ηp2 = 0.114] and group [F (1, 28) = 5.53, p = 0.026, ηp2 = 0.165]. Additionally, there was a significant interaction between electrode and group [F (3, 84) = 10.75, p < 0.001, ηp2 = 0.278] but no significant stimulus type ∗ electrode ∗ group interaction [F (3, 84) = 1.39, p = 0.258, ηp2 = 0.047]. Further post hoc analysis showed significant audiovisual integration for both younger and older adults at all four electrodes (all p < 0.017), and there was a significant difference at Fz (p = 0.005) and FC1 (p = 0.008) between younger and older adults in the A100V condition. In addition, there was a significant difference between CP1 and the other three electrodes (all p < 0.04) in the A100V condition for older adults but no significant differences among the four electrodes (all p > 0.50) for younger adults. Additionally, for the average ERPs of older adults on the FC1, Cz, CP1, and Oz electrodes in the 80–110 ms time interval, the 2 stimulus type (A100V, A_100_V) ∗ 4 electrode (FC1, Cz, CP1, Oz) ANOVA revealed a significant main effect of stimulus type [F (1, 14) = 9.72, p = 0.008, ηp2 = 0.410] and electrode [F (3, 42) = 40.99, p < 0.001, ηp2 = 0.745]. Further analysis showed significant audiovisual integration at all four electrodes (all p < 0.022) and significant differences among all four electrodes in the A100V condition (all p < 0.005). For the average ERPs on the CP1 and Oz electrodes in the 190–220 ms time interval, the 2 stimulus type (A100V, A_100_V) ∗ 2 electrode (CP1, Oz) ANOVA revealed a significant main effect of stimulus type [F (1, 14) = 24.62, p < 0.001, ηp2 = 0.64] but no significant main effect of electrode [F (1, 14) = 0.67, p = 0.426, ηp2 = 0.046].
Further analysis showed significant audiovisual integration in both the CP1 (p = 0.002) and Oz (p = 0.001) electrodes but no difference between electrodes (p > 0.05).
electrodes (p = 0.003), as well as between the FC1 and Cz electrodes (p = 0.008). However, there was no significant interaction between stimulus type and electrode [F (2, 28) = 1.45, p = 0.254, ηp2 = 0.094]. For the average ERPs of the older adults on the FC1, Cz, CP1 and Oz electrodes in the 80–110 ms time interval, the 2 stimulus type (A50V, A_50_V) ∗ 4 electrode (FC1, Cz, CP1, Oz) ANOVA revealed a significant main effect of stimulus type [F (1, 14) = 7.91, p = 0.014, ηp2 = 0.361]. Further post hoc analysis revealed significant audiovisual integration in FC1 (p = 0.034), Cz (p = 0.038), CP1 (p = 0.021), and Oz (p = 0.034). In addition, there was also a significant main effect of electrode [F (3, 42) = 38.38, p < 0.001, ηp2 = 0.733]. Further post hoc analysis revealed significant differences between all electrodes (all p < 0.007), except between the Cz and CP1 electrodes (p > 0.05). For the average ERPs on the CP1 and Oz electrodes in the 190–220 ms time interval, the 2 stimulus type ∗ 2 electrode ANOVA revealed a significant main effect of stimulus type [F (1, 14) = 8.61, p = 0.011, ηp2 = 0.386], and further analysis showed significant audiovisual integration in both CP1 (p = 0.047) and Oz (p = 0.007). For the average ERPs on the FC1, Cz, and CP1 electrodes in the 210–240 ms time interval, the 2 stimulus type (A50V, A_50_V) ∗ 3 electrode (FC1, Cz, CP1) ANOVA revealed a significant main effect of stimulus type [F (1, 14) = 7.99, p = 0.012, ηp2 = 0.363]. Further post hoc analysis revealed that there was significant audiovisual integration in Cz (p = 0.022) and CP1 (p = 0.008). In addition, there was also a significant main effect of electrode [F (2, 28) = 7.04, p = 0.016, ηp2 = 0.335], and further analysis showed that a significant difference was only found between the FC1 and Cz electrodes (p = 0.005) in the A50V condition.
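The post hoc electrode-wise comparisons reported above reduce, per electrode, to paired comparisons of mean amplitudes across the 15 subjects of a group; a minimal sketch with simulated amplitudes (the values and effect sizes are invented for illustration, and the study itself used stimulus type ∗ electrode repeated-measures ANOVAs before these contrasts):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 15                   # per group; paired df = 14, matching F(1, 14)
electrodes = ["Fz", "FC1", "Cz"]

# Invented per-subject mean amplitudes (µV) in the 230-290 ms window:
# rows = subjects, columns = electrodes.
amp_av = rng.normal(3.0, 1.0, size=(n_subjects, len(electrodes)))   # A50V
amp_sum = rng.normal(2.0, 1.0, size=(n_subjects, len(electrodes)))  # A_50_V

# Electrode-wise paired comparison of AV vs. summed-unimodal amplitudes.
posthoc = {}
for i, name in enumerate(electrodes):
    t_val, p_val = stats.ttest_rel(amp_av[:, i], amp_sum[:, i])
    posthoc[name] = (float(t_val), float(p_val))
```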
3.2.2.2. Audition‑leading vision by 100 ms (A100V, Fig. 3B). In the 280–300 ms time interval, for the average ERPs on the Fz, FC1, Cz, and CP1 electrodes for the two groups, the 2 group (younger, older) ∗ 2 stimulus type (A100V, A_100_V) ∗ 4 electrode (Fz, FC1, Cz, CP1) ANOVA
Fig. 3. Grand-averaged event-related potentials and Topography map of audiovisual integration elicited by audition‑leading vision stimuli. (A) We added the ERPs evoked by A in the time window from 100 ms before to 400 ms after A stimuli onset to the ERPs evoked by V in the time window 150 ms from before to 350 ms after V stimuli onset to create combined A_50_V ERPs. (B) We added the ERPs evoked by A in the time window from 100 ms before to 400 ms after A stimuli onset to the ERPs evoked by V in the time window from 200 ms before to 300 ms after V stimuli onset to create combined A_100_V ERPs. (C) Topography map of audiovisual integration in conditions where A led V by 50 ms (A50V - A_50_V) or 100 ms (A100V - A_100_V). Significant audiovisual integration is indicated by the grey background. Y: younger adults, E: older adults.
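The window shifts described in this caption amount to re-aligning the unimodal V ERP to the auditory-onset timeline before summation; the sketch below is an equivalent formulation under hypothetical epoch parameters (−100 to +400 ms at an assumed 500 Hz), not the authors' exact procedure:

```python
import numpy as np

FS = 500  # Hz, assumed sampling rate (2 ms per sample)

def shifted_sum(erp_a, erp_v, soa_ms):
    """Summed-unimodal ERP for an audition-leading condition (e.g. A_50_V).

    Both inputs are epoched on the same timeline relative to their own
    stimulus onset. Because V followed A by soa_ms, the V response is
    delayed by soa_ms on the auditory-onset timeline before summation,
    which is equivalent to epoching V from a window soa_ms earlier."""
    shift = int(round(soa_ms * FS / 1000))
    v_aligned = np.concatenate([np.zeros(shift), erp_v])[: len(erp_a)]
    return erp_a + v_aligned
```

With the SOA set to 0, this reduces to the plain point-wise sum used for the synchronous A_0_V comparison.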
Fig. 4. Grand-averaged event-related potentials and topographic map of audiovisual integration elicited by audition-lagging vision stimuli. (A) We added the ERPs evoked by A in the time window from 100 ms before to 400 ms after A stimulus onset to the ERPs evoked by V in the time window from 50 ms before to 450 ms after V stimulus onset to create the combined V_50_A ERPs. (B) We added the ERPs evoked by A in the time window from 100 ms before to 400 ms after A stimulus onset to the ERPs evoked by V in the time window from V stimulus onset to 500 ms after V stimulus onset to create the combined V_100_A ERPs. (C) Topographic map of audiovisual integration in the conditions in which audition lagged vision by 50 ms (V50A − V_50_A) or 100 ms (V100A − V_100_A). Significant audiovisual integration is marked by the grey background. Y: younger adults, E: older adults.
interval for older adults. Behavioural studies of the ageing effect have revealed non-specific slowing of cognitive processing, attributed to loss of brain mass, structural changes underlying functional alterations, or age-related loss of the inhibitory neurotransmitter GABA, resulting in generally dysfunctional sensory systems (Bedard and Barnett-Cowan, 2016; Freiherr et al., 2013; Mozolic et al., 2011; Takayama et al., 1992). Diederich et al. (2008) reported that the responses of older adults were significantly slower than those of younger adults under all conditions. Moreover, consistent with our results, Wu et al. (2012) observed a delayed audiovisual integration time window for older adults (260 ms after stimulus onset) compared with younger adults (240 ms after stimulus onset) using race-model analysis. Therefore, generally reduced cognitive function might provide a fundamental explanation for the delayed late integration in older adults. Although older adults retain an intact ability to perceive inputs from the environment, their perceptual ability varies greatly (Bedard and Barnett-Cowan, 2016; Diederich et al., 2008; Fiacconi et al., 2013; Freiherr et al., 2013; Laurienti et al., 2006; Mahoney et al., 2011; Mozolic et al., 2011; Peiffer et al., 2007; Wu et al., 2012). Our current behavioural data revealed that older adults have relatively lower visual perceptual sensitivity (p = 0.033). This decreased perceptual sensitivity can attenuate the peripheral processing speed of visual signals and may further delay audiovisual integration in multisensory brain areas. Talsma and Woldorff (2005) reported that attention modulates multisensory integration processing and that the integration effect is greater in the attended condition than in the unattended condition.
Extensive research has yielded evidence of attentional decline in older adults (Kok, 2000; Plude et al., 1994; Quigley et al., 2010); therefore, another possible reason for the delayed audiovisual interaction in older adults might be a reduction in visual attention.
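The race-model analysis attributed above to Wu et al. (2012) conventionally asks whether the bimodal RT distribution exceeds the probability-summation bound of the two unimodal distributions; a generic sketch of that test (our illustration, not the authors' code) follows:

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Race-model (probability-summation) test: integration is inferred
    wherever the bimodal RT CDF exceeds the bound CDF_A(t) + CDF_V(t),
    capped at 1. Positive return values indicate a violation."""
    def cdf(rts, t):
        # Empirical CDF evaluated at each point of the time grid.
        return np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
    return cdf(rt_av, t_grid) - bound
```

Evaluated on a grid of latencies, the earliest time point with a reliably positive value is what RT studies of this kind report as the onset of the behavioural integration window.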
3.2.3. Audition-lagging vision conditions (V50A and V100A)
Fig. 4 displays the V50A and V_50_A (Fig. 4A) and V100A and V_100_A (Fig. 4B) ERPs of the five ROI regions for the younger and older groups. Audiovisual integration occurred in the 140–160 ms time interval across electrodes Fz, FC1, Cz and CP1 in younger adults. The 2 stimulus type (V100A, V_100_A) ∗ 4 electrode (Fz, FC1, Cz, CP1) ANOVA revealed a significant main effect of stimulus type [F (1, 14) = 6.70, p = 0.021, ηp2 = 0.324] and electrode [F (3, 42) = 6.95, p = 0.004, ηp2 = 0.332]. Further analysis showed significant audiovisual integration for all four electrodes (p < 0.05) and a significant difference between the Fz and FC1 electrodes (p = 0.005), as well as between the Fz and Cz electrodes (p = 0.002), in the V100A condition. However, there were no significant differences among the other electrodes (all p > 0.05). Moreover, there was a significant ERP difference in the 190–220 ms time interval at the Oz electrode (t-test, p = 0.012). For the older group, audiovisual integration was observed in the 280–300 ms time interval at the FC1, Cz and CP1 electrodes, and the 2 stimulus type (V50A, V_50_A) ∗ 3 electrode (FC1, Cz, CP1) ANOVA revealed a significant main effect of stimulus type [F (1, 14) = 8.77, p = 0.010, ηp2 = 0.385] and electrode [F (2, 28) = 6.75, p = 0.015, ηp2 = 0.325]. Further analysis showed significant audiovisual integration at the three electrodes FC1 (p = 0.01), Cz (p = 0.023) and CP1 (p = 0.014). Additionally, there was a significant difference between the FC1 and CP1 electrodes (p = 0.048) in the V50A condition. In addition, Fig. 4 indicates some divergence between the V50A and V_50_A mean-amplitude waveforms, but this observation was not statistically confirmed.

4. Discussion
The results of the current study clearly showed that audiovisual integration was influenced by the temporal disparity between auditory and visual stimuli and that the effect of the temporal disparity differed between younger and older adults.
4.2. Multisensory integration in audition‑leading vision conditions
Early integration was absent in younger adults in the audition‑leading vision conditions but did occur in older adults. Colonius and Diederich (2004) proposed a ‘time-window-of-integration model’, presuming that cross-modal signal processing includes at least two serial stages of saccadic reaction time: an early afferent stage of peripheral processing (first stage) followed by a compound stage of converging sub-processes (second stage) (Colonius and Diederich, 2004; Diederich et al., 2008). Numerous investigations have shown that the auditory threshold tends to increase and that visual acuity generally decreases with age (Bäckman et al., 2006; Liu and Yan, 2007). Therefore, we propose that younger adults can perceive the audiovisual disparity in peripheral neural excitations in the visual and auditory pathways and thus do not integrate the two inputs in the first stage. However, because of the declining precision of their perceptual ability, it is difficult for older adults to perceive the temporal disparity between visual and auditory stimuli; although the stimuli are temporally asynchronous, older adults illusorily integrate the two signals. Recently, Bedard and Barnett-Cowan (2016) reported impaired timing of audiovisual events in the elderly. They systematically examined scores for simultaneity judgement, temporal order judgement, and stream/bounce illusion tasks in younger and older groups. In agreement with previous studies, their results confirmed an extended temporal binding window for the temporal order judgement and stream/bounce illusion tasks in older adults. These results also support the hypothesis that older adults integrate asynchronous stimuli in the early stage because of the declining precision of their perceptual ability.
Furthermore, the current study showed that in audition‑leading vision conditions, late integration was significantly delayed with increases in SOA. For younger adults, late integration was observed in the 230–290 ms time interval in the A50V condition and in the 280–300 ms time interval in the A100V condition. For older adults, late integration was observed in the 210–240 ms time interval in the A50V condition
4.1. Multisensory integration in the synchronous audiovisual condition
The results showed that the earliest integration in the occipital region (80–110 ms) occurred in older adults but was absent in younger adults. Previous studies have shown that regions traditionally considered sensory-specific (e.g., the primary visual cortex) exhibit an audiovisual integration effect (Calvert et al., 2000; Macaluso, 2006; Wang et al., 2008) and that reduced functional efficiency in the occipital region occurs more often in older adults than in younger adults (Goh, 2011). Even when performing equivalently to younger adults at low cognitive load levels, older adults showed increased activity and dedifferentiation of the occipital cortex (Goh, 2011; Grady, 2012; Lee et al., 2011). Such reduced selective engagement of separate visual regions may blur the hierarchical neural process and may account for the earlier audiovisual integration in older adults. Diaconescu et al. (2013) proposed that, compared with younger adults, older adults indeed activate a distinct brain network in response to cross-modal stimuli and that multisensory integration might play a compensatory role in normal ageing. Additionally, the behavioural data of the present study revealed that the perceptual sensitivity of older adults was significantly reduced (p < 0.05). Attention to visual stimuli may have become stronger to compensate for weak visual function. It is accepted that attention greatly influences audiovisual integration, and response enhancement is much greater when stimuli are attended (Laurienti et al., 2006; Talsma et al., 2007). Therefore, we propose that compensatory phenomena occurred to substitute for the reduced neural function associated with audiovisual integration in older adults. Additionally, late audiovisual integration was significantly delayed in older adults.
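The perceptual-sensitivity measure invoked here is presumably the signal-detection index d′ (Stanislaw and Todorov, 1999, cited in the reference list, describe its calculation); a standard computation follows, with a log-linear count correction that is our assumption rather than the paper's stated procedure:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm
    rate). Adding 0.5 to each count (log-linear correction) keeps the
    z-transform finite when a rate would otherwise be 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return float(norm.ppf(hit_rate) - norm.ppf(fa_rate))
```

A lower d′ for visual targets in the older group is the kind of difference summarised above as reduced perceptual sensitivity.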
In our current study, late integration occurred in the 210–240 ms time interval for younger adults and the 280–300 ms time
available time course, the two inputs could be integrated; otherwise, they would not be (Diederich and Colonius, 2015; Diederich et al., 2008; Mozolic et al., 2011; Stein, 2012; Van Atteveldt et al., 2007). The current results suggested that, because of the relatively faster audiovisual binding window in younger adults, the second auditory signal fell outside the time window triggered by the first visual signal in the V50A condition. However, because older adults, with their slower responses, exhibited a relatively wider time window of integration (Diederich et al., 2008; Laurienti et al., 2006; Peiffer et al., 2007; Wu et al., 2012), the auditory signal fell within the wide time window triggered by the antecedent visual signal, and audiovisual integration occurred for older adults in the V50A condition. This is the first report suggesting EEG evidence of the relatively delayed binding window in older adults. However, when the SOA was increased to 100 ms, although younger adults could perceive the dissociation, the antecedent visual signal could act as a cue facilitating the processing of the following auditory signal. Recent findings have suggested that putative sensory-specific cortices are responsive to inputs presented through different modalities and that visual temporal cues can substantially speed up forthcoming auditory processing (Griffin et al., 2002; Kayser et al., 2007; Kayser et al., 2008; Li et al., 2012; Tang et al., 2013; Van Wassenhove et al., 2005). However, such significant cross-modal facilitation only occurs when the antecedent cue and the following target are both presented within a limited time course, typically 100–200 ms (Macaluso, 2006; McDonald et al., 2001). Therefore, the significant difference between the ERP waves in the V100A and V_100_A conditions might be attributed to the cue-target effect more than to audiovisual integration.
In contrast, for older adults in the V100A condition, the second auditory signal might fall outside the integrative time window triggered by the first visual signal. However, the audiovisual stimuli interval was not sufficiently long to serve as a robust warning cue for the following auditory stimulus. Therefore, there was no significant interaction observed in the V100A condition for older adults.
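The window logic in this passage — the second input is integrated only if it arrives within the time course opened by the first — can be made concrete with a TWIN-style first stage in the spirit of Colonius and Diederich (2004); the exponential processing-time means and window widths below are illustrative assumptions, not fitted parameters:

```python
import random

def twin_integration_prob(soa_ms, mu_a, mu_v, window_ms, n=20_000, seed=1):
    """Monte-Carlo estimate of the first-stage integration probability:
    A and V peripheral processing times are exponential with means mu_a
    and mu_v (ms); integration occurs on a trial when both first-stage
    processes terminate within window_ms of each other, given the SOA."""
    random.seed(seed)
    hits = 0
    for _ in range(n):
        t_a = random.expovariate(1 / mu_a)           # A presented at t = 0
        t_v = soa_ms + random.expovariate(1 / mu_v)  # V presented at t = SOA
        if abs(t_a - t_v) <= window_ms:
            hits += 1
    return hits / n
```

A wider window raises the integration probability at every SOA, which is the qualitative pattern invoked above for older adults in the V50A condition.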
and in the 280–300 ms time interval in the A100V condition. This is the first report to indicate that late integration is delayed as SOA increases. Consistent with our current results, numerous previous behavioural investigations have reported slower responses to asynchronous than to synchronous visual-auditory stimuli (Bolognini et al., 2005; Ren et al., 2017; Yang et al., 2014). According to the ‘time-window-of-integration model’, because the first stage refers to very early sensory processing, the processing times for visual and auditory signals are assumed to be statistically independent (Colonius and Diederich, 2004; Diederich et al., 2008). In the A100V condition, the presentation of the visual stimulus was delayed relative to the A50V condition; the processing of the visual signal was correspondingly delayed, as was the second, neural integration stage. Therefore, it is logical to reason that audiovisual integration in the A100V condition was delayed compared with that in the A50V condition.

4.3. Multisensory integration in audition-lagging vision conditions
Interestingly, for the same temporal disparity between auditory and visual stimuli (A50V and V50A, A100V and V100A), integration differed markedly between the audition‑leading vision and audition-lagging vision conditions for both the older and the younger group. Although studies focusing on audiovisual temporal asynchrony processing are rare, evidence has shown that visual signals are realigned perceptually based on the auditory signal (Bedard and Barnett-Cowan, 2016; Di Luca et al., 2009). That is, when a non-matched audiovisual stimulus pair is presented, perceptual reports tend to be biased towards the temporal character of the auditory signals (Bedard and Barnett-Cowan, 2016; Shams et al., 2002). For example, when one flash is presented accompanied by two beeps, subjects tend to report that they perceived two flashes.
Two theories can explain this phenomenon: the ventriloquism theory and the moveable window theory (Morein-Zamir et al., 2003; Spence and Squire, 2003; Sugita and Suzuki, 2003). The ventriloquism theory proposes that the perceived arrival time of the visual signal can be ventriloquized into temporal alignment with a following sound (Morein-Zamir et al., 2003). Because auditory and visual signals travel at different velocities, they arrive at different times when they originate from an object at a distant location (Spence and Squire, 2003). To compensate for such differences when perceiving the real-world environment, our brain adaptively exhibits a relatively higher neural transmission rate for auditory signals than for visual signals (Talsma et al., 2009). The moveable window theory proposes that the temporal integration window moves as the source of the audiovisual stimuli becomes more distant, accommodating the fact that the auditory signal lags increasingly behind the visual signal with distance (Sugita and Suzuki, 2003). Therefore, in audition‑leading vision conditions, audiovisual integration can be induced immediately, whereas in audition-lagging vision conditions, the realignment takes time, leading to different audiovisual integration mechanisms between the audition‑leading and audition-lagging vision conditions. Additionally, in this study, the integration pattern differed significantly between younger and older adults in both the V50A and V100A conditions. No significant audiovisual integration occurred in the V50A condition in younger adults; however, a significant audiovisual integration time interval of 280–300 ms was observed across the fronto-central, central, and central-parietal regions for older adults.
Single-unit animal recording studies reported that when auditory and visual stimuli were presented close together in time, maximal response facilitation was generated by the overlap of the peaks triggered by each modality; however, this facilitation decayed to zero as the temporal disparity between the two modalities increased (Meredith et al., 1987; Stein and Meredith, 1993). Behavioural studies in humans also indicated that after the first stimulus, there is a time course for accepting a second stimulus. If the second stimulus fell into the
5. Conclusion
This study confirmed that audiovisual integration is greatly influenced by stimulus onset asynchrony and that the audiovisual temporal integration pattern differs between the audition‑leading and audition-lagging vision conditions. Additionally, the results showed different temporal asynchrony effects on audiovisual integration in younger and older adults. Our results indicated that there might be a compensatory mechanism easing the general cognitive decline in older adults. Moreover, these results may provide new viewpoints and reference data for age-related cognitive interventions.

Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgements
The authors sincerely thank Kohei Nakahashi for data collection for this study. We also thank the participants of the study. This study was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI grant number 16K18052; the National Natural Science Foundation of China (61473043; 61727807; 31600882; 31700973); the Beijing Municipal Science and Technology Commission (Z161100002616020); the Humanity and Social Science Youth Foundation of the Education Bureau of Hubei Province of China (16Q030, WY); and the Humanity and Social Science Youth Foundation of the Ministry of Education of China (16YJC190025).
References
Bäckman, L., Nyberg, L., Lindenberger, U., Li, S.-C., Farde, L., 2006. The correlative triad among aging, dopamine, and cognition: current status and future prospects. Neurosci. Biobehav. Rev. 30, 791–807.
Bedard, G., Barnett-Cowan, M., 2016. Impaired timing of audiovisual events in the elderly. Exp. Brain Res. 234, 331–340.
Bolognini, N., Frassinetti, F., Serino, A., Làdavas, E., 2005. “Acoustical vision” of below threshold stimuli: interaction among spatially converging audiovisual inputs. Exp. Brain Res. 160, 273–282.
Bravo, G., Hébert, R., 1997. Age- and education-specific reference values for the mini-mental and modified mini-mental state examinations derived from a non-demented elderly population. Int. J. Geriatr. Psychiatry 12, 1008–1018.
Bushara, K.O., Grafman, J., Hallett, M., 2001. Neural correlates of auditory–visual stimulus onset asynchrony detection. J. Neurosci. 21, 300–304.
Calvert, G.A., Campbell, R., Brammer, M.J., 2000. Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Curr. Biol. 10, 649–657.
Colonius, H., Diederich, A., 2004. Multisensory interaction in saccadic reaction time: a time-window-of-integration model. J. Cogn. Neurosci. 16, 1000–1009.
Di Luca, M., Machulla, T.-K., Ernst, M.O., 2009. Recalibration of multisensory simultaneity: cross-modal transfer coincides with a change in perceptual latency. J. Vis. 9, 7.
Diaconescu, A.O., Hasher, L., McIntosh, A.R., 2013. Visual dominance and multisensory integration changes with age. NeuroImage 65, 152–166.
Diederich, A., Colonius, H., 2015. The time window of multisensory integration: relating reaction times and judgments of temporal order. Psychol. Rev. 122, 232.
Diederich, A., Colonius, H., Schomburg, A., 2008. Assessing age-related multisensory enhancement with the time-window-of-integration model. Neuropsychologia 46, 2556–2562.
Dixon, N.F., Spitz, L., 1980. The detection of auditory visual desynchrony. Perception 9, 719–721.
Fiacconi, C.M., Harvey, E.C., Sekuler, A.B., Bennett, P.J., 2013. The influence of aging on audiovisual temporal order judgments. Exp. Aging Res. 39, 179–193.
Frassinetti, F., Bolognini, N., Làdavas, E., 2002. Enhancement of visual perception by crossmodal visuo-auditory interaction. Exp. Brain Res. 147, 332–343.
Freiherr, J., Lundström, J.N., Habel, U., Reetz, K., 2013. Multisensory integration mechanisms during aging. Front. Hum. Neurosci. 7.
Giard, M.H., Peronnet, F., 1999. Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study. J. Cogn. Neurosci. 11, 473–490.
Goh, J.O., 2011. Functional dedifferentiation and altered connectivity in older adults: neural accounts of cognitive aging. Aging and Disease 2, 30.
Grady, C., 2012. The cognitive neuroscience of ageing. Nat. Rev. Neurosci. 13, 491–505.
Griffin, I.C., Miniussi, C., Nobre, A.C., 2002. Multiple mechanisms of selective attention: differential modulation of stimulus processing by attention to space or time. Neuropsychologia 40, 2325–2340.
Guthrie, D., Buchwald, J.S., 1991. Significance testing of difference potentials. Psychophysiology 28, 240–244.
Kayser, C., Petkov, C.I., Augath, M., Logothetis, N.K., 2007. Functional imaging reveals visual modulation of specific fields in auditory cortex. J. Neurosci. 27, 1824–1835.
Kayser, C., Petkov, C.I., Logothetis, N.K., 2008. Visual modulation of neurons in auditory cortex. Cereb. Cortex 18, 1560–1574.
King, A.J., 2005. Multisensory integration: strategies for synchronization. Curr. Biol. 15, R339–R341.
Kok, A., 2000. Age-related changes in involuntary and voluntary attention as reflected in components of the event-related potential (ERP). Biol. Psychol. 54, 107–143.
Laurienti, P.J., Burdette, J.H., Maldjian, J.A., Wallace, M.T., 2006. Enhanced multisensory integration in older adults. Neurobiol. Aging 27, 1155–1163.
Lee, Y., Grady, C.L., Habak, C., Wilson, H.R., Moscovitch, M., 2011. Face processing changes in normal aging revealed by fMRI adaptation. J. Cogn. Neurosci. 23, 3433–3447.
Li, C., Chen, K., Han, H., Chui, D., Wu, J., 2012. An fMRI study of the neural systems involved in visually cued auditory top-down spatial and temporal attention. PLoS One 7, e49948.
Liu, B., Jin, Z., Wang, Z., Gong, C., 2011. The influence of temporal asynchrony on multisensory integration in the processing of asynchronous audio-visual stimuli of real-world events: an event-related potential study. Neuroscience 176, 254–264.
Liu, X., Yan, D., 2007. Ageing and hearing loss. J. Pathol. 211, 188–197.
Macaluso, E., 2006. Multisensory processing in sensory-specific cortical areas. Neuroscientist 12, 327–338.
Macmillan, N.A., 1993. Signal Detection Theory as Data Analysis Method and Psychological Decision Model.
Mahoney, J.R., Li, P.C.C., Oh-Park, M., Verghese, J., Holtzer, R., 2011. Multisensory integration across the senses in young and old adults. Brain Res. 1426, 43–53.
McDonald, J.J., Teder-Sälejärvi, W.A., Heraldez, D., Hillyard, S.A., 2001. Electrophysiological evidence for the “missing link” in crossmodal attention. Can. J. Exp. Psychol. 55, 141.
Meredith, M.A., Nemitz, J.W., Stein, B.E., 1987. Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. J. Neurosci. 7, 3215–3229.
Morein-Zamir, S., Soto-Faraco, S., Kingstone, A., 2003. Auditory capture of vision: examining temporal ventriloquism. Cogn. Brain Res. 17, 154–163.
Mozolic, J.L., Long, A.B., Morgan, A.R., Rawley-Payne, M., Laurienti, P.J., 2011. A cognitive training intervention improves modality-specific attention in a randomized controlled trial of healthy older adults. Neurobiol. Aging 32, 655–668.
Munhall, K.G., Gribble, P., Sacco, L., Ward, M., 1996. Temporal constraints on the McGurk effect. Percept. Psychophys. 58, 351–362.
Peiffer, A.M., Mozolic, J.L., Hugenschmidt, C.E., Laurienti, P.J., 2007. Age-related multisensory enhancement in a simple audiovisual detection task. Neuroreport 18, 1077–1081.
Plude, D.J., Enns, J.T., Brodeur, D., 1994. The development of selective attention: a lifespan overview. Acta Psychol. 86, 227–272.
Poliakoff, E., Shore, D., Lowe, C., Spence, C., 2006. Visuotactile temporal order judgments in ageing. Neurosci. Lett. 396, 207–211.
Quigley, C., Andersen, S.K., Schulze, L., Grunwald, M., Müller, M.M., 2010. Feature-selective attention: evidence for a decline in old age. Neurosci. Lett. 474, 5–8.
Ren, Y., Yang, W., Nakahashi, K., Takahashi, S., Wu, J., 2017. Audiovisual integration delayed by stimulus onset asynchrony between auditory and visual stimuli in older adults. Perception 46 (2), 205–218.
Senkowski, D., Saint-Amour, D., Höfle, M., Foxe, J.J., 2011. Multisensory interactions in early evoked brain activity follow the principle of inverse effectiveness. NeuroImage 56, 2200–2208.
Senkowski, D., Saint-Amour, D., Kelly, S.P., Foxe, J.J., 2007. Multisensory processing of naturalistic objects in motion: a high-density electrical mapping and source estimation study. NeuroImage 36, 877–888.
Senkowski, D., Talsma, D., Grigutsch, M., Herrmann, C.S., Woldorff, M.G., 2007. Good times for multisensory integration: effects of the precision of temporal synchrony as revealed by gamma-band oscillations. Neuropsychologia 45, 561–571.
Setti, A., Burke, K.E., Kenny, R.A., Newell, F.N., 2011. Is inefficient multisensory processing associated with falls in older people? Exp. Brain Res. 209, 375–384.
Setti, A., Finnigan, S., Sobolewski, R., McLaren, L., Robertson, I.H., Reilly, R.B., Kenny, R.A., Newell, F.N., 2011. Audiovisual temporal discrimination is less efficient with aging: an event-related potential study. Neuroreport 22, 554–558.
Shams, L., Kamitani, Y., Shimojo, S., 2002. Visual illusion induced by sound. Cogn. Brain Res. 14, 147–152.
Spence, C., 2011. Crossmodal correspondences: a tutorial review. Atten. Percept. Psychophys. 73, 971–995.
Spence, C., Squire, S., 2003. Multisensory integration: maintaining the perception of synchrony. Curr. Biol. 13, R519–R521.
Stanislaw, H., Todorov, N., 1999. Calculation of signal detection theory measures. Behav. Res. Methods Instrum. Comput. 31, 137–149.
Stein, B., Meredith, M., 1993. The Merging of the Senses. The MIT Press, Cambridge, MA.
Stein, B.E., 2012. The New Handbook of Multisensory Processing. MIT Press, Cambridge, MA, pp. 104–114.
Sugita, Y., Suzuki, Y., 2003. Audiovisual perception: implicit estimation of sound-arrival time. Nature 421, 911.
Takayama, H., Ogawa, N., Yamamoto, M., Asanuma, M., Hirata, H., Ota, Z., 1992. Age-related changes in cerebrospinal fluid γ-aminobutyric acid concentration. Clin. Chem. Lab. Med. 30, 271–274.
Talsma, D., Doty, T.J., Woldorff, M.G., 2007. Selective attention and audiovisual integration: is attending to both modalities a prerequisite for early integration? Cereb. Cortex 17, 679–690.
Talsma, D., Senkowski, D., Woldorff, M.G., 2009. Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli. Exp. Brain Res. 198, 313–328.
Talsma, D., Woldorff, M.G., 2005. Selective attention and multisensory integration: multiple phases of effects on the evoked brain activity. J. Cogn. Neurosci. 17, 1098–1114.
Tang, X., Li, C., Li, Q., Gao, Y., Yang, W., Yang, J., Ishikawa, S., Wu, J., 2013. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study. Neurosci. Lett. 553, 40–45.
Van Atteveldt, N.M., Formisano, E., Blomert, L., Goebel, R., 2007. The effect of temporal asynchrony on the multisensory integration of letters and speech sounds. Cereb. Cortex 17, 962–974.
Van Wassenhove, V., Grant, K.W., Poeppel, D., 2005. Visual speech speeds up the neural processing of auditory speech. Proc. Natl. Acad. Sci. U. S. A. 102, 1181–1186.
Van Wassenhove, V., Grant, K.W., Poeppel, D., 2007. Temporal window of integration in auditory-visual speech perception. Neuropsychologia 45, 598–607.
Wang, Y., Celebrini, S., Trotter, Y., Barone, P., 2008. Visuo-auditory interactions in the primary visual cortex of the behaving monkey: electrophysiological evidence. BMC Neurosci. 9, 1.
Wu, J., Yang, W., Gao, Y., Kimura, T., 2012. Age-related multisensory integration elicited by peripherally presented audiovisual stimuli. Neuroreport 23, 616–620.
Yang, W., Chu, B., Yang, J., Yu, Y., Wu, J., Yu, S., 2014. Elevated audiovisual temporal interaction in patients with migraine without aura. J. Headache Pain 15, 1.
Yang, W., Yang, J., Gao, Y., Tang, X., Ren, Y., Takahashi, S., Wu, J., 2015. Effects of sound frequency on audiovisual integration: an event-related potential study. PLoS One 10, e0138296.