neurology, psychiatry and brain research 19 (2013) 207–215
The effects of background noise on brain activity using speech stimuli on healthy young adults

Hanani Abdul Manan a,*, Ahmad Nazlim Yusoff a, Elizabeth A. Franz b, Siti Zamratol-Mai Sarah Mukari c

a Diagnostic Imaging and Radiotherapy Program, School of Diagnostic and Applied Health Sciences, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Jalan Raja Muda Abdul Aziz, 50300 Kuala Lumpur, Malaysia
b Department of Psychology and fMRIotago, University of Otago, William James Building, 275 Leith Walk, Dunedin 9016, New Zealand
c Audiology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, National University of Malaysia, Jalan Raja Muda Abdul Aziz, 50300 Kuala Lumpur, Malaysia
article info

Article history:
Received 26 March 2013
Received in revised form 9 August 2013
Accepted 3 September 2013
Available online 29 October 2013

Keywords: fMRI; Speech perception; STG; MTG; Compensatory strategy

abstract

Everyday spoken language processing does not occur in a novel acoustic environment, but rather in the presence of interfering background noise. In the present study, brain activation associated with speech perception (SP) processing in quiet (SPQ) and SP processing in 5-dB SNR noise (SPN) was examined in 15 healthy young adults using functional MRI. The behavioral performance shows no significant difference between SPN and SPQ, suggesting that background noise does not always impair spoken language comprehension in young healthy participants. The fMRI results indicate that both the superior temporal gyrus (STG) and middle temporal gyrus (MTG) were significantly activated during both the SPQ and SPN. This is attributed to the use of verbal stimuli in this study. Further activation for both SPQ and SPN was also found in other temporal areas and the cerebellum. Interestingly, however, specific comparisons between SPQ and SPN revealed significant increases in brain activation in the left STG, left MTG and bilateral cerebellum during SPN compared to SPQ. We suggest that the higher processing demands due to the presence of background noise are associated with compensatory strategies that allow the cognitive system to overcome noise-related interference, particularly implicating involvement of the left STG, left MTG and bilateral cerebellum. Findings are discussed in the context of corroborating evidence of such compensation.

© 2013 Elsevier GmbH. All rights reserved.
1. Introduction
This study compares the activated cortical areas during spoken language processing of speech stimuli presented in quiet compared to the same stimuli presented in background noise (5-dB SNR babble noise) in healthy young adults. Spoken language processing involves cognitive, motor and sensory processes1 to interpret and understand speech and non-speech sounds.2–4 Spoken language processing refers
specifically to how human listeners recognize speech sounds and use this information to understand spoken language.5 Previous studies on normal participants have suggested that listening to and understanding speech is an effortless task. However, everyday spoken language processing does not always occur in a novel acoustic environment but rather in the presence of interfering and competing sounds such as background noise. Thus, whether such situations compromise the quality of everyday life and communication remains debatable.
* Corresponding author. Tel.: +60 3 26878070; fax: +60 3 26878108. E-mail address: [email protected] (H.A. Manan).
0941-9500/$ – see front matter © 2013 Elsevier GmbH. All rights reserved. http://dx.doi.org/10.1016/j.npbr.2013.09.002
In the process of understanding speech or spoken language, listeners must attend to the auditory stimulus, perform acoustic analysis, map the stimulus to phonemic categories, store the information in memory for further processing, and finally map phonemes to meaning.5 Recent cognitive studies on speech perception have suggested that both the right and left hemispheres of the brain evolved with a specialization in cognitive and behavioral functions. Among the many important lateralized functions in the human brain are language and speech perception.6 Behavioral and neuroimaging studies have frequently shown that perception of verbal stimuli is consistently left-lateralized. The network involved in phonological processing has been found to include the superior temporal gyrus (STG; particularly posterior to the primary auditory cortex) and the inferior parietal lobule (IPL).7–9 Comparative studies of speech versus non-speech tasks have shown activation of the left superior temporal gyrus in speech tasks. This might be due to semantic processing and an increase in attention demands in the processing of complex speech stimuli.7,10,11 Classic neuropsychology on language comprehension has further demonstrated involvement of the posterior part of the STG.10 This pattern is consistent with Howard et al.12 which implicated the posterior portion of the left STG as being involved in auditory word processing. Previous studies have also observed that the presence of background noise was associated with increased involvement of the left hemisphere in speech sound processing, while right hemisphere processing was unaffected by noise.9,13,14 As suggested earlier, everyday spoken language processing does not always occur in a quiet environment. There is usually some level of noise as background. 
Noise can be defined as unwanted or annoying auditory input and in a more extreme case can be categorized as a form of environmental pollution.15 Background noise can cause temporary or permanent damage to the auditory system5 and can affect the ability to concentrate and communicate efficiently.3 It is well known that the ability to discriminate speech sounds decreases with increasing noise levels. For example, the syllables /ba/ and /da/4 were found to be difficult to distinguish in a noisy environment. In addition, prolonged exposure to noise can affect the brain organization of speech processing and attention control.3 Some of the environmental situations associated with noise are quite concerning. For example, it has been reported by Yusoff and Ishak16 that noise level exposure experienced by Klang Valley residents in Malaysia exceeds the Department of Environment of Malaysia (DOE) guidelines on a daily basis. The maximum sound level in Klang Valley is 72 dBA in daytime and falls to between 56 dBA and 60 dBA at night, which is higher than the WHO suggested noise limits for public surroundings of 55 dBA in daytime and 45 dBA at nighttime and DOE recommendations of 55 dBA in daytime and 50 dBA at night.16 The excessive noise level in Malaysia suggests a need for direct studies on speech processes and their underlying brain effects in the Malaysian population. In particular, it seems critical as a first step to evaluate and compare spoken language processing in quiet and in noise in this population. The present study examined spoken word processing in quiet and in background babble noise (5-dB SNR) in healthy young Malay adults. If spoken language processing in
background noise increases task demands over processing in quiet, we would expect to see an increase in activation of auditory and attention areas of the brain in the noise (compared to quiet) condition. In contrast, if spoken language processing in quiet and in background babble noise are similar in terms of processing demands, we would expect both tasks to activate the same brain areas with comparable activation intensity (number of activated voxels and t-value).
2. Materials and methods
2.1. Participants
Fifteen right-handed Malay male adults (assessed using the abbreviated Edinburgh inventory: Oldfield 1971) participated in the study (see Table 1). All were native Malay speakers and reported no history of psychiatric or neurological disorders or current use of any psychoactive medications. Each participant's health status was examined through an interview prior to the experiment. Based on self-report assessment (using a standard questionnaire: Rochester Hearing & Speech Center), no participant had evidence of auditory problems. The same participants had also taken part in our previous studies.17,18 After full explanation of the nature and risks of the study, informed consent was obtained according to the protocol approved by the Institutional Ethics Committee (IEC) of Universiti Kebangsaan Malaysia (Reference no.: UKM 1.5.3.5/244/NN-075-2009).
2.2. Data acquisition
Functional MRI scans were conducted in the Department of Radiology, UKM Medical Centre, using a 1.5 tesla magnetic resonance imaging (MRI) system (Siemens Magnetom Avanto) equipped with functional imaging options and echo planar imaging capabilities. A radiofrequency (RF) head coil was used for signal transmission and reception. Prior to each functional imaging scan, an MRI structural scan was obtained. A T1-weighted multi-planar reconstruction (MPR) spin-echo pulse sequence was collected with the following parameters: TR = 1240 ms, FOV = 250 mm × 250 mm, flip angle = 90°, matrix size = 128 × 128, and slice thickness = 1 mm. Functional images were then acquired using a gradient echo–echo planar imaging (GRE–EPI) pulse sequence. Each whole brain acquisition consisted of 21 axial slices, which comprised all brain regions including the cerebellum. The following parameters were used during the study: repetition time (TR) = 2000 ms,
Table 1 – Demographic and performance data obtained from 15 participants.

N: 15 (all male)
Age, range (years): 23–29
Age, mean ± SD (years): 27 ± 2.18
Years of education, mean ± SD: 14.80 ± 0.79
Word-based SPQ, accuracy rate, mean ± SD: 16.49 ± 2.28
Word-based SPN, accuracy rate, mean ± SD: 17.20 ± 2.92

Abbreviations: SPQ, speech perception task in quiet; SPN, speech perception task in noise.
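Section 3.1 reports a paired-samples t-test on the SPQ and SPN accuracy scores. As an illustrative sketch of that comparison (our own stdlib-only re-implementation with hypothetical scores, not the study data or the authors' code):

```python
import math

def paired_t(cond_a, cond_b):
    """Paired-samples t statistic for two equal-length lists of scores.

    Returns (t, df). Illustrative re-implementation of the test used to
    compare SPQ and SPN accuracy; not the authors' actual analysis code.
    """
    assert len(cond_a) == len(cond_b)
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator)
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

# Hypothetical accuracy scores for three participants (not study data):
t, df = paired_t([18, 17, 16], [17, 15, 13])
```

With 15 participants, as in the study, the test would have df = 14, matching the reported t(14).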
Fig. 1 – (a) Example of trials. Stimuli were presented in four different conditions: SPQ; SPN; Babble Noise (N); Baseline (Quiet). The sequence of the conditions was fixed: SPQ-baseline-SPN-baseline-N-baseline. Total duration of each trial is 16 s (Trial 1 at 0 s, Trial 2 at 16 s, and so on through Trial 120, ending at 1920 s). During stimulus trials, participants wait for the stimulus for the first 6 s; stimuli were presented at the 6th second and lasted approximately 5 s, and participants were given 5 s to repeat forward all the words presented. During baseline trials, participants were asked to clear their minds and keep still. (b) Examples of stimulus sequence. Illustration of a stimulus train consisting of a sequence of five unrelated familiar words (verbs and nouns, randomly selected; 0.6 s each, separated by 0.5 s silent gaps) used to produce the SPQ and SPN tasks.
echo time (TE) = 50 ms, field of view (FOV) = 192 × 192 mm, flip angle (α) = 90°, matrix size = 128 × 128 and slice thickness = 5 mm with 1.25 mm gap. The protocols and procedures of the study have been described in Manan et al.17,18 A sparse imaging paradigm was used to avoid the interference of scanner sound with the stimulus.19
2.3. Stimuli and materials

Stimuli consisted of a series of natural speech words produced by a Malay male adult voice, and were digitally recorded (Sony digital voice editor), then stored and edited (Adobe Audition 2.0) at an intensity level of 55 dB. The multi-talker babble noise was originally recorded from five volunteers reading different passages simultaneously and was edited to have an intensity level of 50 dB. The signal-to-noise ratio (SNR) was 5 dB throughout the presentation.

2.4. Paradigm and procedure

Auditory stimuli were presented binaurally. There were 120 trials in total, each with a duration of 16 s. As in Fig. 1(a), there
were four different conditions: (i) 20 trials listening to noise (N), (ii) 20 trials performing the speech perception task in 5 dB SNR (SPN), (iii) 20 trials performing the speech perception task in quiet (SPQ), and (iv) 60 rest trials with no stimuli, hereafter referred to as quiet (Q). The sequence of conditions was fixed: N-Q-SPN-Q-SPQ-Q-N. The main reason for a fixed sequence is that reaction time tends to be faster than with a varied sequence.20 Total scan time was 32 min. In order to construct experimental trials for the SPQ and SPN conditions, as in Fig. 1(b), a total of 40 (2-syllable and 3-syllable) verbs and nouns of unrelated familiar Malay words were randomized, producing the 40-trial sets. Five consecutive stimuli, each with 0.6 s duration and separated by 0.5 s silent gaps, made up a 5 s stimulus train. During a trial, the stimuli were presented at the 6th second and lasted approximately 5 s. The fMRI scans were acquired continuously while participants were instructed to repeat forward all the words presented during the SPQ and SPN tasks (referred to as a forward repeat task). During the N task participants were instructed to only listen to the stimulus presented.
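The trial arithmetic above can be checked with a short sketch (condition counts and durations are taken from the text; the variable names are ours):

```python
# Sketch of the fixed trial schedule described above (illustrative only).
TRIAL_S = 16                                        # each trial lasts 16 s
COUNTS = {"N": 20, "SPN": 20, "SPQ": 20, "Q": 60}   # Q = quiet baseline trials

total_trials = sum(COUNTS.values())                 # 120 trials
total_scan_s = total_trials * TRIAL_S               # 1920 s
total_scan_min = total_scan_s / 60                  # 32 min, as reported

# Within a stimulus trial: five words of 0.6 s separated by 0.5-s silent
# gaps make up the ~5 s stimulus train presented at the 6th second.
train_s = 5 * 0.6 + 4 * 0.5
```

This reproduces the reported 32-min total scan time and the ~5 s stimulus train.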
2.5. fMRI and behavioral procedures

Before entering the MR scanner, instructions about the task were explained in detail and the participants were instructed to focus with a clear mind throughout the procedure and to keep still. During the scan, the participants lay comfortably in a supine position in the MR scanner. An adjustable head holder was used to restrict head movement. Auditory stimuli were presented through earphones. During the scan, each individual participant's scores (number of correct answers) were recorded manually outside the scanner room.

2.6. Data analysis

Performance of speech perception processing using the forward repeat span task (SP) in quiet and in noise was analyzed in terms of accuracy. Paired t-tests were used to compare the quiet and noise conditions. A quantitative interpretation of the participants' performance during SP in quiet and in noise is made in relation to the fMRI results. For fMRI data processing, our sparse-imaging data were analyzed in a manner similar to procedures in our previous studies17,18 using MATLAB 7.4 – R2008a (Mathworks Inc., MA, USA) and Statistical Parametric Mapping (SPM8) (Functional Imaging Laboratory, Wellcome Department of Imaging Neuroscience, Institute of Neurology, University College London, UK; http://www.fil.ion.ucl.ac.uk/spm). The first two images of every EPI-recording session were discarded to account for the approach to steady state of the MR signal. Prior to image analysis, each participant's raw data were motion-corrected and normalized.21 The amount of absolute motion did not exceed 2 mm for any participant. Data were further analyzed using a 12-parameter non-linear normalization into the MNI reference space as implemented in SPM8, and with smoothing (FWHM = 6 mm). The fMRI data were analyzed according to the general linear model as implemented in SPM8. With regard to the different conditions, four regressors were included in the design: background babble noise (N), SPQ, SPN, and baseline (Q). The regressors were convolved using the hemodynamic response function as provided in SPM8. Statistical analysis was performed using a mixed effects model; fixed effects analysis (FFX) was used for single-participant analysis and random effects analysis (RFX) for group analysis. For group analysis, contrast images were computed for each participant, and then one-sample t-tests were performed. A cortical brain region was regarded as significantly activated only if a minimum cluster size of 10 voxels was reached at puncorr < 0.001. Voxels or clusters with t-values higher than 3.5 were included in the regions-of-interest (ROI) analysis using WFU PickAtlas.22 The laterality index (LI) was calculated using the formula LI = (VL − VR)/(VL + VR), in which VL is the number of activated voxels in the left hemisphere and VR is the number of activated voxels in the right hemisphere. The LI ranges from −1 to 1, with negative values indicating right hemisphere dominance and positive values indicating left hemisphere dominance.23

3. Results

3.1. Behavioral scores

Demographics and behavioral scores obtained from speech processing in quiet (SPQ) and in 5-dB SNR babble noise (SPN) are shown in Table 1. A paired t-test indicated that there is no significant difference in performance accuracy between SPQ (M = 17.20, SD = 2.83) and SPN (M = 17.52, SD = 2.19), t(14) = 0.73, p = 0.48. This result suggests that background noise (5-dB SNR) does not have an appreciable effect on task performance, at least in our young healthy participants.

3.2. fMRI

3.2.1. Listening to babble noise (N)
The task of listening to babble noise (N) was used as a control in this study. The main purpose of this task was to evaluate each participant's responses to a non-verbal stimulus for comparison with the other conditions. The superior temporal gyrus (STG) and middle temporal gyrus (MTG) were significantly activated bilaterally (puncorr < 0.001), indicating that both areas were involved in auditory processing. The coordinates and the anatomical localization for both areas are shown in Table 2. The t-value in the table refers to the local maxima in each particular STG and MTG in comparison to the whole brain. This t-value is generated from the RFX analysis at puncorr < 0.001. Furthermore, values of the laterality index (LI) indicate that both the STG and MTG demonstrate leftward asymmetry (LI values are tabulated in Table 2).
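The laterality index defined in Section 2.6 is straightforward to compute from voxel counts; as an illustrative sketch (our code, not part of the published analysis pipeline):

```python
def laterality_index(left_voxels, right_voxels):
    """LI = (VL - VR) / (VL + VR); +1 is fully left-lateralized, -1 fully right."""
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

# Voxel counts from Table 2 (N condition) reproduce the reported LI values:
li_stg = laterality_index(1131, 1120)   # ~0.005, near-symmetric
li_mtg = laterality_index(525, 326)     # ~0.23, clearer leftward asymmetry
```

Applying the same function to the voxel counts in Table 3 reproduces the LI values reported there as well.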
3.2.2. Brain activation due to SPQ and SPN
For speech perception processing of the forward repeat task in quiet (SPQ) and in noise (SPN), significant areas of activation with the coordinates and anatomical localizations are shown in Table 3 ( puncorr < 0.001). The t-values in the table refer to the main activated areas during both tasks in comparison to the whole brain. The t-values are generated from the RFX analysis at puncorr < 0.001.
Table 2 – Anatomical area, brain hemisphere, t-value, coordinates of maximum intensity (x, y, z) and number of activated voxels obtained from group analysis (puncorr < 0.001) during the listening to babble noise condition (N).

Anatomical area | Hemisphere | t-value | Coordinate (x, y, z mm) | NOV  | LI
STG             | L          | 6.06    | 66, 26, 6               | 1131 | 0.005
                | R          | 5.09    | 46, 20, 2               | 1120 |
MTG             | L          | 6.60    | 66, 38, 8               | 525  | 0.233
                | R          | 5.22    | 70, 34, 2               | 326  |

Abbreviations: NOV, number of activated voxels; LI, laterality index; STG, superior temporal gyrus; MTG, middle temporal gyrus; L, left; R, right.
Table 3 – Anatomical area, brain hemisphere, t-value, coordinates of maximum intensity (x, y, z), and number of activated voxels are obtained from group analysis RFX ( puncorr < 0.001) comparing SPQ and SPN tasks. Anatomical Area
Hemisphere
SPQ t-value
SPN
Coordinate
NOV
LI
t-value
(x, y, z mm)
Coordinate
NOV
LI
50, 12, 18 44, 26, 4
293 93
0.52
54,
28,
179 –
1.00
67 58
0.07
46 46
0.00
11 –
1.00
191 131
0.19
14 –
1.00
(x, y, z mm)
STG
L R
6.61 7.23
60, 46,
12, 12 24, 4
276 170
0.24
5.78 5.73
MTG
L R
5.59 4.92
54, 48,
28, 22,
122 25
0.66
5.81 –
PCG
L R
5.09 4.64
50, 50,
4, 46 8, 36
96 23
0.61
4.74 5.24
48, 50,
8, 42 8, 38
Cerebellum
L R
5.34 6.4
4, 74, 24 26, 64, 30
42 35
0.09
5.04 5.61
28, 24,
60, 66,
Thalamus
L R
– 4.96
– 42
1
4.88 –
Post CG
L R
7.67 5.25
HG
L R
5.58 –
4 8
– 0,
12, 8
62, 10, 14 56, 10, 22 32,
30, 10
–
300 145
0.35
5.35 5.5
50 –
1.00
5.72 –
4
–
2,
14, 8
– 62, 2, 18 50, 10, 36 34, –
30, 6
32 28
``–'': Not significant. Abbreviations: SPQ, speech perception in quiet; SPN, speech perception in noise; NOV, number of activated voxels; LI, laterality index; STG, superior temporal gyrus; MTG, middle temporal gyrus; PCG, precentral gyrus; IFG, inferior frontal gyrus; MFG, middle frontal gyrus; IPL, inferior parietal lobes; SPL, superior parietal lobes; Post-CG, postcentral gyrus; HG, heschl's gyrus; L, left hemisphere; and R, right hemisphere.
Both the SPQ and SPN tasks activated the bilateral superior temporal gyrus (STG), middle temporal gyrus (MTG), precentral gyrus (PCG), cerebellum, postcentral gyrus (Post CG), and left Heschl's gyrus (HG). However, the left thalamus was not activated during SPQ and the right thalamus was not activated during SPN. Direct comparison of the SPQ and SPN conditions revealed higher BOLD activation in the SPN condition in the left STG, left MTG, right PCG and bilateral cerebellum. Other areas, such as the right STG, right MTG, left PCG, left HG, and bilateral Post CG, demonstrated higher BOLD activation in the SPQ condition. Brain images of these results are presented in Fig. 2 (for SPQ) and Fig. 3 (for SPN). Results on the LI reveal that all activated brain areas in both tasks show a leftward asymmetry, except the thalamus, which demonstrates a rightward asymmetry during the SPQ condition. The LI values of all activated areas on both tasks are tabulated in Table 3.
4. Discussion
The main purpose of this study was to directly compare performance with speech stimuli on a forward repeat task
performed in quiet (SPQ) and in noise (SPN) and to examine corresponding areas of brain activation, with a specific focus on auditory-speech related areas that might show differential activation in the two conditions. Our primary findings are that there were similar performance accuracy measures on the two tasks and both tasks activated a network of brain areas in the temporal cortex and the cerebellum, but the two tasks also showed specific areas that were differentially active. The effects in specific regions of interest are discussed below in terms of the possible neural mechanisms underlying the two tasks and in particular, possible compensatory processes involved in speech processing in the presence of noise.
4.1. Listening to noise (N) task
The N task imposed the fewest processing demands, as one would expect given that participants were required to merely listen to babble noise. The observed activation of the STG and MTG was also expected, given our use of non-verbal stimuli. Our results are consistent with Burton et al.7,10 indicating activation of STG and MTG during a non-verbal listening task. This pattern of activation is also consistent with previous
Fig. 2 – SPQ task ( puncorr < 0.001). Activation maps showing the averaged activated volume of BOLD signal during SPQ task. [Note: left side of the brain is on the left: neurological conventions].
studies demonstrating the involvement of the STG and MTG in auditory processing.7,12 Results of the LI calculation revealed that both brain areas demonstrated a leftward asymmetry, consistent with left hemisphere dominance in the processing of non-verbal auditory stimuli.9,24
4.2. Activated areas during SPQ and SPN
Cortical areas with significant activation during SPQ were similar to those demonstrated for SPN. This suggests that SPQ and SPN use the same general neural networks for processing.
Fig. 3 – SPN task ( puncorr < 0.001). Activation maps showing the averaged activated volume of BOLD signal during SPN task. [Note: left side of the brain is on the left: neurological conventions].
Those brain areas included the STG, MTG, precentral gyrus (PCG), cerebellum, postcentral gyrus (Post-CG), and left Heschl's gyrus (HG). Both tasks also activated PCG and post-CG bilaterally. Given that participants were instructed to remember words, activation of these areas was expected. Bilateral PCG and post-CG were reported to play an important role in supporting the maintenance processes in both visual and
verbal memory.25 The PCG has also been proposed to contribute to a circuit for verbal memory and play a key role in the control of attention.26–28 The similarity with previous studies strengthens the claims from earlier research implicating the involvement of those areas in attending to and processing the presented stimuli while at the same time holding the information in memory for subsequent reporting.
Like the N task, the SPQ and SPN tasks also activated bilateral STG and MTG, as shown in Table 3 and Figs. 2 and 3. This again is easily explained as related to processing of auditory stimuli.7,10,12 While the STG is associated with auditory processing, it is also linked with spoken language processing.7 The MTG is also associated with accessing word meaning.12 Given that our stimuli consisted of meaningful spoken words, activation of these areas (bilateral STG and MTG) was expected. Our findings are also consistent with those observed in normal-hearing participants presented with speech stimuli.24 Furthermore, previous work suggests that these regions form part of the pathway for spoken word processing.5 A comparison of the tasks performed in quiet and in noise revealed that activation of the left STG and left MTG was spatially more extensive during SPN, with larger activation clusters than in the SPQ task. The same pattern has been observed in previous studies, in which background noise was associated with increased involvement of the left hemisphere in speech sound processing while right hemisphere processing was unaffected by noise.13,14 We propose that the increase in brain activity in the left STG and left MTG might be due to the higher demand of processing word stimuli in the presence of background noise; that is, background noise can have an interfering effect on completion of a given task. The increases in activation in the left STG and left MTG during SPN are also proposed to contribute to compensatory mechanisms that overcome potential interference effects. Results on the laterality index show leftward asymmetries of STG and MTG on both tasks (SPQ and SPN).
This result is consistent with a previous study indicating that the left STG is involved in auditory word processing.10 This pattern of activation is further supported by classic neuropsychology, which emphasizes the dominance of the left STG in language comprehension.4 Reliable activation was also found in the cerebellum bilaterally. The cerebellum has been identified as a region that may support the ability to perform syllable discrimination and identification tasks.2 It is hypothesized to be part of a network important for auditory-motor integration.29 The cerebellum has also been proposed to have interesting links with attention during memory processing.30–32 Results of the present study reveal that activation of the (bilateral) cerebellum increased during SPN compared to SPQ. We propose that this increase is associated with a greater recruitment of attention resources when stimuli are processed in the presence of background noise. We further propose that in the presence of background noise, this additional activation is part of a compensatory process to overcome interference. This type of compensatory strategy seems similar to that mentioned in Wong et al.5
4.3. Behavioral score in relation to fMRI
Interestingly, whereas the fMRI results (see Tables 2 and 3) reveal that the brain recruits additional activity in left STG, left MTG, and bilateral cerebellum during speech perception in noise (SPN), the behavioral performance, as measured by accuracy scores, was not different for the two tasks (see Table 1). We propose that in the noisy environment the brain uses the same mechanism(s) as in the quiet environment in order
to accomplish the same tasks but with increased processing demands. Thus, the additional activation in the same brain areas (left STG, left MTG, and bilateral cerebellum) in the noisy environment is compensating for the additional demand in cognitive processing.
5. Conclusions
The present study examined areas of the brain activated during the N, SPQ, and SPN tasks, focusing on the differences in neural activity and areas of activation during processing of speech stimuli in quiet and in noise. Both tasks activated the same brain regions, which suggests that both use the same neural networks. However, there were differences in activation intensity: activation increased during SPN in the left STG, left MTG, and bilateral cerebellum. These fMRI findings, in light of very similar behavioral performances on the two tasks, suggest that the involved brain structures (with increased activity during the noise condition) perform a compensatory function in overcoming potential interference associated with the noise. In our view, the cerebellum plays a key role in such processes, as it might modulate to some extent the neural processing of the involved cortical regions (in this case, areas of the temporal lobe).
Acknowledgements

We thank Sa'don Samian from the Department of Radiology, Universiti Kebangsaan Malaysia Medical Centre, for assistance with the fMRI scans. We also thank Noorazrul Azmie Yahya from the Diagnostic Imaging and Radiotherapy Program, School of Diagnostic and Applied Health Sciences, for his ideas and insight. This work was supported by the Research University Grant UKM GUP-SK-07-020-205.
References

1. Kuhl PK. Speech perception. Introduction to communication sciences and disorders. San Diego: Singular Publishing Group, Inc.; 1996.
2. Hickok G, Poeppel D. Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences 2000;4:173–81.
3. Diehl RL, Lotto AJ, Holt LL. Speech perception. Annual Review of Psychology 2004;55:149–79.
4. Darwin CJ. Listening to speech in the presence of other sounds. Philosophical Transactions of the Royal Society 2007;363(1493):1011–21.
5. Wong PCM, Jin JX, Gunasekera GM. Aging and cortical mechanisms of speech perception in noise. Neuropsychologia 2009;47:693–703.
6. Thomsen T, Rimol LM, Ersland L, Hugdahl K. Dichotic listening reveals functional specificity in prefrontal cortex: an fMRI study. NeuroImage 2004;21:211–8.
7. Burton MW, Small SL, Blumstein S. The role of segmentation in phonological processing: an fMRI investigation. Journal of Cognitive Neuroscience 2000;12:679–90.
8. Ashburner J, Friston KJ. Computational neuroanatomy. In: Frackowiak RSJ, Friston KJ, Frith CD, Dolan RJ, Price CJ, Zeki S, Ashburner J, Penny WD, editors. Human brain function. Amsterdam: Elsevier Academic Press; 2004. p. 655–72.
9. Sequeira SDS, Specht K, Hamalainen H, Hugdahl K. The effects of background noise on listening to consonant–vowel syllables. Brain and Language 2008;107:11–5.
10. Burton MW, Noll DC, Small SL. The anatomy of auditory word processing: individual variability. Brain & Language 2001;77:119–31.
11. Sakai KL. Language acquisition and brain development. Science 2005;310(5749):815–9. http://dx.doi.org/10.1126/science.1113530.
12. Howard D, Patterson K, Wise R, Brown WD, Friston K, Weiller C, et al. The cortical localization of the lexicons: positron emission tomography evidence. Brain 1992;115:1769–82.
13. Brattico E, Kujala T, Tervaniemi M, Alku P, Ambrosi L, Monitillo V. Long-term exposure to occupational noise alters the cortical organization of sound processing. Clinical Neurophysiology 2005;116:190–203.
14. Kujala T, Shtyrov Y, Winkler I, Saher M, Tervaniemi M, Sallinen M. Long-term exposure to noise impairs cortical sound processing and attention control. Psychophysiology 2004;41:875–81.
15. Kujala T, Brattico E. Detrimental noise effects on brain's speech functions. Biological Psychology 2009;17:135–43.
16. Yusoff S, Ishak A. Evaluation of urban highway environmental noise pollution. Sains Malaysiana 2005;34(2):81–7.
17. Manan HA, Franz EA, Yusoff AN, Mukari SZM. Hippocampal–cerebellar involvement in enhancement of performance in word-based BRT with the presence of background noise: an initial fMRI study. Psychology & Neuroscience 2012;5(2):247–56. http://dx.doi.org/10.3922/j.psns.2012.2.16.
18. Manan HA, Yusoff AN, Franz EA, Mukari SZM. Early and late shift of brain laterality with normal aging on a word-based short term memory task. ISRN Neurology 2013;2013:13. http://dx.doi.org/10.1155/2013/892072. [Article ID 892072].
19. Hall DA, Haggard MP, Akeroyd MA, Palmer AR, Summerfield AQ, Elliott MR, et al. "Sparse" temporal sampling in auditory fMRI. Human Brain Mapping 1999;7:213–23.
20. Hazeltine E. The representational nature of sequence learning: evidence for goal-based codes. In: Prinz W, Hommel B, editors. Attention and performance XIX. Oxford: Oxford University Press; 2002. p. 673–89.
21. Confalonieri L, Pagnoni G, Barsalou LW, Rajendra J, Eickhoff SB, Butler AJ. Brain activation in primary motor and somatosensory cortices during motor imagery correlates with motor imagery ability in stroke patients. ISRN Neurology 2012. http://dx.doi.org/10.5402/2012/613595.
22. Maldjian JA, Laurienti PJ, Kraft RA, Burdette JH. An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. NeuroImage 2003;19:1233–9.
23. Seghier ML. Laterality index in functional MRI: methodological issues. Magnetic Resonance Imaging 2008;26:594–601.
24. Salvi RJ, Lockwood AH, Frisina RD, Coad ML, Wack DS, Frisina DR. PET imaging of the normal human auditory system: responses to speech in quiet and in background noise. Hearing Research 2002;170:96–106.
25. Just MA, Carpenter PA, Keller TA, Eddy WF, Thulborn KR. Brain activation modulated by sentence comprehension. Science 1996;274:114–6.
26. Knops A, Nuerk HC, Fimm B, Vohn R, Willmes K. A special role for numbers in working memory? An fMRI study. NeuroImage 2006;29:1–14.
27. Justus T, Ravizza SM, Fiez JA, Ivry RB. Reduced phonological similarity effects in patients with damage to the cerebellum. Brain and Language 2005;95:304–18.
28. Karlsgodt KH, Shirinyan D, van Erp TGM, Cohen MS, Cannon TD. Hippocampal activation during encoding and retrieval in a verbal working memory paradigm. NeuroImage 2005;25:1224–31.
29. Schweizer TA, Alexander MP, Cusimano M, Stuss DT. Fast and efficient visuotemporal attention requires the cerebellum. Neuropsychologia 2007;45:3068–74.
30. Townsend J, Westerfield M, Leaver E, Makeig S, Jung T, Pierce K, et al. Event-related brain response abnormalities in autism: evidence for impaired cerebello-frontal spatial attention networks. Cognitive Brain Research 2001;11:127–45.
31. Teder-Salejarvi WA, Pierce KL, Courchesne E, Hillyard SA. Auditory spatial localization and attention deficits in autistic adults. Cognitive Brain Research 2005;23:221–34.
32. Courchesne E. Brainstem, cerebellar and limbic neuroanatomical abnormalities in autism. Current Opinion in Neurobiology 1997;7:269–78.