Neuroscience Letters 707 (2019) 134283
Research article
Deaf adolescents have bigger responses for somatosensory and visual stimulations

Çağdaş Güdücü a,1, İpek Ergönül a,1, Adile Öniz a, Ahmet Ömer İkiz b, Murat Özgören a,c,⁎

a Dokuz Eylül University, Faculty of Medicine, Department of Biophysics, Balçova, Izmir, Turkey
b Dokuz Eylül University, Faculty of Medicine, Department of Otorhinolaryngology, Balçova, Izmir, Turkey
c Department of Biophysics, Faculty of Medicine, Near East University, Nicosia, Cyprus
ARTICLE INFO

Keywords: Somatosensory evoked potentials; Visual evoked potentials; Electroencephalography; Congenitally deaf; Unimodal study

ABSTRACT
The function of the sensory systems in people with sensory disabilities has been one of the most investigated topics in brain research. In these studies, mostly visual stimuli have been employed when investigating deaf participants, and a limited number of electrophysiological studies revealed better visual sensory processing in deaf participants. On the other hand, studies deploying tactile stimuli mostly used either electrical or painful stimuli, or focused on psychophysical assessment of tactile thresholds. The present study aimed to evaluate electrophysiological brain responses in deaf and control groups with a unimodal study design including both visual and non-painful tactile stimuli, and to reveal possible changes in brain plasticity on a per-modality basis. Thirteen congenitally deaf adolescents (mean: 14.61 ± 1.06 years; 7 girls) and 10 adolescents with normal hearing (16.6 ± 2.72 years; 4 girls) were recruited for the study. Somatosensory evoked potentials (SEP) and visual evoked potentials (VEP) were recorded in separate sessions while electroencephalography (EEG) was taken; to maintain neutrality, the sessions were presented in random order. Brain responses to non-painful tactile and visual stimuli were measured for the N1, P2, and N2 components. In the SEP session all component amplitudes were significantly larger in the deaf group than in the control group, whereas in the VEP session only the P2 and N2 amplitudes of the deaf group were significantly larger. In addition, the latency of the N1 component in the VEP session was significantly earlier in the deaf group. These findings suggest early cortical excitability, lower neuronal capacity usage, and more efficient sensory processing in the deaf group.
1. Introduction

The function of the sensory systems has been one of the most investigated topics in brain research. Structural and functional changes in the brain due to sensory loss have been well studied with both electrophysiological [1–6] and morphological methods [5,7–9]. Most of these studies were carried out with blind participants, and only a very limited number have been conducted with deaf participants. Studies with deaf participants have mostly employed visual stimuli [10]. Studies examining parameters associated with visual stimuli demonstrated that deaf participants outperform controls. Most of this research employed central and peripheral visual tasks in non-hearing and hearing participants, and non-hearing participants outperformed hearing subjects in terms of reaction times [11–14]. However, in their review Voss et al. reported that deaf and control subjects have been shown to be comparable for visual skills such as contrast sensitivity, motion velocity, and motion sensitivity [15]. Similarly, electrophysiological findings of better visual sensory processing in deaf individuals are available [1,2,4,6,16]. By contrast, there are very few studies on tactile stimulus processing in deaf individuals. Glick and Sharma suggested that cross-modal plasticity by vision in congenital deafness could be due to compensatory dependence on the visual modality for communication, especially in difficult listening situations [17]. Studies deploying tactile stimuli mostly used either electrical [18] or painful stimuli, or they focused on psychophysical
Abbreviations: SEP, somatosensory evoked potentials; VEP, visual evoked potentials; EEG, electroencephalography
⁎ Corresponding author at: Department of Biophysics, Faculty of Medicine, Dokuz Eylül University, Izmir, Turkey. E-mail address: [email protected] (M. Özgören).
1 The first two authors contributed equally to this work.
https://doi.org/10.1016/j.neulet.2019.134283
Received 4 October 2018; Received in revised form 14 May 2019; Accepted 20 May 2019; Available online 23 May 2019
0304-3940/ © 2019 Elsevier B.V. All rights reserved.
Fig. 1. The grand average somatosensory evoked potentials (SEP) are shown on the left and the grand average visual evoked potentials (VEP) on the right, both at the FCZ electrode. The measured peak points are named N1, P2, and N2, respectively. The y-axis shows amplitude in microvolts; the x-axis shows latency in milliseconds. Stimulus onset is marked as time zero (0 ms) on the x-axis. Gray lines represent the deaf participants, black lines the control group. The EEG epoch starts 1000 ms before the stimulus and lasts until 2000 ms after stimulus onset. Positive amplitudes are plotted upward and negative amplitudes downward. N = 13 for the deaf and N = 10 for the control group.
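The N1, P2, and N2 peak points named in the caption above can be read off an averaged epoch as (amplitude, latency) pairs. A minimal NumPy sketch, assuming an array sampled at 1 kHz with a 1000 ms pre-stimulus baseline; the search windows are illustrative assumptions, not those used in the study:

```python
import numpy as np

FS = 1000        # sampling rate (Hz)
PRE_MS = 1000    # epoch starts 1000 ms before stimulus onset

def peak_measure(evoked, win_ms, negative=True):
    """Return (amplitude, latency_ms) of the most negative (or positive)
    point inside a post-stimulus search window of an averaged epoch."""
    lo = (PRE_MS + win_ms[0]) * FS // 1000
    hi = (PRE_MS + win_ms[1]) * FS // 1000
    seg = evoked[lo:hi]
    i = int(np.argmin(seg) if negative else np.argmax(seg))
    return float(seg[i]), win_ms[0] + i * 1000 // FS
```

For example, an N1 search between 100 and 300 ms post-stimulus would be `peak_measure(avg, (100, 300))`, and a positive-going P2 search `peak_measure(avg, (150, 350), negative=False)`.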
room. Participants were seated in a comfortable chair during the recordings. EEG signals were collected using an EEG cap (Quik, Compumedics Neuromedical Supplies Inc., Australia) according to the 10-10 international positioning system [21]. Reference electrodes were placed at the earlobes. SCAN 4.3 was used to assess the EEG recordings, which were digitized at a 1 kHz sampling frequency. The impedance difference between the reference channels and the rest of the electrodes was always less than 10 kΩ. Stimulus onsets were marked on the ongoing EEG without any delay. EEG epochs extended from 1000 ms before to 2000 ms after stimulus onset. The electrooculogram (EOG) was recorded with electrodes placed at the outer canthi of the left and right eyes. Trials in which ocular activity exceeded 50 μV were discarded from the recordings. Data were band-pass filtered with cutoff frequencies of 0.5 and 48 Hz. Following this process, an average file was created for every participant. FCZ channel data were evaluated primarily for this study. Both the amplitudes and latencies were measured for the peak points, which are described in Fig. 1.
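The steps described above (0.5–48 Hz band-pass, epochs from 1000 ms before to 2000 ms after onset, rejection of trials with ocular activity above 50 μV, then per-participant averaging) can be sketched as follows. This is a rough NumPy/SciPy illustration of the workflow, not the SCAN 4.3 processing actually used; the array names and the filter order are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000                    # sampling rate (Hz)
PRE, POST = 1000, 2000       # epoch window: -1000 ms .. +2000 ms
REJECT_UV = 50.0             # EOG rejection threshold (microvolts)

def bandpass(x, lo=0.5, hi=48.0, fs=FS, order=2):
    """Zero-phase band-pass filter with the study's cutoff frequencies."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def evoked_average(eeg, eog, onsets):
    """Epoch one continuous channel around stimulus onsets (in samples),
    discard trials whose EOG exceeds the threshold, and average the rest."""
    eeg = bandpass(eeg)
    kept = []
    for t in onsets:
        window = slice(t - PRE, t + POST)
        if np.max(np.abs(eog[window])) > REJECT_UV:
            continue                     # ocular artifact: drop the trial
        kept.append(eeg[window])
    return np.mean(kept, axis=0)         # one evoked average per participant
```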
assessments of thresholds associated with tactile stimuli [14,19]. We found very few studies in the literature investigating non-painful tactile stimuli with electrophysiological analysis in deaf participants [3,5]. The electrophysiological literature explains the superior performance of sensory-disabled individuals in terms of capacity utilization and brain plasticity; however, contradictory as well as consistent results have been reported. In the tactile domain, Hauthal et al. reported that a deaf group processed tactile stimuli more efficiently than a control group; they also found no differences between control and deaf groups during visual stimulus processing, whereas Bottari et al. reported earlier cortical processing of visual stimuli in a deaf group [3,15]. A thorough study on deaf subjects could address the visual, tactile, and chemosensory modalities, as these modalities are shared by control and deaf subjects. In this regard, the present study aimed to evaluate electrophysiological brain responses in deaf and control groups with a unimodal study design including both visual and non-painful tactile stimuli, and to evaluate potential changes in brain plasticity on a per-modality basis. We hypothesized that the deaf group would have larger amplitudes than the controls due to reorganization of the primary and secondary sensory areas.
2.3. Stimuli and procedure

Somatosensory evoked potentials (SEP) and visual evoked potentials (VEP) were recorded separately from the participants by delivering somatosensory and visual stimuli. To maintain neutrality among the sessions, they were presented in random order. Participants rested for 5 min between sessions. The stimuli were marked on the EEG recordings using a special embedded micro-controlled stimulus unit (EMISU) [22]. A pneumatic stimulator (Somatosensory Stimulus Generator, 4-D Neuroimaging, USA) was used to deliver the non-painful tactile stimulation. The stimulator delivered the stimulus via plastic hoses ending in rubber membranes, which were placed on the pulps of the fingers. The membranes inflated at stimulus onset and deflated after the stimulus. A single type of non-painful tactile stimulus was presented to the second and third fingers of the right hand. The tactile stimulus pressure was set to 140 kPa, and the inter-stimulus interval was randomized within a range of 2.5–4.5 s. A total of 60 stimuli were delivered to each participant (the first 30 stimuli to the second finger and the second 30 to the third finger, or vice versa). The stimuli used in the visual evoked potential recordings were designed in MATLAB and delivered as a gray circle on a 19-inch LCD monitor. The gray circle was positioned in the middle of the screen, and its luminance was set to 35 cd/m2. Participants were seated 130 cm from the screen. The visual stimulus was presented for 500 ms, and the inter-stimulus interval was randomized within a range of 2.5–4.5 s. A total of 60 stimuli were delivered to each participant.
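The presentation scheme above (60 stimuli per session, inter-stimulus interval jittered between 2.5 and 4.5 s, first half to one finger and second half to the other in the SEP session) could be generated roughly as follows; the function and field names are illustrative, not the EMISU implementation [22]:

```python
import random

def build_sep_schedule(n=60, isi=(2.5, 4.5), first="second", seed=None):
    """Return (onset_s, finger) pairs: the first n//2 stimuli go to one
    finger and the rest to the other, with a uniformly jittered ISI."""
    rng = random.Random(seed)
    other = "third" if first == "second" else "second"
    t, sched = 0.0, []
    for i in range(n):
        sched.append((round(t, 3), first if i < n // 2 else other))
        t += rng.uniform(*isi)      # 2.5-4.5 s jitter between stimuli
    return sched
```

The finger order ("second first" vs. "third first") would be counterbalanced across participants by passing a different `first` argument.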
2. Materials and methods

2.1. Participants

Thirteen congenitally and profoundly deaf adolescents (mean: 14.61 ± 1.06 years; 7 girls) and 10 adolescents with normal hearing (16.6 ± 2.72 years; 4 girls) were recruited for the study. The age of the participants did not differ between the deaf and control groups (U = 43, Z = −1.416, p = .157). All participants had normal or corrected-to-normal vision (wearing eyeglasses or contact lenses in daily life) with no further reported visual problems, and none had color blindness. Only right-handed individuals were recruited, and the Edinburgh Handedness Test was used to determine the dominant hand [20]. All deaf participants were signers. The study and control group participants were selected according to their school grades. The study was carried out at the "Five Senses Laboratory" of the Biophysics Department at Dokuz Eylul University. Three researchers of the project team learnt sign language in order to communicate with the deaf participants during the experiment. In addition, one of the teachers from the "Tulay Aktas School for Impaired Hearing" accompanied the students throughout the project. The parents of the participants gave written informed consent for the study. The study was approved by the Non-invasive Research Ethics Committee of Dokuz Eylul University and the Ministry of Education.
2.4. Statistical analysis
2.2. Electroencephalography recordings
SPSS 22 was employed for the statistical analysis. Descriptive statistics were calculated, and the normality of the data was determined via Shapiro-Wilk tests. Following the determination of normality, the Mann-Whitney U test was employed for the comparisons of two independent groups.
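The two-group comparisons described in this section (Shapiro-Wilk normality screening, the non-parametric Mann-Whitney U test, and the one-way ANOVA used for the SEP/VEP comparisons) can be sketched with SciPy standing in for SPSS 22; the alpha threshold and function structure are illustrative assumptions:

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Shapiro-Wilk on each group first; if either departs from normality,
    use the Mann-Whitney U test, otherwise a two-group one-way ANOVA
    (for two groups, F(1, df) equals the squared pooled t statistic)."""
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a < alpha or p_b < alpha:
        u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        return "Mann-Whitney U", u, p
    f, p = stats.f_oneway(a, b)
    return "one-way ANOVA", f, p
```

When several components (N1, P2, N2) are tested this way, the resulting p-values would still need the Bonferroni correction mentioned in this section.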
The electroencephalography (EEG) recordings were taken in a well-ventilated, dimly lit, electromagnetically and acoustically shielded
Fig. 2. Amplitudes for the SEP sessions for the deaf and control groups are shown on the left, and amplitudes for the VEP sessions on the right. Gray bars represent the deaf group, black bars the control group. N = 13 for the deaf and N = 10 for the control group (* p < 0.05; ** p < 0.005).
stimulations reported greater amplitudes of the N200 component in deaf participants compared with the control group [3,4,6]. The N200 component, which falls within the same time window as the N1 in our study, is a negative deflection peaking at about 200 ms after stimulus presentation. Similarly, the present study showed a larger N1 amplitude to visual stimuli in the deaf group compared with the control group; however, this difference was not statistically significant. On the other hand, N1 amplitudes to tactile stimuli were significantly larger in the deaf group than in the control group. Hauthal et al. [3] explained the differences in the amplitude of the N200 (N1) component by a change in sensory processing for visual stimuli, while no significant difference was reported for tactile stimuli. Correspondingly, the non-significant differences in VEP amplitudes can be explained by the nature of the simple stimuli, which may not be sufficient to reveal capacity changes in sensory processing; with a complex visual stimulus design, the plasticity of sensory processing might be demonstrable in the deaf group. On the other hand, in the present study a significant difference was demonstrated in the latency of the N1 component in the VEP session, but no such difference was found in the SEP session. In the related literature, a significant difference has been shown in N1 latency to tactile stimulation, while similar differences were not demonstrated in VEP sessions. Bottari et al. [16] reported earlier detection of visual stimuli in the deaf group compared with the control group. Similarly, in their study with healthy participants, Callaway and Halliday [23] suggested that an earlier N1 latency can be an indicator of lower capacity usage during visual stimulus processing. Earlier and larger responses to visual as well as tactile stimulation in the deaf group could indicate more efficient sensory processing compared with the control group.
While earlier responses can reflect more efficient processing, larger non-cognitive responses can be the result of enhanced cortical excitability [24]. Previous studies investigated the P300 component, which lies within the same time window described as P2 in our study, with reference to the evoked
A one-way ANOVA with Bonferroni correction was conducted to compare the brain responses of the independent groups in the VEP and SEP sessions.

3. Results

VEP and SEP for the deaf and control groups are shown in Fig. 1. Amplitudes and latencies were calculated for both sessions and are shown in Figs. 2 and 3. Brain responses to non-painful tactile stimuli were measured for the N1, P2, and N2 peaks, and all amplitudes were significantly larger in the deaf group than in the control group. Mean (S.D.) SEP amplitudes are presented in Table 1. Brain responses to visual stimuli were measured for the N1, P2, and N2 peaks; the P2 and N2 amplitudes of the deaf group were significantly larger than those of the control group. Mean (S.D.) VEP amplitudes are presented in Table 1. Although the N1 amplitude was larger in the deaf group than in the control group, the difference did not reach statistical significance, F(1, 16) = .823, p = .378. In the SEP session the latency differences between the deaf and control groups were not statistically significant. Mean (S.D.) SEP latencies are presented in Table 2. In the VEP session the latency of the N1 component was significantly earlier in the deaf group than in the control group, F(1, 16) = 5.98, p = .026. The P2 and N2 latencies did not differ significantly between the groups. Mean (S.D.) VEP latencies are presented in Table 2.

4. Discussion

The present study investigated brain responsiveness to visual and non-painful tactile stimulation in deaf and control participants within a counter-balanced unisensory study design. The literature on visual
Fig. 3. Latencies for the SEP sessions for the deaf and control groups are shown on the left, and latencies for the VEP sessions on the right. Gray bars represent the deaf group, black bars the control group. N = 13 for the deaf and N = 10 for the control group (* p < 0.05).
Table 1
N1, P2, N2 mean amplitudes (μV, mean ± S.D.) and ANOVA coefficients of SEP and VEP for the deaf and control groups.

AMPLITUDE        Control            Deaf               df (between, within)   F       p
SEP   N1         −4.06 ± 0.86       −18.65 ± 13.14     1, 18                  10.27   .005**
SEP   P2          4.20 ± 1.40        13.70 ± 4.13      1, 18                   7.61   .013*
SEP   N2         −4.06 ± 2.69        −7.37 ± 3.49      1, 18                   5.42   .032*
VEP   N1         −4.58 ± 1.22        −6.07 ± 4.47      1, 16                    .82   .378
VEP   P2          4.33 ± 1.98         8.80 ± 3.49      1, 16                  10.34   .005**
VEP   N2         −1.51 ± 1.58        −4.58 ± 4.16      1, 16                   4.60   .048*
areas in the brain [29,30]. Cardon suggested that regions of the auditory cortex as well as the somatosensory cortex are activated by vibrotactile stimuli in cochlear-implanted children, whereas the vibrotactile stimulus elicits activation only in the somatosensory cortex in the normal-hearing group [33]. Our study of congenitally deaf subjects points to a special organization of brain areas, with a new focus on multi-modal sensory processing. One could hypothesize that the brains of congenitally deaf individuals have undergone a neurodevelopmental process in which the typical brain regions for the various sensory modalities were re-wired. Hence, lacking proper auditory stimulation during development, these brain regions may have favored other plastic adaptations with regard to the various modalities.
potentials’ setup. Hauthal et al. [3] showed larger P300 amplitudes to both visual and tactile stimulation in the deaf group compared with the control group. Coherently, our study found significantly larger P2 amplitudes in the deaf group. In addition, although not significantly, the P2 latencies to visual and tactile stimulation were earlier in the deaf group. The significant P2 amplitude differences for visual and tactile stimulation found in our study can be explained by activation of a more organized structure as well as recruitment of a greater neural population during stimulus processing in the deaf group. The latency differences could also reflect extra attention processes or more efficient processing in the deaf group. Karns and Knight reported that the behavioral response to visual stimuli is faster than the detection of vibrotactile or auditory stimuli, also pointing to increased attention processes in the deaf participants of our study [25]. The N2 component, which occurred in the 400–500 ms time window, is associated with attention-related components and/or semantic processing of visual stimulation [26,27]. Likewise, in studies employing congruent and incongruent visual stimulation independent of semantic processes, an N400 was observed in the discrimination of stimulus type and in responses to unexpected stimuli [28]. In the present study, larger N400 amplitudes were measured for both visual and tactile stimulation in the deaf group. Activation not only of the primary and secondary sensory areas related to the stimulation but also of the auditory homologues of these regions could be one explanation for the significant amplitude differences. Voss et al. reported BOLD changes in the auditory cortex of deaf subjects in response to visual motion [15]. Heimler and Pavani reported a significant response-time advantage in deaf adults on a visual detection task compared with the control group,
but this advantage did not occur in a vibro-tactile task. The findings of our study on the early detection of visual and tactile stimuli are similar to Heimler and Pavani's behavioral findings [14]. Cross-modal reorganization of brain areas deprived by a sensory loss has been shown in the related literature [29–32]. When a sensory modality is deprived, its primary sensory areas are recruited by the intact sensory modalities. Petrus et al. demonstrated that visual deprivation causes an enhancement in the auditory cortex (A1) in adult animals, and they also observed thalamocortical plasticity [31]. In other studies with deaf animals, visual localization and movement-detection performance decreased when the auditory regions were inhibited, as these functions had been reallocated to homologous
5. Conclusion

In the present study, brain responsiveness to visual and non-painful tactile stimuli, and the sensory processing of these stimulations, were investigated in deaf and control groups. The earlier N1 latency in visual brain responses can be associated with more efficient neural processing due to auditory deprivation; in other words, this superiority in the deaf group can be explained by early cortical excitability and lower neuronal capacity usage. On the other hand, the larger amplitudes in the deaf group can be correlated with more efficient sensory processing and/or with different cortical representation areas. The sample size can be considered a limitation of this study. Future studies employing a larger sample of participants will make it possible to investigate the trending group results in terms of electrophysiology. In addition, neural processing in sensory-disabled populations could be further explained by employing additional analyses such as entropy or source localization.

Contributors

CG: Data collection and analyses, writing and reviewing the manuscript.
IE: Writing the manuscript, building the statistical analyses and results.
AO: Reviewing the data and commenting on the manuscript.
AOI: Design of the study, reviewing the manuscript.
MO: Design of the study, writing and reviewing the manuscript.
Table 2
N1, P2, N2 mean latencies (ms, mean ± S.D.) and ANOVA coefficients of SEP and VEP for the deaf and control groups.

LATENCY          Control             Deaf               df (between, within)   F      p
SEP   N1         185.33 ± 37.35      173.90 ± 15.74     1, 18                  .852   .368
SEP   P2         302.77 ± 47.76      265.90 ± 32.55     1, 18                  4.19   .055
SEP   N2         466.55 ± 70.38      444.63 ± 64.21     1, 18                  .52    .476
VEP   N1         274.87 ± 145.64     158.10 ± 38.92     1, 16                  5.98   .026*
VEP   P2         373.37 ± 172.02     260.00 ± 56.66     1, 16                  3.87   .067
VEP   N2         480.50 ± 187.58     393.50 ± 41.26     1, 16                  2.05   .171
Conflict of interest

The authors report no conflicts of interest in this work.

Funding

This work was supported by the Turkish Research Council (TUBITAK-108S113) and by the Dokuz Eylul University Scientific Projects Fund (ID: 2012.KB.SAG.083).

Acknowledgements

The authors would like to thank R. Ugras Erdogan for the experimental setup, and the Tulay Aktas School for Impaired Hearing for their patience during the recording processes.

References

[1] B.A. Armstrong, H.J. Neville, S.A. Hillyard, T.V. Mitchell, Auditory deprivation affects processing of motion, but not color, Brain Res. Cogn. Brain Res. 14 (2002) 422–434, https://doi.org/10.1016/S0926-6410(02)00211-2.
[2] D. Bottari, B. Heimler, A. Caclin, A. Dalmolin, M.-H. Giard, F. Pavani, Visual change detection recruits auditory cortices in early deafness, Neuroimage 94 (2014) 172–184, https://doi.org/10.1016/j.neuroimage.2014.02.031.
[3] N. Hauthal, S. Debener, S. Rach, P. Sandmann, J.D. Thorne, Visuo-tactile interactions in the congenitally deaf: a behavioral and event-related potential study, Front. Integr. Neurosci. 8 (2014) 98, https://doi.org/10.3389/fnint.2014.00098.
[4] N. Hauthal, J.D. Thorne, S. Debener, P. Sandmann, Source localisation of visual evoked potentials in congenitally deaf individuals, Brain Topogr. 27 (2014) 412–424, https://doi.org/10.1007/s10548-013-0341-7.
[5] G. Hickok, D. Poeppel, K. Clark, R.B. Buxton, H.A. Rowley, T.P. Roberts, Sensory mapping in a congenitally deaf subject: MEG and fMRI studies of cross-modal non-plasticity, Hum. Brain Mapp. 5 (1997) 437–444, https://doi.org/10.1002/(SICI)1097-0193(1997)5:6<437::AID-HBM4>3.0.CO;2-4.
[6] H.J. Neville, A. Schmidt, M. Kutas, Altered visual-evoked potentials in congenitally deaf adults, Brain Res. 266 (1983) 127–132, https://doi.org/10.1016/0006-8993(83)91314-8.
[7] E.M. Finney, I. Fine, K.R. Dobkins, Visual stimuli activate auditory cortex in the deaf, Nat. Neurosci. 4 (2001) 1171–1173, https://doi.org/10.1038/nn763.
[8] D. Bavelier, A. Tomann, C. Hutton, T. Mitchell, D. Corina, G. Liu, H. Neville, Visual attention to the periphery is enhanced in congenitally deaf individuals, J. Neurosci. 20 (2000) RC93, https://doi.org/10.1038/nn1110-1309.
[9] G.D. Scott, C.M. Karns, M.W. Dow, C. Stevens, H.J. Neville, Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex, Front. Hum. Neurosci. 8 (2014) 177, https://doi.org/10.3389/fnhum.2014.00177.
[10] F. Pavani, D. Bottari, Visual Abilities in Individuals With Profound Deafness: A Critical Review, CRC Press, Boca Raton (FL), 2012.
[11] W. Hong Lore, S. Song, Central and peripheral visual processing in hearing and nonhearing individuals, Bull. Psychon. Soc. 29 (1991) 437–440, https://doi.org/10.3758/BF03333964.
[12] H.N. Reynolds, Effects of foveal stimulation on peripheral visual processing and laterality in deaf and hearing subjects, Am. J. Psychol. 106 (1993) 523–540, http://www.ncbi.nlm.nih.gov/pubmed/8296925.
[13] D. Bottari, E. Nava, P. Ley, F. Pavani, Enhanced reactivity to visual stimuli in deaf individuals, Restor. Neurol. Neurosci. 28 (2010) 167–179, https://doi.org/10.3233/RNN-2010-0502.
[14] B. Heimler, F. Pavani, Response speed advantage for vision does not extend to touch in early deaf adults, Exp. Brain Res. 232 (2014) 1335–1341, https://doi.org/10.1007/s00221-014-3852-x.
[15] P. Voss, O. Collignon, M. Lassonde, F. Lepore, Adaptation to sensory loss, Wiley Interdiscip. Rev. Cogn. Sci. 1 (2010) 308–328, https://doi.org/10.1002/wcs.13.
[16] D. Bottari, A. Caclin, M.-H. Giard, F. Pavani, Changes in early cortical visual processing predict enhanced reactivity in deaf individuals, PLoS One 6 (2011) e25607, https://doi.org/10.1371/journal.pone.0025607.
[17] H. Glick, A. Sharma, Cross-modal plasticity in developmental and age-related hearing loss: clinical implications, Hear. Res. 343 (2017) 191–201, https://doi.org/10.1016/j.heares.2016.08.012.
[18] L.E. Charroó-Ruíz, T. Picó, M.C. Pérez-Abalo, Mdel C. Hernández, S. Bermejo, B. Bermejo, B. Álvarez, A.S. Paz, U. Rodríguez, M. Sevila, Y. Martínez, L. Galán, Cross-modal plasticity in deaf child cochlear implant candidates assessed using visual and somatosensory evoked potentials, MEDICC Rev. 15 (2013) 16–22, https://doi.org/10.1590/S1555-79602013000100005.
[19] H. Frenzel, J. Bohlender, K. Pinsker, B. Wohlleben, J. Tank, S.G. Lechner, D. Schiska, T. Jaijo, F. Rüschendorf, K. Saar, J. Jordan, J.M. Millán, M. Gross, G.R. Lewin, A genetic basis for mechanosensory traits in humans, PLoS Biol. 10 (2012) e1001318, https://doi.org/10.1371/journal.pbio.1001318.
[20] R.C. Oldfield, The assessment and analysis of handedness: the Edinburgh inventory, Neuropsychologia 9 (1971) 97–113, https://doi.org/10.1016/0028-3932(71)90067-4.
[21] G. Klem, H. Luders, H. Jasper, C. Elger, The ten-twenty electrode system of the International Federation, Electroencephalogr. Clin. Neurophysiol. 10 (1958) 371–375, https://doi.org/10.1016/0013-4694(58)90053-1.
[22] M. Ozgoren, U. Erdogan, O. Bayazit, S. Taslica, A. Oniz, Brain asymmetry measurement using EMISU (embedded interactive stimulation unit) in applied brain biophysics, Comput. Biol. Med. 39 (2009) 879–888, https://doi.org/10.1016/j.compbiomed.2009.07.001.
[23] E. Callaway, R. Halliday, The effect of attentional effort on visual evoked potential N1 latency, Psychiatry Res. 7 (1982) 299–308.
[24] Ç. Güdücü, E. Eskicioğlu, D. Öz, A. Öniz, R. Çakmur, M. Özgören, Auditory brain oscillatory responses in drug-naïve patients with Parkinson's disease, Neurosci. Lett. 701 (2019) 170–174, https://doi.org/10.1016/j.neulet.2019.02.039.
[25] C.M. Karns, R.T. Knight, Intermodal auditory, visual, and tactile attention modulates early stages of neural processing, J. Cogn. Neurosci. 21 (2009) 669–683, https://doi.org/10.1162/jocn.2009.21037.
[26] E.W. Wlotko, K.D. Federmeier, Two sides of meaning: the scalp-recorded N400 reflects distinct contributions from the cerebral hemispheres, Front. Psychol. 4 (2013) 181, https://doi.org/10.3389/fpsyg.2013.00181.
[27] A. Zani, G. Marsili, A. Senerchia, A. Orlandi, F.M.M. Citron, E. Rizzi, A.M. Proverbio, ERP signs of categorical and supra-categorical processing of visual information, Biol. Psychol. 104 (2015) 90–107, https://doi.org/10.1016/j.biopsycho.2014.11.012.
[28] S.C. Steffensen, A.J. Ohran, D.N. Shipp, K. Hales, S.H. Stobbs, D.E. Fleming, Gender-selective effects of the P300 and N400 components of the visual evoked potential, Vision Res. 48 (2008) 917–925, https://doi.org/10.1016/j.visres.2008.01.005.
[29] S.G. Lomber, M.A. Meredith, A. Kral, Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf, Nat. Neurosci. 13 (2010) 1421–1427, https://doi.org/10.1038/nn.2653.
[30] D. Bavelier, E.A. Hirshorn, I see where you're hearing: how cross-modal plasticity may exploit homologous brain structures, Nat. Neurosci. 13 (2010) 1309–1311, https://doi.org/10.1038/nn1110-1309.
[31] E. Petrus, A. Isaiah, A.P. Jones, D. Li, H. Wang, H.-K. Lee, P.O. Kanold, Crossmodal induction of thalamocortical potentiation leads to enhanced information processing in the auditory cortex, Neuron 81 (2014) 664–673, https://doi.org/10.1016/j.neuron.2013.11.023.
[32] H.-K. Lee, J.L. Whitt, Cross-modal synaptic plasticity in adult primary sensory cortices, Curr. Opin. Neurobiol. 35 (2015) 119–126, https://doi.org/10.1016/j.conb.2015.08.002.
[33] G. Cardon, Somatosensory Cross-modal Reorganization in Children with Cochlear Implants, University of Colorado Boulder, 2015.