International Journal of Pediatric Otorhinolaryngology xxx (2015) xxx–xxx

Contents lists available at ScienceDirect

International Journal of Pediatric Otorhinolaryngology
journal homepage: www.elsevier.com/locate/ijporl
Speech processing in children with cochlear implant

Takwa A. Gabr a,*, Mohammad R. Hassaan b

a Associate Professor of Audiology, Audiology Unit, ENT Department, Faculty of Medicine, Tanta University, Tanta, Egypt
b Associate Professor of Audiology, Audiology Unit, ENT Department, Faculty of Medicine, Zagazig University, Zagazig, Egypt

A R T I C L E   I N F O

Article history: Received 1 June 2015; received in revised form 29 August 2015; accepted 1 September 2015; available online xxx

Keywords: Cochlear implants; Speech processing; Speech-evoked ABR (S-ABR); Cortical auditory evoked potentials (CAEPs)

A B S T R A C T

Cochlear implants (CIs) can be used effectively in profoundly hearing-impaired children.
Objectives: This work was designed to assess speech processing at the brainstem and cortical levels in children fitted with CIs, and to investigate the possible influence of brainstem processing of speech on cortical processing in these children.
Methods: Twenty children fitted with CIs underwent aided sound-field audiologic evaluation and speech-evoked cortical auditory evoked potentials (S-CAEPs); according to the results, children were classified into two groups: group I with good cortical responses and group II with poor cortical responses. This was followed by speech-evoked ABR (S-ABR) recording.
Results: The P1 component of the CAEPs was recorded in all children, while the other components showed variable results. The S-ABR was recorded in all children, even those with poor S-CAEPs responses, who showed delayed D, E, F and O latencies. However, S-ABR amplitudes did not show any significant difference between the two groups.
Conclusions: Children fitted with CIs showed immediate cortical activation following device programming, and this activity depends on the age at implantation as well as the child's age. The S-ABR provides a new clinical tool that revealed an important role of the brainstem in complex-sound processing that contributes to cortical processing.
© 2015 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

Cochlear implants (CIs) are devices designed to use electric stimulation of the remaining auditory nerve fibers to restore hearing in profoundly impaired individuals [1]. Unlike a hearing aid, a CI bypasses the damaged inner ear, permitting direct stimulation of the auditory nerve fibers [2]. Cochlear implants represent one of the most important achievements of modern medicine: for the first time in history, an electronic device is able to restore a lost sense of hearing [3]. Prelingually deaf children develop significant speech perception and production abilities over time, as the use of CIs radically improves deaf children's access to spoken language and the intelligibility of their speech [4]. These achievements may appear limited in the first two years, but show significant improvement after the second year of implantation, and do not

* Corresponding author at: ENT Department, Tanta University Hospitals, El-Geesh Street, Tanta 31725, Egypt. Tel.: +2 0100 13 23 962; fax: +2 040 333545. E-mail address: [email protected] (T.A. Gabr).

reach a plateau even 5 years after implantation [5,6]. However, children with CIs show deficits in language abilities compared with hearing children, at least in the first years following implantation [7,8]. Verifying CI benefit in a very young population is a challenging process owing to the absence of spoken language and of feedback from the subject. Results regarding the acquisition of spoken language in children with profound deafness fitted with CIs are remarkable, and such verification is a crucial element of the rehabilitation process that follows surgical implantation [9]. The auditory brainstem response (ABR) provides an objective measure for the evaluation of hearing thresholds in infants, children, and difficult-to-test individuals [10]. Traditionally, the ABR has used short, simple stimuli such as pure tones and tone bursts. Recently, the ABR has also been recorded to complex sounds such as speech; termed the complex ABR (c-ABR), it provides an objective measure of subcortical speech processing [11]. The complex ABR arises largely from the inferior colliculus of the upper midbrain [12], functioning as part of a circuit that interacts with cognitive, top-down influences. Unlike the click-evoked ABR, the c-ABR waveform is remarkably similar to its complex stimulus

http://dx.doi.org/10.1016/j.ijporl.2015.09.002
0165-5876/© 2015 Elsevier Ireland Ltd. All rights reserved.

Please cite this article in press as: T.A. Gabr, M.R. Hassaan, Speech processing in children with cochlear implant, Int. J. Pediatr. Otorhinolaryngol. (2015), http://dx.doi.org/10.1016/j.ijporl.2015.09.002


waveform, allowing for fine-grained evaluation of the representation of timing, pitch, and timbre [13]. The use of aided auditory evoked potentials (AEPs) for the assessment of amplification benefit has received much attention in the past decade because of its objectivity and applicability in young populations [14]. Specifically, the use of CAEPs has the advantage of a cortical origin, giving an idea about the function of the higher auditory centers [15]. Moreover, the use of speech stimuli to evoke cortical potentials can predict speech perception in young amplification users who are not suited to psychophysical tests [16]. Researchers have linked the efficiency of cortical responses, in terms of their latencies, amplitudes [17] and number of produced waves [18], with the speech recognition scores of HA or CI users. In new CI users, particularly those with delayed neural maturation, these potentials may not be produced [19]. The complex ABR represents a lower neural response with an earlier maturational course than the cortical response [20], which may provide information about amplification benefit in young CI patients. This relatively earlier course of maturation may minimize the factors affecting its reproducibility at young ages and may offer the advantage of predicting speech capabilities [21].

2. Aim of the work

In this study we hypothesized that recording of the S-ABR in CI users can be used as a predictor of speech processing at the cortical level. We used speech syllables for c-ABR recording, so we refer to the complex ABR as the speech-ABR (S-ABR). This work was designed to evaluate and compare S-ABR and S-CAEPs recordings in children with CIs, aiming to investigate the possible influence of brainstem processing of speech on cortical processing in these children.

3. Subjects and methods

We recruited twenty children (2–6 years) fitted with unilateral CIs for this study. They were chosen from children attending the Audiology Units at Tanta University Hospitals.
Consent was obtained from the children's parents after explaining the test procedures, and this study was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). All the children had pre- or peri-lingual onset of severe to profound sensorineural hearing loss and received their implants at ages ranging from 2 to 6 years. The etiology of hearing loss was heredofamilial in ten patients (with positive family history), post-febrile in two patients and idiopathic in the rest.

Inclusion criteria: children with bilateral severe to profound SNHL that was not successfully managed with optimal hearing aid fittings for at least 6 months. Exclusion criteria: un-cooperative children, including those with mental retardation or developmental or behavioral disorders; irregular HA use before CI surgery; or improper rehabilitation therapy. The children met the selection criteria for CI: bilateral severe to profound hearing loss as shown by sound-field or play audiometry, absent ABR and absent otoacoustic emissions, normal IQ, normal EEG activity, normal appearance of the cochlea and auditory nerve as evidenced by CT scan and MRI, and an unsatisfactory aided response after proper binaural HA fitting for at least 6 months before the decision to proceed to CI surgery. After CI surgery and programming, sound-field testing with the CIs was done using warble tones and speech materials; a good aided response was defined as an aided response at 30 dB HL or better along the frequency range of 250–4000 Hz. The CI types were: Sonata with Opus 2 processor (MED-EL) in 10 patients, Freedom processor (Cochlear) in 5 patients and Harmony

processor (Advanced Bionics) in 5 patients. As regards the side of implantation, 9 patients received the CI in the right ear and 11 in the left ear. The children of this work underwent the following procedures.

3.1. Aided sound field

Children were seated in a sound-proof room, positioned one meter away from and at a 45° angle to the right and left loudspeakers. The child was asked to indicate whenever he/she heard the warble tones until the aided threshold was reached. The test was done at 500, 1000, 2000 and 4000 Hz; the speech reception threshold (SRT), with very simple monosyllabic words or digits, or the speech detection threshold (SDT) was also obtained, according to the child's vocabulary.

3.2. Aided click-evoked ABR

Click-evoked ABR was recorded using the Smart-EP evoked potentials system from Intelligent Hearing Systems (IHS). Recording started at 70 dB HL to confirm the presence of wave V, using a repetition rate (RR) of 19.3/s and a time window of 0–12 ms. After recording a response at 70 dB HL, the intensity was reduced in 10 dB steps until the aided threshold was reached, to confirm satisfactory aided sound-field results, particularly in young children. After 1–2 months of regular CI use on a stable map with reliable and satisfactory aided sound-field and aided ABR results, children were enrolled in the rehabilitation program. At the start of the rehabilitation program, the following procedures were done for all children.

3.3. Speech-evoked CAEPs (S-CAEPs)

CAEPs were recorded in response to the CV syllable /da/ of 206 ms duration, presented at 70 dB HL with a 0.5/s RR. The filter setting was 1–30 Hz with alternating polarity, the time window was 0–450 ms, and the total number of sweeps was 30. Three averages were recorded, and responses were considered present if the components of the S-CAEPs were identified in at least 2 of the 3 averages.

3.4.
Speech-evoked ABR (S-ABR)

The speech-ABR was recorded using the same CV speech stimulus used for the S-CAEPs (the syllable /da/), presented at 70 dB HL with an 11.1/s RR and a time window of 0–75 ms. As in the S-CAEPs recording, three averages were recorded and responses were considered present if the components of the S-ABR were identified in at least 2 of the 3 averages. For both types of ABR recording, the filter setting was 150–1500 Hz with alternating polarity and the total number of sweeps was 1024. Both stimuli (click and /da/) used for recording the ABR and CAEPs were delivered via a loudspeaker at a 45° azimuth angle toward the implanted side, at a distance of 50 cm. For each evoked potential test, three blocks of 1024 artifact-free sweeps were collected for each tested ear. Four disposable electrodes were fixed according to the Smart-EP manual specification as follows: one high frontal at Fz (positive electrode) and one low frontal at Fpz (ground electrode); the last two electrodes were placed on the left and right mastoids (as negative/reference electrode) depending on the recording side.

3.5. Response analysis of S-ABR

Click-ABR was obtained in each ear before recording the S-ABR to confirm the presence of wave V, as mentioned before. For the S-ABR, the response was identified by the presence of seven waves (V, A, C,

Table 1
Results of the aided sound field, aided SRT, aided SDT and aided ABR in all children.

Aided sound field   25.7 ± 4.6 dB
Aided SRT           20.3 ± 3.55 dB
Aided SDT           19.3 ± 5.4 dB
Aided ABR           30.67 ± 3.6 dB HL

D, E, F, O) using the nomenclature previously established for the S-ABR [22–24]. The absolute latency and amplitude of each wave, together with the VA-complex amplitude, duration, area and slope, were measured [25]. Peak-to-trough slope was defined as peak-to-trough amplitude divided by peak-to-trough duration, while area was defined as peak-to-trough amplitude multiplied by peak-to-trough duration.
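The two derived measures just defined are simple arithmetic on each peak-to-trough pair. A minimal illustrative sketch (the function name and example values are ours, not from the paper):

```python
def peak_to_trough_measures(peak_uV, trough_uV, peak_ms, trough_ms):
    """Slope and area as defined above:
    slope = peak-to-trough amplitude / peak-to-trough duration (uV/ms)
    area  = peak-to-trough amplitude * peak-to-trough duration (uV*ms)
    """
    amplitude = peak_uV - trough_uV   # peak-to-trough amplitude (uV)
    duration = trough_ms - peak_ms    # peak-to-trough duration (ms)
    slope = amplitude / duration
    area = amplitude * duration
    return slope, area

# Hypothetical VA complex: V peak 0.4 uV at 9.6 ms, A trough -0.3 uV at 12.9 ms
slope, area = peak_to_trough_measures(0.4, -0.3, 9.6, 12.9)
```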

4. Statistical analysis

The collected data were organized, tabulated and statistically analyzed using the SPSS statistical package, version 16. For quantitative data, the t-test was used for comparisons between the two groups, and significance was adopted at p < 0.05. Correlation coefficients were computed in both groups between age and the S-CAEPs latencies and amplitudes.

5. Results

This study was carried out from April 2014 to February 2015. We assessed the performance of the CI device in twenty children (9 boys and 11 girls) with a mean age of 4.4 ± 1.98 years. Sound-field tests showed a good aided response (the mean warble-tone threshold along the frequency range 500–4000 Hz was 25.7 ± 4.6 dB; the mean SRT was 20.3 ± 3.55 dB in 15 patients; the mean aided SDT was 19.3 ± 5.4 dB in 5 patients). Aided click-evoked ABR was recorded from all patients and could be traced down to lower levels (mean aided ABR threshold 30.67 ± 3.6 dB HL; Table 1).

Results of the S-CAEPs showed that not all children who received CIs produced responses with reasonable morphology. The P1 component of the CAEPs was present with good morphology in all tested children, while the N1 component was present in only nine (45%) of them. The other CAEPs peaks (P2 and N2) were either absent or showed poor morphology and repeatability (Fig. 1). We therefore considered only the P1-N1 results of the S-CAEPs. Accordingly, the children of this study were classified into two groups:

Group I: nine children with recorded P1-N1 components (good S-CAEPs).
Group II: eleven children with only the P1 component (poor S-CAEPs).

The comparison of age between the two groups revealed a significantly younger age in group I. Group I also showed a significantly shorter duration of hearing loss before CI surgery and a significantly younger age at implantation compared with group II (p ≤ 0.05) (Table 2). The comparison of CAEPs results showed a statistically significant delay of P1 latency in group II. P1 amplitudes showed no significant difference between the two groups (p > 0.05) (Figs. 2 and 3). N1 latency and amplitudes

Fig. 1. Different patterns of S-CAEPs recorded in CI children.


Table 2
Comparison of the mean and SD of age, duration of hearing loss and age at CI surgery in the study groups.

                           Group I         Group II        t      p
Age                        2.50 ± 0.70     3.9 ± 1.6       2.54   0.021*
Duration of HL before CI   1.15 ± 0.568    2.22 ± 0.601    4.09   0.001*
Age at CI surgery          2.21 ± 0.5      3.2 ± 2.1       4.3    0.001*

* Significant difference.

Table 3
Comparison of S-CAEPs latencies and amplitudes between groups I and II (N1 results are shown for group I only).

S-CAEPs               Group I          Group II       t      p
Latency    P1         100.3 ± 16       136.3 ± 9.1    4.1    0.002*
           N1         147.6 ± 28.1     –              –      –
Amplitude  P1         0.81 ± 0.23      0.71 ± 0.41    1.56   0.23
           N1         0.61 ± 0.3       –              –      –

* Significant difference.

were not compared, as N1 was not recordable in the children of group II (Table 3). As regards P1 amplitude, there was no significant difference between the two groups (p > 0.05) (Table 3). Pearson correlation showed a positive correlation between the children's age and the N1 and P1 latencies of the S-CAEPs in group I; that is, the older the child, the more delayed the P1 and N1 latencies (p < 0.05). As regards the P1 and N1 amplitudes, there was no significant correlation with age. In group II, the same positive correlation between P1 latency and age was found; however, P1 amplitude showed no relation with age (p > 0.05) (Table 4). The S-ABR was recorded in all children of both groups, even those with absent S-CAEPs (Fig. 4). The comparison of S-ABR results between the two groups was done using the independent t-test. There was no statistically significant difference in the V and A latencies; however, the other peaks (C, D, E, F and O) showed significantly delayed latencies in group II compared with group I (Table 5). The comparison of the amplitudes of the S-ABR peaks V, A and C showed no statistically significant difference; however, the other peaks (D, E, F and O) were significantly larger in group I than in group II (Table 6).
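The group comparisons above (independent t-tests, plus Pearson correlations against age) can be sketched as follows; the per-child values below are invented for illustration, since the raw per-child data are not published here:

```python
from scipy import stats

# Hypothetical per-child P1 latencies (ms); nine children in group I,
# eleven in group II, mirroring the study's group sizes.
group_i  = [95.0, 102.0, 98.5, 110.0, 99.0, 101.5, 97.0, 104.0, 96.5]
group_ii = [130.0, 138.5, 135.0, 140.0, 132.5, 137.0,
            136.0, 139.5, 134.0, 133.5, 141.0]

# Independent-samples t-test; significance adopted at p < 0.05.
t_stat, p_val = stats.ttest_ind(group_i, group_ii)
significant = p_val < 0.05

# Pearson correlation between age and P1 latency within group I
# (hypothetical ages in years).
ages_i = [2.0, 2.5, 2.2, 3.5, 2.4, 2.8, 2.1, 3.0, 2.3]
r, p_corr = stats.pearsonr(ages_i, group_i)
```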


Table 4
Correlation between age and S-CAEPs results in groups I and II.

Group I
  Latency    P1    r = 0.545    p = 0.003*
             N1    r = 4.1      p = 0.002*
  Amplitude  P1    r = 0.13     p = 0.2
             N1    r = 0.09     p = 0.45

Group II
  Latency    P1    r = 0.50     p = 0.008*
  Amplitude  P1    r = 0.23     p = 0.1

* Significant difference.

6. Discussion

The central auditory system goes through a sensitive period of development in the first years of life. The acquisition of spoken language is a time-dependent process, and some form of linguistic input should be present from the first 6 months of life for a child to become linguistically competent [26]. Sensory deprivation resulting from hearing loss in this period can alter or prevent central auditory development [27] and prompts cortical reorganization, with auditory structures becoming responsive to other sensory stimuli, such as visual ones [28]. Profound congenital SNHL is not so infrequent, and early identification, referral, and diagnosis of children with hearing loss are necessary to initiate the process of auditory rehabilitation [3]. The introduction of auditory stimulation through electronic devices such as CIs in individuals with severe to profound HL can partially or totally reverse the effects of this sensory deprivation and redirect the auditory cortical structures to their primary function. Consequently, the development of auditory abilities, a prerequisite for oral language acquisition and production, is also enabled [29]. The changes that occur in the central auditory pathways after the activation of CIs can be verified by means of electrophysiological procedures [14,30]. In this work we examined auditory processing in children fitted with CIs, first at the cortical level through recording of

Fig. 2. Latencies of different S-ABR peaks in both groups.

Fig. 3. Amplitudes of different S-ABR peaks in both groups.

Fig. 4. Example of S-ABR recorded in CI children.


Table 5
Comparison between latencies of S-ABR waves in groups I and II.

Peak   Group I        Group II       t       p
V      9.6 ± 1.9      10.3 ± 1.6     0.867   0.44
A      12.9 ± 1.8     14.7 ± 1.9     1.86    0.06
C      23.4 ± 2.4     26.9 ± 1.4     3.51    0.003*
D      31 ± 3.6       35.1 ± 1.2     4.32    0.009*
E      38.7 ± 4.8     45 ± 0.9       3.51    0.003*
F      48 ± 5.6       53.3 ± 1.9     2.26    0.02*
O      56.9 ± 5.4     63 ± 2.2       2.16    0.01*

* Significant difference.

Table 6
Comparison between amplitudes of S-ABR waves in groups I and II.

Peak   Group I       Group II      t      p
V      0.4 ± 0.1     0.3 ± 0.2     1.41   0.22
A      0.7 ± 0.7     0.3 ± 0.1     2.9    0.07
C      0.6 ± 0.5     0.3 ± 0.2     1.23   0.13
D      1.1 ± 1       0.6 ± 0.3     1.87   0.04*
E      0.9 ± 0.5     0.4 ± 0.3     2.15   0.03*
F      1 ± 0.6       0.4 ± 0.3     2.16   0.03*
O      1.1 ± 0.6     0.5 ± 0.3     1.88   0.04*

* Significant difference.

CAEPs in response to the speech syllable /da/. The P1 component of the CAEP has been the most utilized in research because it is considered a biomarker of the maturation of central auditory system structures [31]. P1 is an easily identified, robust positivity at a latency of 100–300 ms in young children. It is generated by auditory thalamic and cortical sources [32], and its latency reflects the accumulated sum of delays in synaptic transmission in the ascending auditory pathways, including delays in the cerebral cortex [33]. In the current work, consistent and robust P1 recordings with good morphology were found in all children who were recently fitted with CIs. This might indicate an immediate cortical response following the activation of the CIs [30]. The P1 latencies recorded in this work were similar to the mean P1 latency of approximately 125 ms recorded in 3-year-old children, as reported by Dorman et al. [17]. This may reflect that a child who receives stimulation via a CI within the first 3.5 years of life will have a P1 latency that might reach the normal range within the first 6 months after implant activation [17]. The other components of the S-CAEPs showed variable responses: N1 was recorded in 45% of the patients, and the remaining components in only 20%. Accordingly, we classified our patients into two groups based on the S-CAEPs results, as mentioned before: group I (P1-N1) and group II (P1 only). The absence of the other CAEPs components might be related to the duration of hearing loss before CI surgery. Children with good CAEPs were significantly younger than those with poor CAEPs; moreover, a significantly shorter time had elapsed between the diagnosis of their hearing loss and their CI surgery (Table 2). The correlation between the children's age and latencies showed that the younger the child, the earlier the S-CAEPs latencies. This also indicated that the earlier the CI surgery, the better the outcome.
Eggermont and Ponton [34] reported absent N1 in children who had been deaf for at least 3 years before CI surgeries performed before the age of 6 years. The recording of N1 in children with CIs indicates central auditory pathway activation, since N1 is believed to reflect activity in the primary auditory cortex [35]. We take this as an indication of continuing normal central auditory pathway development in children who received a CI in early childhood. Generally, the CAEP reflects recurrent cortical activity mediated by cortico-thalamic loops. These recurrent loops mediate subsequent


cortico-cortical projections that may be disrupted after auditory deprivation. Restoring function to these modulatory projections may be possible with a CI, as long as the central auditory system remains maximally plastic and the effects of neural degeneration have not completely taken hold [36]. Based on the S-CAEPs results, we hypothesized that cortical processing might be affected by processing at the brainstem level, so the recording of S-CAEPs was followed by S-ABR recording. All components of the S-ABR were elicited in all children, with good morphology and high repeatability. Generally, brainstem responses elicited by speech stimuli can provide clues about the encoding of the sound structure of speech syllables by the central auditory pathway. The speech-ABR can be divided into two components: an onset response (the V-A complex) and the frequency-following response (FFR). Together, these responses roughly reflect the acoustic parameters of the CV stimulus used to evoke them. The onset component arises as a response to the onset of sound; in the case of a CV stimulus, the onset represents the initiation of the consonant and contains aperiodic information. Its initial waves are similar to those observed in response to click stimuli (waves I, III and the VA complex), whereas wave C possibly reflects the onset of voicing. The FFR reflects phase locking to the fundamental frequency of the stimulus. It arises in response to the periodic information present in the vowel at the frequency of the sound source (i.e. the glottal pulse). Thus, the period between peaks D, E, and F of the FFR corresponds to the fundamental frequency of the stimulus (F0), whereas the peaks between waves D, E, and F represent phase locking at the frequencies of the first formant (F1). Finally, wave O marks the offset of the stimulus [22,23,37].
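Since successive FFR peaks D, E and F are separated by one period of the stimulus F0, the fundamental can be estimated directly from their latencies. A small illustrative sketch using the group I mean latencies from Table 5 (the function name is ours, not from the paper):

```python
def f0_from_ffr_peaks(latencies_ms):
    """Estimate F0 (Hz) from the mean inter-peak interval of FFR peaks.

    Consecutive FFR peaks are one fundamental period apart, so the mean
    D-E / E-F interval is the period of F0 in milliseconds.
    """
    intervals = [b - a for a, b in zip(latencies_ms, latencies_ms[1:])]
    mean_period_ms = sum(intervals) / len(intervals)
    return 1000.0 / mean_period_ms  # period in ms -> frequency in Hz

# Group I mean latencies for peaks D, E, F (ms), from Table 5
f0 = f0_from_ffr_peaks([31.0, 38.7, 48.0])  # ≈ 118 Hz
```

The result of roughly 118 Hz is in the range expected for the F0 of a synthesized /da/ syllable.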
The presence of an S-ABR response in children with CIs indicates that these children have some speech processing at brainstem level, despite latencies that were delayed in all children relative to the normal S-ABR latency values reported in the literature (e.g. [23,38]). Activation of CIs involves improvements in neural conduction velocity and neural synchrony, and the underlying mechanisms likely include improvements in synaptic efficacy and possibly increased myelination. The delayed VA-complex latencies might indicate synchrony deficits in brainstem neural responses to complex sounds, leading to the perceptual problems associated with the identification of stop consonants [39]. As regards the C, D, E, F and O components, the delay of their latencies might indicate that the ability to identify consonants and vowels in these children may not yet be properly developed, leading to speech perception difficulties [40]. These findings in children with CIs might be due to a lack of proper sound perception and limited language experience, which in turn affect speech encoding at the brainstem level [41], or to poor synchrony to transient events [23]. Alternatively, the delayed latencies might be related to the time taken by the CI speech processor to process the incoming acoustic signal, or to the recording procedure (through loudspeakers), rather than to a deficit in the brainstem itself. Finally, it could be a combination of all the above factors. Nevertheless, the amplitudes of the S-ABR in children fitted with CIs were similar to the normative data in our clinic, indicating that despite the delayed S-ABR latencies, once the stimulus has arrived it activates a good neuronal population in response to speech sounds. The results of the S-CAEPs in this work raised an important question: could speech processing at the brainstem level contribute to speech processing at the cortical level? Accordingly, we compared the S-ABR results between the two groups.
The comparison showed a similar onset response (VA complex), with no significant difference between the groups. As regards the other components, children with poor CAEPs showed significantly delayed latencies when compared


with children with good CAEPs. As regards the S-ABR amplitudes, children with a poor cortical response also showed significantly reduced amplitudes of the D, E, F and O components. Clinical evidence indicates that higher brainstem nuclei such as the inferior colliculus (IC) play an important role in auditory processing in humans [42,43]. Indeed, the response generators of both the late waves of the ABR (V and A) and the FFR have been localized to the upper brainstem (lateral lemniscus and IC) [44]. Animal models have shown that these regions of the brainstem are sensitive to complex spectral and temporal properties of complex stimuli and are likely to have a role in speech processing in humans [34,45,46]. Studies focusing on FFR recording in humans have demonstrated the role of the brainstem in encoding speech and speech-like sounds [38,47,48]. This brainstem processing is thought to be related to processing taking place in lower (e.g. the auditory nerve) and higher (e.g. the auditory cortex) areas of the auditory pathway [49]. The delayed latencies and reduced amplitudes of the S-ABR in CI children with poor S-CAEPs suggest deficient brainstem timing, which might be linked to abnormal cortical processing. Wible et al. [37] reported that, in the normal auditory system, increased synchrony among the mechanisms that encode transient acoustic information at brainstem level contributes to more robust processing at the cortical level; in turn, enhanced cortical processing reflects more consistently precise timing at brainstem level. In this work, processing at the brainstem level was evaluated through S-ABR recording, which showed delayed latencies of the later components with reduced amplitudes. These findings indicate that children fitted with CIs need more time than normal for the encoding of speech, as shown by the S-ABR results.
This also suggests that less precise timing of the generation and/or transmission of responses in the lateral lemniscus and/or inferior colliculus relates to 'weaker' cortical activity and reduced cortical discrimination of fine acoustic differences [37,38]. Moreover, individuals with delayed brainstem timing have shown a smaller degree of left/right cortical asymmetry in response to speech sounds [50]. Another important factor that should be taken into consideration is the immature cortical function in the children who received CIs in the current study: a deficit in brainstem timing would result in degraded input to the still-developing cortex [51]. On the other hand, similar genetic or environmental factors leading to abnormal brainstem timing could also cause abnormal cortical function [49]. Krishnan et al. [48] suggested that encoding at the level of the brainstem could be malleable to top-down effects (e.g. experience and context). This could be explained by the reverse hierarchy theory (RHT) [52], which suggests that conscious perception is typically based on the highest possible representation of the stimulus along the perceptual hierarchy. With repeated exposures, higher levels are thus likely not only to use input from lower levels, but also to influence the way the lower levels encode incoming stimuli, in a context-dependent manner. Cortical influence over the subcortical acoustic analysis of complex sounds might be related to the existence of a sensory-cognitive interaction in which bottom-up processes extract the acoustic features of the signal while attenuating irrelevant features, and top-down processes, such as attention and memory, modulate subcortical activity in the brainstem [53].
The known temporal properties of brainstem neurons, which can phase lock up to 1000 Hz, as well as the remarkable temporal precision of the scalp-recorded response they evoke, imply that the brainstem is likely also to faithfully encode many of the acoustic properties of speech and other complex auditory signals [49,54,55]. There are two primary differences between the auditory language experience of deaf children with CIs and that of normal-hearing children. The first is that profoundly deaf implant users

have very little access to auditory input prior to receiving their implants, so they experience a delay in exposure to spoken language. The second is that, while CIs have long been proven to allow increased access to auditory input [56], the electrical signal is not identical to natural hearing. CIs work through electrical stimulation applied by approximately 12–22 distinctly located channels (depending on the device) in the cochlea, instead of the roughly 15,000 natural hair cells that typically transmit information to the auditory nerve. So, although a CI can effectively convey a great deal of auditory information, it does not restore normal hearing; in other words, CIs provide an alternative representation of sound [57].

7. Conclusion

The auditory system is extremely sensitive to auditory experience and to the temporal characteristics of sounds. Speech-evoked auditory potentials can be used as an efficient tool to characterize the auditory processing of the temporal properties of complex stimuli: an objective, fast, reliable and cost-effective procedure for evaluating the structural and functional integrity of the central auditory system in a non-invasive fashion. The results of the S-CAEPs and S-ABR emphasize the significance of the critical age for cochlear implantation and the importance of early diagnosis and management of hearing loss. Activity in the auditory pathways at the level of the brainstem and auditory cortex can be evoked through a CI, and auditory stimulation proceeds once the implant is activated. Speech processing in children fitted with CIs can be evaluated efficiently at both the brainstem and cortical levels soon after implantation. The speech-ABR can be implemented as a routine test in post-operative evaluation protocols in CI programs.

Acknowledgement

The authors of this work did not receive a grant or funding that supports this work.

References

[1] D.K. Eddington, W.R. Rabinowitz, J. Tierney, V. Noel, M. Whearty, Speech Processors for Auditory Prostheses, 8th Quarterly Progress Report, NIH Contract N01-DC, 1997, 6–2100.
[2] M.S. Harris, W.G. Kronenberger, S. Gao, H.M. Hoen, R.T. Miyamoto, D.B. Pisoni, Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants, Ear Hear. 34 (2) (2013) 79–192.
[3] P.V. Vlastarakos, T.P. Nikolopoulos, S. Pappas, M.A. Buchanan, J. Bewick, D. Kandiloros, Cochlear implantation update: contemporary preoperative imaging and future prospects – the dual modality approach as a standard of care, Expert Rev. Med. Devices 7 (2010) 555–567.
[4] C.M. Connor, H.K. Craig, S.W. Raudenbush, K. Heavner, T.A. Zwolan, The age at which young deaf children receive cochlear implants and their vocabulary and speech production growth: is there an added value for early implantation? Ear Hear. 27 (6) (2006) 628–644.
[5] M.C. Allen, T.P. Nikolopoulos, G.M. O'Donoghue, Speech intelligibility in children after cochlear implantation, Am. J. Otol. 19 (1998) 742–746.
[6] G.M. O'Donoghue, T.P. Nikolopoulos, S.M. Archbold, M. Tait, Speech perception in children after cochlear implantation, Am. J. Otol. 19 (1998) 762–767.
[7] P.E. Spencer, Individual differences in language performance after cochlear implantation at one to three years of age: child, family, and linguistic factors, J. Deaf Stud. Deaf Educ. 9 (4) (2004) 395–412.
[8] J.G. Nicholas, A.E. Geers, Expected test scores for preschoolers with a cochlear implant who use spoken language, AJSLP 17 (2) (2008) 121–138.
[9] Y. Henkin, P.R. Kileny, M. Hildesheimer, L. Kishon-Rabin, Phonetic processing in children with cochlear implants: an auditory event-related potentials study, Ear Hear. 29 (2) (2008) 239–249.
[10] J. Hall, New Handbook of Auditory Evoked Responses, Allyn and Bacon, Boston, MA, USA, 2007.
[11] S. Greenberg, J.T.W. Marsh, S. Brown, J.C. Smith, Neural temporal coding of low pitch. I. Human frequency-following responses to complex tones, Hear. Res. 25 (2–3) (1987) 91–114.
[12] B. Chandrasekaran, N. Kraus, The scalp-recorded brainstem response to speech: neural origins and plasticity, Psychophysiology 47 (2) (2010) 236–246.
[13] S. Anderson, N. Kraus, The potential role of the cABR in assessment and management of hearing impairment, Int. J. Otolaryngol. 2013 (2013) 604729.

Please cite this article in press as: T.A. Gabr, M.R. Hassaan, Speech processing in children with cochlear implant, Int. J. Pediatr. Otorhinolaryngol. (2015), http://dx.doi.org/10.1016/j.ijporl.2015.09.002

[14] S. Purdy, K. Gardner-Berry, Auditory evoked potentials and cochlear implants: research findings and clinical applications in children, Perspect. Hear. Hear. Disord. Child. 19 (2009) 14–21.
[15] A. Sharma, A. Nash, M. Dorman, Cortical development, plasticity and re-organization in children with cochlear implants, J. Commun. Disord. 42 (2009) 272–279.
[16] A. Sharma, G. Cardon, K. Henion, P. Roland, Cortical maturation and behavioral outcomes in children with auditory neuropathy spectrum disorder, Int. J. Audiol. 50 (2011) 98–106.
[17] M. Dorman, A. Sharma, P. Gilley, K. Martin, P. Roland, Central auditory development: evidence from CAEP measurements in children fit with cochlear implants, J. Commun. Disord. 40 (4) (2007) 284–298.
[18] M.R. Hassaan, Aided evoked cortical potential: an objective validation tool for hearing aid benefit, EJENTAS 12 (2011) 155–161.
[19] A. Sharma, M.F. Dorman, A.J. Spahr, Rapid development of cortical auditory evoked potentials after early cochlear implantation, Neuroreport 13 (10) (2002) 1365–1368.
[20] K.L. Johnson, T. Nicol, S.G. Zecker, N. Kraus, Developmental plasticity in the human auditory brainstem, J. Neurosci. 28 (2008) 4000–4007.
[21] J. Hornickel, E. Skoe, T. Nicol, S. Zecker, N. Kraus, Subcortical differentiation of stop consonants relates to reading and speech-in-noise perception, Proc. Natl. Acad. Sci. U. S. A. 106 (31) (2009) 13022–13027.
[22] N. Russo, T. Nicol, G. Musacchia, N. Kraus, Brainstem responses to speech syllables, Clin. Neurophysiol. 115 (2004) 2021–2030.
[23] K.L. Johnson, T.G. Nicol, N.N. Kraus, Brainstem response to speech: a biological marker of auditory processing, Ear Hear. 26 (2005) 424–434.
[24] N. Hemanth, P. Manjula, Representation of speech sounds at the auditory brainstem, JISHA 26 (2) (2012) 1–13.
[25] B. Wible, T. Nicol, N. Kraus, Atypical brainstem representation of onset and formant structure of speech sounds in children with language-based learning problems, Biol. Psychol. 67 (2004) 299–317.
[26] T.P. Nikolopoulos, Outcomes and Predictors in Cochlear Implantation (Doctoral thesis), University of Nottingham, Nottingham, 2000, pp. 138–166.
[27] J.L. Northern, M. Downs, Hearing in Children, 5th ed., Lippincott Williams and Wilkins, Baltimore, MD, 2002.
[28] K.A. Gordon, D.D. Wong, J. Valero, S.F. Jewell, P. Yoo, B.C. Papsin, Use it or lose it? Lessons learned from the developing brains of children who are deaf and use cochlear implants to hear, Brain Topogr. 24 (3–4) (2011) 204–219.
[29] P. Martinez-Beneyto, A. Morant, M.I. Pitarch, E. Latorre, A. Platero, J. Marco, Paediatric cochlear implantation in the critical period of the auditory pathway, our experience, Acta Otorrinolaringol. Esp. 60 (5) (2009) 311–317.
[30] K.F. Alvarenga, L.C. Vicente, R.C.F. Lopes, L.M.P. Ventura, M.C. Bevilacqua, A.L.M. Moret, Development of P1 cortical auditory evoked potential in children presented with sensorineural hearing loss following cochlear implantation: a longitudinal study, Codas 25 (6) (2013) 521–526.
[31] A. Sharma, M.F. Dorman, A. Kral, The influence of a sensitive period on central auditory development in children with unilateral and bilateral cochlear implants, Hear. Res. 203 (1–2) (2005) 134–143.
[32] T. McGee, N. Kraus, Auditory development reflected by middle latency response, Ear Hear. 17 (1996) 419–429.
[33] J.J. Eggermont, C.W. Ponton, M. Don, M.D. Waring, B. Kwong, Maturational delays in cortical evoked potentials in cochlear implant users, Acta Otolaryngol. 117 (1997) 161–163.
[34] J.J. Eggermont, C.W. Ponton, Auditory-evoked potential studies of cortical maturation in normal hearing and implanted children: correlations with changes in structure and speech perception, Acta Otolaryngol. 123 (2003) 249–252.


[35] A. Sharma, M.F. Dorman, Central auditory development in children with cochlear implants: clinical implications, Adv. Otorhinolaryngol. 64 (2006) 66–88.
[36] P.M. Gilley, A. Sharma, M.F. Dorman, Cortical reorganization in children with cochlear implants, Brain Res. 1239 (2008) 56–65.
[37] B. Wible, T. Nicol, N. Kraus, Correlation between brainstem and cortical auditory processes in normal and language-impaired children, Brain 128 (2005) 417–423.
[38] G.C. Galbraith, E.M. Amaya, J.M. de Rivera, N.M. Donan, M.T. Duong, J.N. Hsu, et al., Brain stem evoked response to forward and reversed speech in humans, Neuroreport 15 (2004) 2057–2060.
[39] J.H. Song, K. Banai, N.M. Russo, N. Kraus, On the relationship between speech- and nonspeech-evoked auditory brainstem responses, Audiol. Neurootol. 11 (2006) 233–241.
[40] C.N. Rocha-Muniz, D.M. Befi-Lopes, E. Schochat, Investigation of auditory processing disorder and language impairment using the speech-evoked auditory brainstem response, Hear. Res. 294 (1–2) (2012) 143–152.
[41] N. Kraus, K. Banai, Auditory processing malleability: focus on language and music, Curr. Dir. Psychol. Sci. 16 (2007) 105–109.
[42] K. Johkura, S. Matsumoto, O. Hasegawa, Y. Kuroiwa, Defective auditory recognition after small hemorrhage in the inferior colliculi, J. Neurol. Sci. 161 (1998) 91–96.
[43] F.E. Musiek, L. Charette, D. Morse, J.A. Baran, Central deafness associated with a midbrain lesion, J. Am. Acad. Audiol. 15 (2004) 133–151.
[44] A.R. Møller, Neural mechanisms of BAEP, Electroencephalogr. Clin. Neurophysiol. 49 (Suppl.) (1999) 27–35.
[45] D.G. Sinex, G.D. Chen, Neural responses to the onset of voicing are unrelated to other measures of temporal resolution, J. Acoust. Soc. Am. 107 (2000) 486–495.
[46] J.J. Eggermont, C.W. Ponton, The neurophysiology of auditory perception: from single units to evoked potentials, Audiol. Neurootol. 7 (2002) 71–99.
[47] A. Krishnan, Human frequency-following responses: representation of steady-state synthetic vowels, Hear. Res. 166 (2002) 192–201.
[48] A. Krishnan, Y. Xu, J.T. Gandour, P.A. Cariani, Human frequency-following response: representation of pitch contours in Chinese tones, Hear. Res. 189 (2004) 1–12.
[49] K. Banai, D. Abrams, N. Kraus, Sensory-based learning disability: insights from brainstem processing of speech sounds, Int. J. Audiol. 46 (2007) 524–532.
[50] D.A. Abrams, T. Nicol, S. Zecker, N. Kraus, Auditory brainstem timing predicts cerebral asymmetry for speech, J. Neurosci. 26 (2006) 11131–11137.
[51] J. Cunningham, T. Nicol, S. Zecker, N. Kraus, Speech-evoked neurophysiologic responses in children with learning problems: development and behavioral correlates of perception, Ear Hear. 21 (2000) 554–568.
[52] M. Ahissar, S. Hochstein, The reverse hierarchy theory of visual perceptual learning, Trends Cogn. Sci. 8 (10) (2004) 457–464.
[53] P.C. Wong, E. Skoe, N.M. Russo, T. Dees, N. Kraus, Musical experience shapes human brainstem encoding of linguistic pitch patterns, Nat. Neurosci. 10 (4) (2007) 420–422.
[54] E. Hayes, C.M. Warrier, T. Nicol, S.G. Zecker, N. Kraus, Neural plasticity following auditory training in children with learning problems, Clin. Neurophysiol. 114 (2003) 673–684.
[55] C. Warrier, K. Johnson, E. Hayes, T. Nicol, N. Kraus, Learning impaired children exhibit timing deficits and training-related improvements in auditory cortical responses to speech in noise, Exp. Brain Res. 157 (2004) 431–441.
[56] National Institutes of Health (NIH), Cochlear Implants in Adults and Children, NIH Consensus Development Conference Statement 13 (2) (1995) 1–30.
[57] J. Geren, J. Snedeker, Syntactic and Lexical Development in Children with Cochlear Implants (Unpublished paper), Harvard University, 2009.
