
Hearing Research xxx (2016) 1–14


Research Paper

Spatial and non-spatial multisensory cueing in unilateral cochlear implant users

Francesco Pavani a, b, c, *, Marta Venturini b, Francesca Baruffaldi d, Luca Artesini a, Francesca Bonfioli e, Giuseppe Nicolò Frau e, Wieske van Zoest a

a Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy
b Integrative Multisensory Perception Action & Cognition Team (ImpAct), Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Lyon, France
c Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy
d Ente Nazionale Sordi, Trento, Italy
e HNT Department, “Santa Maria del Carmine” Hospital, Rovereto, Italy


Article history: Received 16 May 2016; Received in revised form 21 October 2016; Accepted 27 October 2016; Available online xxx

Abstract

In the present study we examined the integrity of spatial and non-spatial multisensory cueing (MSC) mechanisms in unilateral CI users. We tested 17 unilateral CI users and 17 age-matched normal-hearing (NH) controls in an elevation-discrimination task for visual targets delivered at peripheral locations. Visual targets were presented alone (visual-only condition) or together with abrupt sounds that matched or did not match the location of the visual targets (audio-visual conditions). All participants were also tested in a simple task of pointing to free-field sounds, to obtain a basic measure of their spatial hearing ability in the naturalistic environment in which the experiment was conducted. Hearing controls were tested both in binaural and monaural conditions. NH controls showed spatial MSC benefits (i.e., faster discrimination for visual targets that matched sound cues) both in the binaural and in the monaural hearing conditions. In addition, they showed non-spatial MSC benefits (i.e., faster discrimination responses in audio-visual conditions compared to visual-only conditions, regardless of sound cue location) in the monaural condition. Monaural CI users showed no spatial MSC benefits, but retained non-spatial MSC benefits comparable to those observed in NH controls tested monaurally. The absence of spatial MSC in CI users likely reflects the poor spatial hearing ability measured in these participants. These findings reveal the importance of studying the impact of CI re-afferentation beyond auditory processing alone, addressing in particular the fundamental mechanisms that serve the orienting of multisensory attention in the environment. © 2016 Elsevier B.V. All rights reserved.

Keywords: Cochlear implants; Monaural hearing; Spatial hearing; Multisensory; Spatial attention

1. Introduction

A cochlear implant (CI) is a neuroprosthesis that affords partial recovery of auditory sensations and speech understanding in people suffering from severe-to-profound hearing loss (Moore and Shannon, 2009; Wilson and Dorman, 2008). Although CI surgery is indicated for people with bilateral hearing loss, most patients receive only one CI, and therefore experience unilateral hearing. The restricted spectro-temporal processing provided by the CI processor (Majdak et al., 2011), combined with the absence or reduction of binaural

* Corresponding author. Center for Mind/Brain Sciences, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy. E-mail address: [email protected] (F. Pavani).

input (reduced rather than absent in bimodal CI users, i.e., individuals who continue to use a hearing aid in the ear contralateral to the implant), limits efficient spatial hearing. Sound localisation is typically at or near chance in unilateral CI users (Buhagiar et al., 2004; Grantham et al., 2008; Luntz et al., 2005; Nava et al., 2009a; Noble et al., 2009; Tyler et al., 2009), with better localisation abilities reported in some bimodal users (Potts et al., 2009; Seeber et al., 2004). The consequences of auditory spatial deficits in unilateral CI users have been examined primarily in relation to speech understanding (e.g., Tyler et al., 2009; van Hoesel and Tyler, 2003; for review see Ching et al., 2007). More recent evidence also indicates higher listening effort in unilateral compared to bilateral CI users (Hughes and Galvin, 2013). By contrast, to the best of our knowledge, it has never been explored how the sound-localisation abilities of CI users affect the allocation of spatial visual attention, and how this impacts the


selection of relevant information from the multisensory environment. Likewise, non-spatial auditory influences on visual processing (e.g., Andersen and Mamassian, 2008; Ngo and Spence, 2010; Noesselt et al., 2008; Van der Burg et al., 2008) have also remained largely overlooked (but see Harris and Kamke, 2014; Kamke et al., 2014). The present study aimed to investigate the consequences of unilateral CI on visual and audio-visual attention, building on two decades of studies on multisensory links in attention orienting (for reviews see Hillyard et al., 2015; Spence, 2010).

A first important example of multisensory links concerns the deployment of spatial visual attention in the context of abrupt auditory events. There is no question that sudden sounds can capture attention, that is, they can result in a transient allocation of visual resources towards their location (Spence, 2010). This type of automatic, exogenous cueing of multisensory attention has been documented with all combinations of sensory stimuli and results in behavioural benefits that are adaptive in interactions with the environment (Spence and Driver, 1997; Störmer et al., 2009). One key determinant of this exogenous multisensory interaction is the spatial proximity between the successive multisensory stimulations (Spence, 2010). Although multisensory links in exogenous attention have also been demonstrated when successive stimuli are delivered from different locations within the same spatial hemifield (e.g., Frassinetti et al., 2002; Schmitt et al., 2010), the interactions and potential benefits are strongest when the successive stimuli originate from the same spatial location (Prime et al., 2008).

To the authors' knowledge, the interactions between abrupt sound stimuli and the orienting of visual attention have not been investigated in cochlear implant users. Given the key role of spatial proximity in exogenous multisensory cueing, it can be hypothesised that the reduced auditory spatial abilities of unilateral CI users should heavily impact the integrity of this multisensory orienting mechanism. While this may seem trivial, it remains an empirical question. Evidence suggests that increased uncertainty regarding the spatial location of sounds (i.e., measured in a separate task that requires identification of the sound source in space) does not necessarily imply the immediate elimination of audio-visual spatial cueing effects. An example of this dissociation has been documented in the neuropsychological literature on auditory space perception and audio-visual cueing in brain-damaged patients with hemispatial neglect. Neglect patients show increased uncertainty when localising sounds in contralesional compared to ipsilesional space (Pavani et al., 2001), and show rightward biases when pointing to sounds (for review see Pavani et al., 2004). Nonetheless, they benefit from spatial correspondences between auditory and visual stimuli when detecting visual targets in contralesional space (Frassinetti et al., 2002). The first aim of the present study was therefore to examine spatial multisensory cueing in unilateral CI users, to test whether sudden sounds influence visual spatial attention in a manner similar to hearing controls.

A second example of multisensory links concerns a non-spatial multisensory enhancement that may occur independently of spatial multisensory cueing.
The mere presence of a sound just prior to the presentation of the visual target might benefit performance, regardless of whether the location of the sound matches that of the visual target (e.g., Andersen and Mamassian, 2008; Ngo and Spence, 2010; Noesselt et al., 2008; Van der Burg et al., 2008). This effect is not just the consequence of sounds alerting participants and modulating response preparation, as documented by the fact that these multisensory advantages promoted visual discrimination (Noesselt et al., 2008) or reduced masking (Ngo and Spence, 2010; Van der Burg et al., 2008). Thus, a second aim of this work was to investigate non-spatial multisensory cueing in unilateral CI users. Though the ability to localise

sounds in space is likely significantly reduced in CI users, multisensory integration might not be completely lost. Specifically, the ability to detect the presence of sounds might still affect subsequent responses to visual targets, preserving the integrity of this multisensory advantage in CI users. Examining both spatial and non-spatial multisensory effects in CI users might thus provide further insight into the relative interdependence of sound detection and localisation.

A third example of multisensory links in spatial attention can be found in the context of sustained attentional biases, where the expectation of events occurring in one modality has been found to affect the detection of events in another modality. That is, when observers attend towards one hemispace to monitor one modality (e.g., audition), the allocation of resources in a different modality (e.g., vision) is typically also biased towards the same region of space. For instance, in a now classic audio-visual experiment conducted by Spence and Driver (1996; Experiment 4), participants were asked to discriminate the elevation of auditory and visual targets, delivered in the left or right hemispace. At the beginning of each block, participants were informed that auditory targets were more likely on one side of space than on the other. Visual targets, instead, were overall less frequent and were in fact delivered with a higher proportion on the unattended than the attended auditory side. As expected, the results showed that participants were faster and more accurate at discriminating the elevation of auditory targets on the attended side. Of critical importance here, however, is that this attentional facilitation extended to visual targets that appeared on the attended auditory side, even though these visual targets were more likely to occur on the unattended auditory side. This link between the sustained allocation of resources for one modality and attention biases for another modality has now been documented behaviourally for multiple multisensory combinations (Lloyd et al., 2003; Spence et al., 2000).

As a third aim, we explored whether the unilateral hearing experience of unilateral CI users may result in a sustained orienting of auditory attention towards the implant side, which could consequently bias visual attention. Specifically, evidence suggests that sound localisation biases exist in situations of monaural hearing. Hearing individuals with a temporary monaural ear-plug show systematic misperceptions of sounds towards the hemispace ipsilateral to the open ear (Butler et al., 1990; Oldfield and Parker, 1986; Slattery and Middlebrooks, 1994). Likewise, patients with single-sided deafness (SSD) can show sound localisation biases towards the side of space ipsilateral to the hearing ear (Slattery and Middlebrooks, 1994; Van Wanrooij and Van Opstal, 2004). Finally, there is evidence suggesting that unilateral CI users are more likely to localise sounds towards the implant side (Nava et al., 2009a). If monaural hearing results in sustained and systematic auditory biases in unilateral CI users, attention biases towards the implant side may also extend to the visual modality.

To experimentally address these aims we tested a group of unilateral CI participants in a series of auditory, visual and audio-visual tasks. First, we measured sound localisation ability for each participant, asking patients to point to free-field sounds delivered from four hidden loudspeakers in front space.
This basic sound-localisation task served to measure the degree of spatial uncertainty when perceiving sounds in the experimental environment, as well as to detect any systematic bias towards the implant side. Second, we tested participants in an elevation-discrimination task, in which peripheral visual targets were presented together with abrupt sounds that matched or did not match the spatial position of the visual targets. This audio-visual task examined whether sounds in specific locations spatially cue visual attention in monaural CI users, as is typically found in hearing individuals. Finally, the elevation-discrimination task was also conducted as a visual-only


task in the absence of concurrent sounds (i.e., in silence). The contrast between the audio-visual and visual-only conditions provided a measure of non-spatial multisensory benefit. The visual-only task was also used to examine whether the prolonged lateralised experience of monaural CI users could produce directional biases in visual attention. The elevation-discrimination task we adopted is based on the paradigm originally developed by Spence and Driver (1996, 1997), and was modified from the version implemented by Koelewijn, Bronkhorst and Theeuwes (Koelewijn et al., 2009).

A group of hearing participants was also recruited for the study. In this control group each participant was closely matched for age and gender with a corresponding CI patient. For all tasks, hearing participants were tested twice: first in a monaural condition (i.e., with a temporary plug inserted in the ear canal), and second in a binaural hearing condition.

Our predictions were as follows. First, based on the literature on sound localisation in CI users, we expected worse sound localisation abilities in unilateral CI users compared to monaurally plugged hearing controls. Moreover, if the asymmetrical hearing condition results in a displacement of perceived sound location, we expected to measure systematic localisation biases towards the hearing side (i.e., the implanted side in CI users, the unplugged side in hearing controls). Second, given the expected difference in auditory spatial abilities between the two groups, we predicted the audio-visual exogenous spatial cueing effect to be weaker in monaural CI participants than in hearing controls. Third, regarding the non-spatial multisensory cueing effect, there are two possibilities. On the one hand, the degraded representation of sounds in space might affect sound detection and its utility value in general, and therefore decrease the general non-spatial benefit of sounds in CI participants. On the other hand, if sound detection is independent of sound localisation, CI participants might still be able to use sound as a non-spatial cue and reveal non-spatial multisensory enhancement. Finally, if the unbalanced hearing condition of unilateral CI users produces a sustained shift of attentional resources towards the hearing side, we expected performance benefits for visual targets delivered ipsilateral to the implant. We predicted this unilateral bias of visual attention in silence to emerge selectively in CI users, given that no habituation to the monaural plug was possible for our hearing participants.

2. Materials and methods

2.1. Participants

Seventeen adult CI users were recruited at the ear, nose and throat (ENT) department of the Santa Maria del Carmine Hospital (Rovereto, Italy) to participate in the study. All CI users were fitted with a single cochlear implant (11 on the left; 6 on the right) and 3 of them wore hearing aids on the non-implanted ear. Twelve CI users acquired deafness after 3 years of age (postlingual onset), whereas 5 acquired deafness before 3 years (prelingual onset). Anamnestic details for each CI participant, together with age at implantation, age at testing, CI type and CI side, are reported in Table 1. Three CI users (CI9, CI10, CI17) used a hearing aid (HA) in the non-implanted ear. Residual hearing thresholds with the HA were as follows: CI9 (500 Hz 60 dB; 1000 Hz 45 dB; 2000 Hz 40 dB), CI10 (500 Hz 60 dB; 1000 Hz 45 dB; 2000 Hz 75 dB), CI17 (500 Hz 80 dB; 1000 Hz 90 dB).
All CI participants completed an Italian translation of the Nijmegen Cochlear Implant Questionnaire (NCIQ, Hinderink et al., 2000), aimed at assessing their quality of life with the implant, and the Italian version of the Short Form Health Survey


questionnaire (SF-36, Apolone et al., 2000), aimed at assessing their general health status (see Table 2 for the results of the two questionnaires). Seventeen hearing adults were also recruited to take part in the study, to serve as a control group. Each hearing participant was individually matched for age (±3 years) and gender with a corresponding CI participant (11 female participants in each group; mean age of the hearing group: 42.7 years, SD = 16.4; mean age of the CI group: 43.0 years, SD = 16). All participants were tested in a single session, lasting approximately 90 min. All had normal or corrected-to-normal vision and reported no prior history of neurological or psychiatric diseases. The study was approved by the Ethics Committee at the University of Trento (Protocol number: 2012-033), and all participants read and signed an informed consent form before taking part in the study.

2.2. General structure of the experimental session

All participants underwent the same experimental protocol: first they completed the sound localisation task, then the elevation-discrimination task. CI participants were given written and oral instructions, to minimise misunderstandings; hearing participants were given only oral instructions. All participants readily understood the tasks.

2.3. Sound localisation task

2.3.1. Apparatus and stimuli

The experimental set-up for the sound localisation task rested on a table (160 × 80 cm) and consisted of 4 loudspeakers, hidden behind a sound-transparent black cloth mounted on an arched frame (height: 50 cm; width: 150 cm). This apparatus prevented vision of the exact loudspeaker positions (as in Nava et al., 2009a, 2009b). Loudspeakers were raised approximately to ear level by wooden supports (height: 20 cm). With respect to the centre of the apparatus, loudspeakers were located at 55° and 20° on either side. A measuring tape (ranging from 0 to 150 cm, with 0.5 cm marks) was attached to the upper side of the arched frame, visible only to the experimenter, and was used to measure pointing responses. Participants sat in front of the apparatus, with their body midline aligned with the centre of the setup. Their chest was adjacent to the edge of the table, their hand rested on the table next to the sternum and along their body midline, and their head was not restrained. The approximate distance between the centre of the head and the apparatus was 65 cm. Note that restraining of head position was intentionally avoided, as this could potentially reduce sound localisation ability, particularly for CI participants (Brimijoin et al., 2010). Nonetheless, the experimenter sat opposite the participant, on the other side of the apparatus, and delivered each stimulus only when the participant was facing straight ahead. The auditory stimulus was a white-noise burst (duration 200 ms), always delivered from one loudspeaker at a time and triggered manually by the experimenter. Stimulus loudness was approximately 60 dB SPL (A-weighted), as measured from the participant's head location. The experiment was implemented in OpenSesame (Mathôt et al., 2012) on a DELL Latitude D620 laptop computer. Although the experimenter was always visible to the participant, he/she could not bias the participant's response, because the sequence of sound locations was randomised by the computer and unknown to the experimenter.
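For illustration, a white-noise burst stimulus of this kind can be generated in a few lines of code. The sketch below is ours, not part of the original OpenSesame implementation; the onset/offset ramps and the numpy/sounddevice toolchain are assumptions, and the ~60 dB(A) SPL level must in any case be set by calibration at the listener's head position.

```python
import numpy as np
import sounddevice as sd  # assumed playback backend; any audio library works

FS = 44100  # sampling rate (Hz)

def white_noise_burst(duration=0.200, fs=FS, ramp_ms=5):
    """Gaussian white-noise burst (200 ms in the study) with brief
    onset/offset ramps, added here to avoid audible clicks."""
    n = int(duration * fs)
    noise = np.random.randn(n)
    noise /= np.max(np.abs(noise))          # normalise to full scale
    r = int(ramp_ms / 1000 * fs)
    env = np.ones(n)
    env[:r] = np.linspace(0.0, 1.0, r)      # linear onset ramp
    env[n - r:] = np.linspace(1.0, 0.0, r)  # linear offset ramp
    return noise * env

# The digital amplitude below (0.1) is arbitrary; the actual ~60 dB(A)
# presentation level depends on the playback gain, verified with an SPL
# meter at the listener's head position.
sd.play(0.1 * white_noise_burst(), FS)
sd.wait()
```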
2.3.2. Procedure

Participants were instructed to keep their head and eyes still during sound delivery, and to indicate the perceived horizontal location of each sound immediately after sound onset, by touching with the index finger of their right hand the upper edge of the arched


Table 1. Anamnestic data for all CI users.

Patient ID | Sex | Handedness | Age at test (yrs) | Deafness onset | Implanted ear | Age at implantation (yrs) | Duration of deafness (yrs) | CI use (yrs) | CI brand | Use of hearing aids during test
CI1 | F | Right | 38 | Post-verbal | Left | 31 | 11 | 7 | MED-EL | No
CI2 | M | Right | 57 | Post-verbal | Right | 55 | 5 | 1 | MXM | No
CI3 | F | Left | 45 | Pre-verbal | Right | 42 | 38 | 3 | MXM | No
CI4 | F | Ambidextrous | 42 | Post-verbal | Left | 36 | 12 | 6 | MED-EL | No
CI5 | F | Right | 53 | Post-verbal | Left | 51 | 18 | 1 | MXM | No
CI6 | F | Right | 29 | Pre-verbal | Left | 25 | 23 | 3 | Cochlear | No
CI7 | F | Right | 63 | Post-verbal | Right | 60 | 46 | 2 | MED-EL | No
CI8 | M | Right | 44 | Post-verbal | Left | 38 | 7 | 6 | MED-EL | No
CI9 | F | Left | 24 | Pre-verbal | Left | 23 | 20 | 1 | Cochlear | Yes
CI10 | F | Right | 80 | Post-verbal | Right | 78 | 27 | 2 | MXM | Yes
CI11 | F | Left | 44 | Post-verbal | Left | 35 | 31 | 8 | Cochlear | No
CI12 | M | Right | 33 | Pre-verbal | Right | 23 | 23 | 10 | Cochlear | No
CI13 | F | Right | 68 | Post-verbal | Right | 57 | 27 | 11 | Cochlear | No
CI14 | M | Right | 31 | Pre-verbal | Left | 27 | 24 | 4 | Cochlear | No
CI15 | M | Right | 28 | Post-verbal | Left | 25 | 19 | 3 | Cochlear | No
CI16 | F | Right | 39 | Post-verbal | Left | 28 | 25 | 11 | Cochlear | No
CI17 | M | Right | 18 | Post-verbal | Left | 5 | 1 | 13 | Cochlear | Yes

structure. The experimenter entered into the computer the number indicated on the measuring tape (not visible to the participant) before instructing the participant to return the right hand to the starting position and initiate the following trial. A total of 48 sounds was presented (12 trials for each loudspeaker location), and the task lasted about 7 min. No feedback was given during the task. Hearing controls performed the sound localisation task both in a binaural condition and with one ear plugged (monaural condition).

2.4. Elevation-discrimination task

2.4.1. Apparatus and stimuli

The apparatus for this task combined the one used for the sound localisation measure (i.e., the semicircular structure with loudspeakers hidden behind a black curtain) with a laptop computer placed centrally in front of the participant and raised to eye level on a platform. Each trial started with a white fixation cross appearing in the centre of the laptop display and remaining visible until response. Two auditory conditions were implemented: a sound-present condition (to examine the effect of abrupt sounds on visual attention) and a sound-absent condition (to examine visual attention in silence). In sound-present conditions, after a random delay (450–600 ms), an auditory stimulus (white noise, duration


100 ms) was emitted from a loudspeaker located 20° to the left or to the right of central fixation, or from both loudspeakers at the same time. Loudness of the auditory stimulus was approximately 60 dB (as measured from head position). In the sound-absent condition, no sound was delivered. One hundred milliseconds after sound delivery (sound-present conditions), or 450–600 ms after fixation onset (sound-absent condition), the visual target was presented. This consisted of a white filled circle (20 pixels radius, 0.6° of visual angle), appearing for 140 ms in the upper or lower hemifield with respect to the horizontal meridian passing through visual fixation (1.7° from the meridian), either in the left or right hemifield (13.7° from fixation). In sound-present conditions, the trial was considered congruent when the visual target and the sound appeared in the same hemispace, and incongruent when they appeared in opposite hemispaces. Finally, trials in which the sound was delivered from both speakers were considered neutral.

2.4.2. Procedure

Participants sat at the experimental table with their head on a chinrest, at a viewing distance of 50 cm from the laptop monitor. They were asked to keep their gaze towards central fixation throughout the task and to indicate as quickly and accurately as possible the elevation of visual targets.
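To make the trial structure concrete, the sketch below summarises the timing parameters and the congruency coding described above. It is a standalone illustration, not the original OpenSesame script; the function and variable names are ours.

```python
import random

# Timing parameters from the Methods (all in ms)
FIXATION_DELAY = (450, 600)  # random delay after fixation onset
CUE_DURATION = 100           # white-noise cue
TARGET_DELAY = 100           # target onset relative to sound delivery
TARGET_DURATION = 140
RESPONSE_TIMEOUT = 2000

def classify_trial(cue, target_side):
    """Audio-visual coding used in the analyses. `cue` is 'left', 'right',
    'both' (neutral) or None (visual-only block)."""
    if cue is None:
        return "visual-only"
    if cue == "both":
        return "neutral"
    return "congruent" if cue == target_side else "incongruent"

# 180 audio-visual trials, equally split across congruent, incongruent and
# neutral, plus a 60-trial visual-only block (240 trials in total).
audio_visual = [(c, t) for c in ("left", "right", "both")
                for t in ("left", "right")] * 30
random.shuffle(audio_visual)
labels = [classify_trial(c, t) for c, t in audio_visual]
assert labels.count("congruent") == labels.count("incongruent") \
    == labels.count("neutral") == 60
```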

Table 2. CI users' scores on the Nijmegen Cochlear Implant Questionnaire (NCIQ; Hinderink et al., 2000) and on the Short Form Health Survey questionnaire (SF-36 V1 standard; Apolone et al., 2000). A dash indicates missing data.

Patient (code) | SF-36 Physical component | SF-36 Mental component | NCIQ Basic sound perception | NCIQ Social interactions | NCIQ Advanced sound perception | NCIQ Self esteem | NCIQ Speech production | NCIQ Activity limitations | NCIQ total score
CI1 | 59 | 50 | 53 | 58 | 100 | 48 | 50 | 60 | 61
CI2 | 59 | 57 | 60 | 53 | 88 | 50 | 55 | 50 | 59
CI3 | 56 | 32 | 93 | 73 | 75 | 78 | 68 | 88 | 79
CI4 | – | – | 83 | 60 | 73 | 38 | 75 | 90 | 70
CI5 | 50 | 50 | 38 | 61 | 100 | 42 | 45 | 70 | 59
CI6 | 50 | 28 | 95 | 53 | 68 | 61 | 65 | 55 | 66
CI7 | 56 | 58 | 95 | 81 | 88 | 80 | 78 | 93 | 86
CI8 | 53 | 59 | 55 | 73 | 78 | 78 | 48 | 95 | 71
CI9 | 61 | 37 | 80 | 81 | 75 | 83 | 63 | 88 | 78
CI10 | 50 | 58 | 38 | 69 | 100 | 73 | 43 | 60 | 64
CI11 | 53 | 53 | 61 | 53 | 71 | 58 | 73 | 86 | 67
CI12 | 55 | 48 | 90 | 83 | 90 | 85 | 83 | 98 | 88
CI13 | 56 | 55 | 95 | 88 | 100 | 55 | 88 | 97 | 87
CI14 | 52 | 49 | 73 | 55 | 73 | 55 | 48 | 68 | 62
CI15 | 43 | 48 | 85 | 90 | 93 | 78 | 83 | 90 | 86
CI16 | 30 | 24 | 55 | 58 | 48 | 13 | 25 | 50 | 41
CI17 | 50 | 38 | 63 | 67 | 85 | 35 | 70 | 63 | 64


Up/down responses were given using the up/down arrow keys on an external keyboard, with the middle and index fingers, respectively. Participants had a timeout of 2000 ms to give their answer and received feedback on accuracy (percentage of correct responses) and mean response time (in ms) only at the end of each block. Importantly, they were also explicitly informed that the sounds were always entirely task-irrelevant. The experiment started with two blocks of practice trials, one visual and one audio-visual; participants received a total of 16 practice trials. Sound-present and sound-absent conditions were blocked, and each participant completed three blocks in which sounds were presented (180 audio-visual trials overall, equally divided between congruent, incongruent and neutral trials) and one block in which sounds were absent (60 trials). In total, the experiment comprised 240 trials. The audio-visual blocks followed one another, whereas the visual-only block was presented either before or after the audio-visual ones (counterbalanced between participants). The experiment was implemented in OpenSesame (Mathôt et al., 2012).

3. Results

3.1. Sound localisation task

To characterise sound-localisation performance for unilateral CI users and for NH participants in both binaural and monaural conditions, we examined three dependent variables: (1) the unsigned constant error, computed as the mean of the absolute difference on each trial between the source azimuth (i.e., the position of the speaker used to produce the target sounds, in degrees) and the response azimuth (i.e., where the participant pointed on the graduated scale); (2) the signed constant error (or bias), computed as the mean of the arithmetic differences between the source azimuth and the response azimuth; and (3) the variable error, computed as the standard deviation of responses. Each dependent variable was computed separately for each source azimuth. Table 3 reports the values of each dependent variable as a function of source azimuth, separately for NH participants tested in the bilateral hearing condition, the same group of NH participants tested in the unilateral hearing condition, and unilateral CI users. The rightmost column reports the values of chance performance. Following Grantham et al. (2007), we computed chance performance for each error measure via computer simulations: custom-made software in MATLAB generated random responses (using the −75° to +75° scale, as available to participants) to 1000 repetitions of the 48 trials (4 source azimuths × 12 repetitions), i.e., the number of trials in a sound localisation block.
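The chance simulation is easy to reproduce. The sketch below is a Python re-implementation for illustration only (the original used custom MATLAB software); it assumes random responses drawn uniformly over the −75° to +75° scale and codes the signed error as response minus source, so that positive values indicate a rightward bias.

```python
import numpy as np

AZIMUTHS = (-55, -20, 20, 55)  # source azimuths (degrees)
N_REPS = 12                    # trials per loudspeaker

def error_measures(source, responses):
    """The three per-azimuth error measures (degrees): unsigned constant
    error, signed constant error (response minus source, positive =
    rightward bias) and variable error."""
    responses = np.asarray(responses, dtype=float)
    unsigned = np.mean(np.abs(responses - source))
    signed = np.mean(responses - source)
    variable = np.std(responses, ddof=1)
    return unsigned, signed, variable

# Chance levels, after Grantham et al. (2007): 1000 simulated 48-trial
# blocks in which responses fall uniformly at random on the response scale
# (the uniform distribution is an assumption).
rng = np.random.default_rng(0)
for az in AZIMUTHS:
    sims = [error_measures(az, rng.uniform(-75, 75, N_REPS))
            for _ in range(1000)]
    u, s, v = np.mean(sims, axis=0)
    print(f"{az:+d} deg: unsigned {u:.1f}, signed {s:+.1f}, variable {v:.1f}")
```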


3.1.1. NH participants: bilateral hearing condition

The top row of Fig. 1 shows responses as a function of source azimuth for two representative NH participants (ID 21 and ID 17) localising sounds in the bilateral hearing condition. Horizontal gray lines indicate the correct response, crosses indicate actual responses, and the black triangle indicates the mean response for each source azimuth. The box plot in the same top row illustrates the distribution of responses across all participants (N = 17). NH participants were overall accurate when listening with both ears. To examine their performance on the three dependent variables introduced above, we entered each error measure in a separate Analysis of Variance (ANOVA) with SOURCE AZIMUTH (−55°, −20°, +20°, +55°) as within-participant variable. The analysis on unsigned constant errors revealed performance well above chance (one-sample t-tests, p < 0.001; see Table 3 for mean values), and comparable across sound positions (main effect of SOURCE AZIMUTH: F(3,48) = 0.737, p = 0.535). The analysis on signed constant errors, however, revealed a main effect of SOURCE AZIMUTH (F(3,48) = 5.771, p = 0.002, η²p = 0.265), caused by a small bias to localise left sounds towards the midline (i.e., rightwards; one-sample t-tests against zero, p < 0.03 for both left source azimuths). This asymmetry may reflect biomechanical constraints in pointing to leftward sounds with the right hand (i.e., the only hand used for pointing responses). Finally, the variable error was also modulated by SOURCE AZIMUTH (F(3,48) = 3.582, p = 0.02, η²p = 0.183), reflecting a larger variable error for the locations further away from the midline (3.6° ± 1.0°) compared to locations near the midline (2.9° ± 0.8°; t(16) = 2.709, p = 0.008). Note, however, that both the signed constant error and the variable error were significantly smaller than chance performance (one-sample t-tests, p < 0.001; see Table 3 for mean values).

3.1.2. NH participants: unilateral hearing condition

The second and third rows in Fig. 1 show the performance of NH participants when tested in the right unilateral hearing condition (N = 6) and the left unilateral hearing condition (N = 11), respectively. Overall, responses were more dispersed compared to when the same participants performed the task in bilateral hearing. Moreover, biases towards the hearing side were evident: sources on the hearing (unplugged) side were localised more accurately, whereas sources on the plugged side were significantly misplaced towards

Table 3. Unsigned constant error, signed constant error and variable error (in degrees; SD in parentheses) in NH participants (tested binaurally and monaurally) and in CI users. The rightmost column reports chance performance (computer responding randomly, N = 1000). Positive signed constant errors indicate rightward biases.

Unsigned constant error
Source azimuth | NH binaural (N = 16) | NH unilateral R (N = 6) | NH unilateral L (N = 11) | CI R users (N = 6) | CI L users (N = 11) | Chance
−55° | 4.7 (1.9) | 37.5 (19.7) | 9.6 (4.6) | 25.5 (19.4) | 38.2 (22.7) | 57.5 (11.3)
−20° | 5.0 (2.7) | 31.4 (14.9) | 15.9 (9.7) | 28.0 (15.0) | 32.2 (17.6) | 38.8 (7.2)
+20° | 4.1 (1.8) | 20.4 (15.7) | 23.2 (14.4) | 44.7 (21.6) | 39.1 (16.3) | 38.8 (7.2)
+55° | 5.1 (2.1) | 20.0 (14.2) | 36.1 (28.7) | 69.0 (43.5) | 59.1 (28.1) | 57.5 (11.2)

Signed constant error
Source azimuth | NH binaural | NH unilateral R | NH unilateral L | CI R users | CI L users | Chance
−55° | 2.6 (3.8) | 37.0 (20.5) | 2.7 (9.4) | 22.6 (22.9) | 35.9 (24.2) | 55.4 (12.1)
−20° | 2.8 (4.8) | 26.5 (19.0) | −9.2 (14.8) | −5.4 (28.2) | 1.1 (29.7) | 19.7 (11.5)
+20° | −1.3 (4.0) | 14.9 (20.9) | −13.3 (22.3) | −26.5 (43.2) | −21.3 (27.1) | −20.2 (12.0)
+55° | −1.6 (4.9) | −11.8 (22.3) | −34.1 (30.5) | −67.5 (45.9) | −57.8 (29.4) | −55.4 (12.0)

Variable error
Source azimuth | NH binaural | NH unilateral R | NH unilateral L | CI R users | CI L users | Chance
−55° | 3.6 (0.9) | 9.7 (5.2) | 6.7 (3.5) | 13.3 (13.5) | 27.1 (12.9) | 41.4 (6.4)
−20° | 2.9 (0.9) | 17.5 (11.5) | 9.8 (5.2) | 17.5 (13.0) | 26.7 (14.1) | 41.4 (6.0)
+20° | 2.8 (1.0) | 9.8 (3.1) | 11.4 (8.9) | 12.2 (11.3) | 31.2 (13.3) | 41.0 (6.0)
+55° | 3.6 (1.3) | 7.3 (4.6) | 11.1 (8.3) | 15.5 (12.5) | 31.0 (15.2) | 41.3 (6.1)


Fig. 1. Individual and average responses for representative NH participants (scatter plots) and distribution of mean responses across all NH participants (box plots) as a function of hearing condition (bilateral, unilateral right, unilateral left) and source azimuth (−55°, −20°, +20°, +55°). In the scatterplots illustrating performance for individual participants, horizontal gray lines indicate the correct response, crosses indicate actual responses, and the black triangle indicates the mean response for each source azimuth.

the open ear. To examine the performance of NH unilateral participants as a function of source azimuth and hearing side, we entered each of the three error measures separately in an ANOVA with SOURCE AZIMUTH (−55°, −20°, +20°, +55°) as within-participant variable and HEARING SIDE (left or right) as between-participant variable. To correct for violations of the sphericity assumption in some of the analyses, we adopted the Greenhouse-Geisser method, which corrects the degrees of freedom of the F-distribution based on an estimated ε. The analysis on unsigned constant errors revealed an interaction between SOURCE AZIMUTH and HEARING SIDE (F(1.97,29.49) = 6.424, p = 0.005, η²p = 0.300, ε = 0.655). This interaction is illustrated in Fig. 2a. NH participants with the right ear open made larger unsigned constant errors for left than right positions (34.4° ± 13.6° vs. 13.6° ± 7.4°; t(5) = 1.595, p = 0.042, one-tailed); conversely, NH participants with the left ear open made larger unsigned constant errors for right than left positions (29.6° ± 21.0° vs. 12.8° ± 5.2°; t(5) = 2.416, p = 0.009, one-tailed).

This documents, in our NH controls, the well-known difficulty in localising sounds in the hemispace ipsilateral to the plugged ear (Butler et al., 1990; Oldfield and Parker, 1986; Slattery and Middlebrooks, 1994). Nonetheless, for all sound positions the unsigned error remained significantly different from chance (all one-sample t-tests, p < 0.001). A similar analysis on signed constant errors revealed a main effect of GROUP (F(1,15) = 18.020, p < 0.001, η²p = 0.546), caused by an average rightward bias in NH participants with right unilateral hearing (16.7 ± 12.7) and an average leftward bias in NH participants with left unilateral hearing (−13.5 ± 14.6). The main effect of SOURCE AZIMUTH was also significant (F(1.284,19.26) = 16.204, p < 0.001, η²p = 0.519, ε = 0.428; see Fig. 2b). The signed constant error was positive for left positions (−55°: 14.8 ± 21.7; −20°: 3.4 ± 23.7) and negative for right positions (+20°: −3.4 ± 25.3; +55°: −26.2 ± 29.3). For all sound positions the signed constant error


Fig. 2. Unsigned constant error and signed constant error in NH participants (panels a and b) and CI users (panels c and d) as a function of hearing condition and source azimuth. Error bars indicate the standard error of the mean.

was always significantly different from chance (all one-sample t-tests, p < 0.01). Finally, the analysis on variable error revealed an interaction between SOURCE AZIMUTH and GROUP (F(2.347,35.203) = 3.308, p = 0.041, η²p = 0.181, ε = 0.782). This interaction reflected a tendency for higher variable error when pointing to right than left sounds (11.3° ± 7.9° vs. 8.3° ± 3.7°; t(10) = 1.438, p = 0.045, one-tailed) in left unilateral hearing controls. Once again, for all sound positions the variable error remained significantly different from chance (all one-sample t-tests, p < 0.001). In sum, when compared with the binaural condition, unilateral hearing considerably degraded performance. The unsigned constant error increased from 4.7° ± 1.2° to 23.4° ± 10.5° (t(16) = 6.994, p < 0.001). The signed constant error changed from 0.3 ± 2.6 to 16.7 ± 12.7 (i.e., a rightward bias; t(5) = 2.068, p < 0.03) for the group with right unilateral hearing, and from 1.2 ± 2.4 to −13.5 ± 14.6 (i.e., a leftward bias; t(10) = 2.898, p < 0.005) for the group with left unilateral hearing. In both cases, this reveals a bias towards the side ipsilateral to the open ear. Finally, the variable error also increased from the binaural to the monaural condition (from 3.2° ± 0.7° to 10.2° ± 4.8°; t(16) = 5.488, p < 0.001).

3.1.3. CI users

Fig. 3 illustrates the performance of four representative right CI

users (ID 3, ID 11, ID 8 and ID 14), four representative left CI users (ID 2, ID 15, ID 12, ID 7), as well as the average of responses in the two groups of CI users. The performance of CI users is considerably noisier compared to NH controls tested binaurally or monaurally. Moreover, performance appears inconsistent across CI users. Although some CI users showed some discrimination between sound sources in opposite hemispaces (e.g., ID 3 and ID 2), others provided responses that were scattered across the entire pointing area (e.g., ID 11, ID 15, ID 7), showed extreme lateral biases (e.g., ID 8 and ID 12), or confined their responses to the midline area (e.g., ID 14). To examine the performance of CI users as a function of source azimuth and hearing side, we entered each of the three error measures separately in an ANOVA with SOURCE AZIMUTH (−55°, −20°, +20°, +55°) as within-participant variable and CI SIDE (left or right) as between-participant variable. As before, violations of the sphericity assumption were treated using the Greenhouse-Geisser correction. The analysis on unsigned constant errors revealed a main effect of SOURCE AZIMUTH (F(1.84,27.65) = 7.149, p = 0.004, η²p = 0.323, ε = 0.614), caused by larger errors for right than left sounds (51.8° ± 23.4° vs. 32.2° ± 15.0°; t(16) = 2.303, p = 0.02), and particularly for the rightmost source azimuth (+55°: 62.6° ± 33.3°; p < 0.01 for all t-test comparisons with the other source azimuths). This effect was not modulated as a function of CI side (interaction between


Fig. 3. Individual and average responses for representative CI users (scatter plots) and distribution of mean responses across all CI users (box plots) as a function of hearing condition (CI right or CI left) and source azimuth (−55°, −20°, +20°, +55°). In the scatterplots illustrating performance for individual participants, horizontal gray lines indicate the correct response, crosses indicate actual responses, and the black triangle indicates the mean response for each source azimuth.

SOURCE AZIMUTH and CI SIDE, F(1.84,27.65) = 0.76; see Fig. 2c). Performance of CI users was significantly above chance only for the leftmost sound position (−55°: t(16) = 4.481, p < 0.001), and marginally above chance for the sound position to the left of the midline (−20°: t(16) = 2.039, p = 0.058). By contrast, the

unsigned constant error was not different from chance for sounds delivered from the right hemispace. A similar analysis on signed constant errors revealed a main effect of SOURCE AZIMUTH (F(1.89,28.40) = 42.367, p < 0.001, η²p = 0.739, ε = 0.631; see Fig. 2d), reflecting a bias to point leftwards when the


sound was delivered from the right (+20°: −23.2 ± 32.4; +55°: −61.2 ± 34.9; p < 0.01 on one-sample t-tests against zero), and a bias to point rightwards when the sound was delivered from the leftmost source (−55°: 31.2 ± 24.0; p < 0.001 on one-sample t-tests against zero). Again, performance was significantly different from chance only for left sound sources (−55°: t(16) = 3.028, p = 0.008; −20°: t(16) = 4.159, p < 0.001). By contrast, the signed constant error was comparable to chance performance when sounds were delivered in the right hemispace. Finally, the analysis on variable error revealed only a main effect of CI SIDE (F(1,15) = 5.462, p = 0.034, η²p = 0.267, ε = 0.833), caused by a larger variable error in left (29.0° ± 12.7°) than right CI users (14.6° ± 10.7°). The variable error was always significantly different from chance (for all source azimuths, p < 0.001). In sum, when compared with NH participants tested in the monaural condition, unilateral CI users showed considerably worse performance. The unsigned constant error increased from 23.4° ± 10.5° in NH participants to 42.0° ± 12.4° in CI users (t(16) = 4.193, p < 0.001; chance level across all source azimuths: 48.2°). On average, the signed constant error did not differ between NH participants (2.9 ± 20.1) and CI users (−13.6 ± 23.0; t(16) = 0.812; chance level across all source azimuths: 0.1). However, the difference between groups in this error measure is clearly evident when comparing participants with matched unilateral experience (e.g., right unilateral NH participants and right CI users). While both left unilateral NH participants and left CI users showed a leftward bias in pointing to sounds (−13.5 ± 14.6 vs. −10.5 ± 21.5, respectively), right unilateral NH participants showed a bias towards the right (16.7 ± 12.7), whereas right CI users showed a bias towards the left (−19.2 ± 26.5; t(16) = 2.115, p = 0.025). Thus, no consistent bias towards the hearing side was detected in CI users. Finally, the variable error was smaller in NH unilateral controls than in CI users (10.2° ± 4.8° vs. 23.9° ± 13.7°, respectively; t(16) = 3.266, p = 0.002; chance level across all source azimuths: 41.3°).

3.2. Elevation-discrimination task

3.2.1. Visual-only trials

To assess whether unilateral hearing biased visual attention in the absence of any concurrent sounds, we examined visual discrimination performance as a function of visual target position (ipsilateral vs. contralateral with respect to the hearing side) in CI users and in NH controls tested in the monaural condition. An ANOVA with VISUAL TARGET POSITION and GROUP as factors revealed no main effect of VISUAL TARGET POSITION, nor an interaction between VISUAL TARGET POSITION and GROUP. The main effect of GROUP was marginally significant, F(1,32) = 3.19, p = 0.08, η²p = 0.09, caused by a tendency for overall slower RTs in CI compared to NH participants (NH, mean = 316 ms, SE = 19 ms; CI, mean = 370 ms, SE = 24 ms). A similar ANOVA was run on arcsine-transformed percent correct data. This analysis also revealed no significant main effect or interaction (all Fs < 1).

3.2.2. Audio-visual trials: spatial multisensory cueing effect

To confirm that our version of the audio-visual cueing paradigm produced reliable spatial multisensory cueing (from now on, spatial MSC), we first examined the performance of NH participants tested in the bilateral hearing condition, as in Koelewijn et al. (2009) and in all previous work on the audio-visual cueing effect.
RTs for audio-visual trials were entered into an ANOVA with ACTIVE LOUDSPEAKER (both, left and right) and VISUAL TARGET POSITION (left and right) as within-participant variables. The analysis revealed the expected interaction between ACTIVE LOUDSPEAKER and VISUAL TARGET POSITION, F(2,32) = 27.77, p < 0.0001, η²p = 0.63. This interaction is illustrated in Fig. 4a. When sounds were presented on the left,


discrimination was faster for left (mean = 281 ms, SE = 15 ms) than right visual targets (mean = 314 ms, SE = 21 ms; p = 0.0001 on paired t-test). By contrast, when sounds were presented on the right, discrimination was faster for right (mean = 282 ms, SE = 17 ms) than left visual targets (mean = 318 ms, SE = 22 ms; p = 0.0003 on paired t-test). No difference in RTs between visual targets emerged when sounds were delivered from both speakers simultaneously. A similar analysis on arcsine-transformed percent correct data revealed no main effects or interaction (all Fs < 1.6). Having established the robustness of the spatial MSC in our experimental setup, we examined to what extent it was affected by monaural hearing in NH controls and in CI users. To compare results across participants who heard monaurally on different sides (CI on the left or right side; monaural hearing on the left or right side), we re-coded cue sound position for each participant as ipsilateral or contralateral with respect to the hearing side. For instance, cue sounds to the left of the body midline were recoded as ipsilateral in CI users with the implant on the left side, but as contralateral in CI users with the implant on the right side. We then conducted two separate ANOVAs with ACTIVE LOUDSPEAKER (both, ipsilateral or contralateral) and VISUAL TARGET POSITION (ipsilateral or contralateral) as within-participant factors. The analysis on RTs for NH participants in the monaural hearing condition revealed a main effect of ACTIVE LOUDSPEAKER, F(2,32) = 3.34, p = 0.05, η²p = 0.17, and an interaction between ACTIVE LOUDSPEAKER and VISUAL TARGET POSITION, F(2,32) = 11.55, p = 0.0002, η²p = 0.42. This interaction is illustrated in Fig. 4b. When sounds were ipsilateral to the hearing side, discrimination was faster for ipsilateral (mean = 281 ms, SE = 15 ms) than contralateral visual targets (mean = 294 ms, SE = 16 ms; p = 0.02 on paired t-test). By contrast, when sounds were contralateral to the hearing side, discrimination was faster for contralateral (mean = 286 ms, SE = 16 ms) than ipsilateral visual targets (mean = 301 ms, SE = 17 ms; p = 0.02 on paired t-test). No difference in RTs between visual targets emerged when sounds were delivered from both speakers simultaneously. A similar analysis on arcsine-transformed percent correct data revealed no main effects or interaction (all Fs < 1). The analysis on RTs for CI users showed no significant main effect or interaction, as can be observed in Fig. 4c (all Fs < 1). A similar analysis on arcsine-transformed percent correct data revealed a main effect of ACTIVE LOUDSPEAKER, F(2,32) = 4.18, p = 0.02, η²p = 0.20. This was caused by better visual discrimination when the sounds were delivered by both speakers simultaneously (mean percent correct = 99.4, SE = 0.003), compared to when they were delivered from either the ipsilateral (mean percent correct = 98.0, SE = 0.007, p = 0.05) or the contralateral speaker (mean percent correct = 97.9, SE = 0.007, p = 0.04). No other main effect or interaction reached significance (all Fs < 1). To compare spatial MSC within the NH group as a function of hearing condition (i.e., NH binaural vs. NH monaural), and between NH and CI participants in the monaural hearing condition, we computed a spatial MSC index as the difference in RTs between trials in which sound and target position were incongruent (‘invalid’ trials in the audio-visual cueing literature) and trials in which they were congruent (‘valid’ trials).
This index was computed on RTs only, because no audio-visual cueing emerged in the accuracy measure. Also, because CI users were on average slower responders than NH monaural participants (CI users, mean = 352 ms, SE = 24 ms; NH monaural, mean = 289 ms, SE = 16 ms; p = 0.03 on independent-samples t-test), we also normalised the MSC index by the average RT of each participant.
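In code, the index reduces to a per-participant difference of means. The sketch below is ours, with hypothetical example values, and illustrates the computation and the normalisation just described.

```python
import numpy as np

def spatial_msc(rt_congruent, rt_incongruent, rt_overall):
    """Spatial MSC index: mean RT on incongruent ('invalid') trials minus
    mean RT on congruent ('valid') trials, returned both in ms and
    normalised by the participant's overall mean RT (to compensate for
    the generally slower responses of CI users). Inputs are arrays of
    correct-trial RTs in ms."""
    index_ms = np.mean(rt_incongruent) - np.mean(rt_congruent)
    return index_ms, index_ms / np.mean(rt_overall)

# Illustration with hypothetical RTs for one participant:
congruent = np.array([270.0, 285.0, 290.0])
incongruent = np.array([310.0, 320.0, 315.0])
overall = np.concatenate([congruent, incongruent])
print(spatial_msc(congruent, incongruent, overall))
```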


Fig. 4. Response times (RTs) in milliseconds in the audio-visual conditions of the elevation-discrimination task for (a) NH controls tested binaurally; (b) NH controls tested monaurally; (c) CI users. Note that in plots (b) and (c) visual targets and cueing sounds are coded with respect to the CI/unplugged side (ipsilateral or contralateral; see text for details). The dashed line in each plot indicates the average performance in the visual-only condition for each group. Error bars indicate the standard error.

In NH participants, spatial MSC was smaller in the monaural than in the binaural hearing condition (analysis on actual RT data: p = 0.004 on paired t-test; analysis on normalised RT data: p = 0.003 on paired t-test; see Table 4 for details). In CI users, no spatial MSC emerged, leading to a significant difference with respect to the spatial MSC measured in the NH participants tested monaurally (analysis on actual RT data: p = 0.03; analysis on normalised RT data: p = 0.01 on independent-samples t-test; see Table 4).

Table 4. Spatial and non-spatial MSC in CI users and NH controls, expressed in milliseconds (SE in parentheses) or normalised with respect to the average response time of each participant.

Group | Spatial MSC (ms) | Spatial MSC (normalised) | Non-spatial MSC (ms) | Non-spatial MSC (normalised)
NH binaural | 35 (6) | 0.11 (0.02) | 11 (8) | 0.03 (0.02)
NH monaural | 14 (3) | 0.04 (0.01) | 26 (6) | 0.07 (0.02)
CI users | 1 (5) | 0.00 (0.01) | 18 (6) | 0.05 (0.01)

3.2.3. Audio-visual trials: non-spatial multisensory cueing

The horizontal dashed lines in Fig. 4 indicate average performance in the visual-only condition, and suggest that audio-visual trials were overall faster than visual-only trials, especially under monaural hearing (Fig. 4b and c). Any difference between the visual-only condition and the audio-visual condition, irrespective of cue sound location, can be described as a form of non-spatial multisensory cueing (from now on, non-spatial MSC). Recall that the task entailed a visual discrimination; hence, any RT advantage in the absence of reduced discrimination accuracy (i.e., of a speed-accuracy trade-off) indicates improved visual discrimination. To examine the non-spatial MSC we calculated the difference in RTs between audio-visual and visual-only trials and tested it against zero for each group.
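A companion sketch for the non-spatial index follows, again ours rather than the original analysis code; the pooling of congruent, incongruent and neutral trials into the audio-visual mean, and the example values, are assumptions.

```python
import numpy as np
from scipy import stats

def non_spatial_msc(rt_visual_only, rt_audio_visual, rt_overall):
    """Non-spatial MSC: mean RT on visual-only trials minus mean RT on
    audio-visual trials, irrespective of cue location (here assumed to
    pool congruent, incongruent and neutral cue trials), in ms and
    normalised by the participant's overall mean RT."""
    index_ms = np.mean(rt_visual_only) - np.mean(rt_audio_visual)
    return index_ms, index_ms / np.mean(rt_overall)

# One index per participant in a group, tested against zero as in the
# text (values below are hypothetical):
group_indices = np.array([31.0, 22.5, 18.0, 40.2, 12.3, 28.9])
t, p = stats.ttest_1samp(group_indices, popmean=0.0)
print(f"t({group_indices.size - 1}) = {t:.2f}, p = {p:.4f}")
```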


Average non-spatial MSC values for all groups (NH binaural, NH monaural, and CI users) are reported in Table 4. As in the previous analysis of the spatial MSC, the analyses were conducted both on actual and on normalised RTs. For the NH participants tested binaurally, the non-spatial MSC was not different from zero (p > 0.05). Instead, for NH participants tested monaurally and for the CI group alike, the non-spatial MSC was different from zero (p = 0.0001 and p = 0.002, respectively). The non-spatial MSC was statistically comparable in NH participants (monaural condition) and CI users (p > 0.05 on independent t-test).

3.3. Correlation analyses

We predicted that multisensory links in audio-visual attention would be influenced by spatial hearing abilities. To assess this prediction, we ran correlation tests between our spatial-hearing indicators (i.e., unsigned constant error, signed constant error and standard deviation of sound localisation) and the multisensory effects in the elevation-discrimination task (i.e., the spatial and non-spatial MSC indices). When the spatial MSC was considered, significant correlations emerged only with the unsigned constant error (r = −0.49, p = 0.003) and with the standard deviation of sound localisation (r = −0.47, p = 0.005), when CI users and NH participants were considered together. The results showed that the poorer the performance in the sound localisation task, the smaller the influence of spatial congruency in the audio-visual task, suggesting that sound localisation is related to spatial multisensory integration. These correlations were, however, non-significant when the two groups were considered separately. When the non-spatial MSC was considered, no significant correlation emerged. Regarding the anamnestic data, age at test was significantly correlated with the spatial MSC index (r = 0.7, p = 0.004). This strong correlation is compatible with increased benefits of multisensory stimulation in aged participants, as reported in previous studies (Laurienti et al., 2006). In order to understand whether age could modulate spatial MSC in the entire group of participants, we ran this correlation including also the NH group. This analysis also confirmed a significant correlation between the two variables (r = 0.4, p = 0.02). Age at implantation also correlated significantly with the spatial MSC index (r = 0.65, p = 0.005). However, this correlation might be explained by the highly significant positive correlation between age at test and age at implantation (r = 0.98, p < 0.001).
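These are simple Pearson correlations; the sketch below reproduces the computation with scipy on hypothetical values (the arrays are illustrative, not the study data).

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant values, for illustration only: unsigned
# constant error (degrees) from the pointing task and the normalised
# spatial MSC index from the cueing task, in the same participant order.
unsigned_error = np.array([5.1, 4.3, 23.8, 30.2, 41.5, 55.0])
spatial_msc_index = np.array([0.12, 0.10, 0.05, 0.03, 0.01, -0.01])

r, p = stats.pearsonr(unsigned_error, spatial_msc_index)
print(f"r = {r:.2f}, p = {p:.3f}")  # poorer localisation, smaller spatial MSC
```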
4. Discussion

The goal of the present study was to investigate audio-visual attention in monaural CI users, examining both spatial and non-spatial multisensory cueing in this population. A group of age-matched hearing controls was tested under both bilateral and unilateral hearing conditions, to provide a baseline for multisensory cueing effects in the typical hearing population. First, we measured the auditory spatial abilities of all participants, measuring in particular the signed constant error to capture any systematic tendency to mislocalise sounds towards the hearing side; this bias is typically observed in hearing controls tested with one ear plugged. Second, we tested whether prolonged monaural hearing produces systematic biases in visual attention orienting. To this aim, we measured visual discrimination for stimuli lateralised in the hemispace ipsilateral vs. contralateral to the hearing side, in the absence of any concurrent sounds (visual-only condition). Third, we used a classic multisensory cueing paradigm to test the integrity of spatial and non-spatial MSC in CI users. Results confirmed the spatial-hearing deficits of monaural CI users, but did not reveal any


directional bias in sound localisation in this population, unlike the hearing controls tested with one ear plugged. In addition, we found no evidence for a sustained lateralised bias in visual attention in monaural CI users. Importantly, however, we found that spatial MSC effects were eliminated in CI users compared to hearing controls, whereas non-spatial MSC effects were equally present in both groups when tested under monaural hearing conditions.

4.1. Monaural CI users show poor spatial hearing but no directional biases

The literature on sound localisation abilities in CI users has grown rapidly in the last decade (Beijen et al., 2007; Buhagiar et al., 2004; Grantham et al., 2008, 2007; Litovsky et al., 2012, 2010, 2009; Nava et al., 2009a; Távora-Vieira et al., 2015; van Hoesel and Tyler, 2003; van Zon et al., 2016; Zheng et al., 2015), with important contributions also from research in animal models (ferrets: Isaiah et al., 2014). One main indication that emerged from this line of research is that bilateral cochlear implantation typically results in better spatial hearing abilities compared to unilateral implantation. This behavioural advantage has been documented in bilateral CI recipients tested with two vs. one active device (Grantham et al., 2007; Litovsky et al., 2009; Seeber and Fastl, 2008; van Hoesel and Tyler, 2003), in group studies that compared sound localisation performance in bilateral vs. unilateral CI users (Kerber and Seeber, 2012; Murphy et al., 2011), and in patients who retained residual hearing in both ears after CI surgery and could thus benefit from bilateral hearing aids (best-aided hearing; Gifford et al., 2014). Notably, a recent multicentre randomised controlled trial (van Zon et al., 2016) found stable benefits of bilateral simultaneous CIs compared to a unilateral CI in 38 postlingually deafened adults tested across a 2-year period. Bilateral CI users performed better in terms of sound localisation, as well as discrimination of speech in noise when the stimuli came from different directions. However, the majority of adult CI recipients are unilateral CI users, and the limited spatial hearing afforded by a unilateral CI could impact the selection of relevant information from the environment. Most of the monaural CI users tested in the present study were at or near chance when localising sounds in space. While poor localisation performance emerged both in terms of unsigned constant localisation error and dispersion (i.e., standard deviation of the responses), we found no systematic localisation bias towards the side of the implant. This result conflicts with previous studies testing sound localisation in monaural CI recipients (Nava et al., 2009a) or bilateral CI users performing the task with a single CI active (Grantham et al., 2007). Moreover, it disagrees with the observation that hearing people with one ear plugged systematically misperceive sounds as originating from their hearing side (Butler et al., 1990; Oldfield and Parker, 1986; Slattery and Middlebrooks, 1994). The only localisation bias we detected in our CI group was a tendency to indicate sounds towards the left side of space (regardless of CI side). While this finding was unexpected, it is compatible with a sensory-motor error produced by right-arm use when pointing to sounds (Ocklenburg et al., 2010). In sum, our sound localisation method captured and replicated typical findings when comparing sound localisation performance with one vs.
In sum, our sound localisation method captured and replicated the typical findings when comparing sound localisation performance with one vs. two ears in normal hearing controls (i.e., increased spatial uncertainty and a bias towards the hearing side under the monaural condition); moreover, it confirmed the substantial difficulty of localising sounds in monaural CI users. Nevertheless, we acknowledge that our approach to sound localisation was suboptimal for providing a thorough characterisation of spatial hearing in our participants, thus limiting the generalisation of our spatial-hearing results. Specifically, even though the speakers were not visible and responses were provided on a continuous scale, the
limited number of sources (4 speakers) could have affected localisation error (Hartmann et al., 1998). Likewise, placing the apparatus on the table surface in a regular room of the clinical service increased the impact of sound reflections and may have further reduced the reliability of localisation. Finally, as already noted above, the use of manual pointing responses may have introduced biases in the measure.

4.2. Spatial and non-spatial audio-visual cueing of attention in monaural CI users

The innovative aspect of the present work is the study of spatial and non-spatial multisensory attention in monaural CI users. As anticipated in the Introduction, one key determinant of the spatial multisensory mechanisms that contribute to attention orienting is the participant's ability to perceive the spatial proximity between successive multisensory stimulations (Spence, 2010). In hearing participants tested binaurally this mechanism was entirely functional, resulting in strong facilitation (34 ms on average) of visual discrimination responses in the portion of space that was cued by the preceding (but non-predictive) sounds. When the same hearing participants were tested with one ear plugged this multisensory attention capture was reduced (14 ms on average) but still reliable. By contrast, audio-visual attention capture was minimal or absent in monaural CI users. This finding is compatible with the substantially reduced spatial-hearing abilities measured in monaural CI users.

While a deficit in spatial multisensory cueing could be predicted on the basis of the poor spatial hearing abilities systematically documented in unilateral CI users, we believe that providing direct evidence of this prediction is far from trivial. First, as anticipated in the Introduction, the neuropsychological literature on brain-damaged patients with neglect suggests that performance in sound localisation tasks may actually dissociate from the integrity of spatial multisensory cueing mechanisms. Patients with visual neglect can fail to localise sounds in contralesional space (Bisiach et al., 1984; Pavani et al., 2004, 2005, 2001), and yet show improved processing of contralesional visual stimuli when paired with sounds originating from the same region of space (Frassinetti et al., 2002, 2005). This indicates that difficulties in tasks that require explicit localisation of sounds do not necessarily preclude spatial multisensory integration. Second, the results of the present study make explicit that spatial-hearing deficits have consequences that extend beyond the auditory domain alone. Specifically, they show that spatial-hearing difficulties in monaural CI users impact the fundamental mechanism that serves attention orienting in the multisensory environment. In an everyday context in which even car manufacturers are starting to exploit spatial multisensory attention mechanisms when developing alert safety signals (Spence and Ho, 2012), documenting that monaural CI users cannot benefit from these cues is not at all trivial. Finally, and most importantly, the present results lead to the testable hypothesis that restoring binaural hearing, for instance with bilateral CI (van Zon et al., 2016), could result in improved sound localisation abilities as well as in the recovery of spatial multisensory cueing.
Importantly, even though we did not find evidence for spatial multisensory cueing in our group of monaural CI users, the results revealed clear non-spatial multisensory benefits in the same participants. Irrespective of location, unilateral CI users were faster to respond to a visual target in the presence of an irrelevant auditory signal than when the sound was absent.
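As a concrete illustration, the sketch below (Python; variable names are ours, not from the study's analysis pipeline) shows one straightforward way to quantify the two effects discussed here from median correct reaction times, assuming trials are sorted into spatially cued, spatially uncued, and visual-only conditions.

import numpy as np

def msc_effects(rt_cued, rt_uncued, rt_visual_only):
    # rt_cued        : RTs when sound and visual target share a location
    # rt_uncued      : RTs when sound and visual target are on opposite sides
    # rt_visual_only : RTs when no sound accompanies the visual target
    cued, uncued, visual = (np.median(x) for x in (rt_cued, rt_uncued, rt_visual_only))
    spatial_msc = uncued - cued                     # >0: facilitation at the cued location
    non_spatial_msc = visual - (cued + uncued) / 2  # >0: any sound speeds responses
    return spatial_msc, non_spatial_msc

# Example with hypothetical RT samples in ms:
print(msc_effects([420, 450, 430], [460, 480, 470], [500, 490, 510]))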

4.3. Audio-visual processing in CI users

Although, to the best of our knowledge, our study is the first to investigate spatial and non-spatial multisensory cueing in CI users, the literature on audio-visual interactions in this population has grown rapidly in the last decade. Audio-visual interactions have been examined primarily in relation to linguistic materials, but more recent work is starting to shed light on audio-visual integration for non-linguistic stimuli as well.

One of the first studies of audio-visual interactions during language processing in CI users was conducted by Bergeson et al. (2005). They assessed sentence comprehension in 80 prelingually deaf children with CI, tested with auditory-only, visual-only or audio-visual presentation of the stimuli. Audio-visual presentation led to better performance than either unisensory presentation condition (Bergeson et al., 2003; Kaiser et al., 2003; see also Lachs et al., 2001). Interestingly, the audio-visual sentence comprehension abilities of children with CI improved consistently over the 5-year period after CI surgery.

Subsequent studies primarily examined audio-visual integration in speech by taking advantage of the McGurk illusion (Rouger et al., 2008, 2007; Schorr et al., 2005; Stropahl et al., 2015b; Tremblay et al., 2010; but see Hay-McCutcheon et al., 2009; Winn et al., 2013). The McGurk illusion (McGurk and MacDonald, 1976) consists of the illusory perception of a spoken phoneme when the lip movements of an incongruent phoneme are concurrently seen (e.g., heard 'ba' plus seen 'ga' results in illusory 'da'). A study of children born deaf who received a CI between 2 and 8 years of age reported that, for most children, perception was dominated by vision in trials that typically induce audio-visual fusions (Schorr et al., 2005). Nonetheless, a proportion of children with CI did show audio-visual fusions, revealing that multisensory processing is indeed possible when sound is perceived through a CI. Interestingly, Schorr et al. (2005) observed that most children implanted after 30 months of age did not perceive audio-visual fusions, suggesting a sensitive period for multisensory integration. Subsequent studies conducted in postlingually deaf CI users confirmed audio-visual integration for speech materials in this population (Rouger et al., 2008, 2007; Stropahl et al., 2015b; Tremblay et al., 2010), with evidence for stronger audio-visual integration in CI users compared to hearing controls (Rouger et al., 2007; Stropahl et al., 2015a). Research is now starting to address the possible neural correlates of this multisensory interaction during processing of speech-related information using PET (Song et al., 2015; Strelnikov et al., 2015) as well as fNIRS (van de Rijt et al., 2016).

In more recent years, research on audio-visual integration in CI users has expanded beyond speech materials. Gilley et al. (2010) tested prelingually deaf children who received a CI early (around 2.8 ± 0.8 years of age, N = 8) or late (around 7.8 ± 2.1 years of age, N = 8), together with hearing children of comparable age (N = 9). All performed a simple detection task on the onset of visual, auditory or audio-visual stimuli. Notably, all groups were faster in the audio-visual condition than in the unisensory conditions, although the mechanisms underlying this advantage may have differed for the late-implanted children (as revealed by the absence of violation of Miller's race model in that group).
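For readers unfamiliar with this test, the sketch below (Python; a simplified illustration under our own assumptions, not the procedure of Gilley et al., 2010) evaluates Miller's race-model inequality: at any time t, the race model bounds the audio-visual CDF by the sum of the two unisensory CDFs, so positive differences indicate coactivation rather than mere statistical facilitation.

import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, percentiles=np.arange(5, 100, 5)):
    # Probe times taken from quantiles of the redundant (audio-visual) RT distribution.
    t = np.percentile(rt_av, percentiles)

    def cdf(rts, times):
        # Empirical cumulative distribution: proportion of RTs at or below each time.
        rts = np.sort(np.asarray(rts, dtype=float))
        return np.searchsorted(rts, times, side="right") / rts.size

    # Race-model upper bound: P(RT_A <= t) + P(RT_V <= t), capped at 1.
    bound = np.minimum(cdf(rt_a, t) + cdf(rt_v, t), 1.0)
    # Positive values indicate a violation of the race model (coactivation).
    violation = cdf(rt_av, t) - bound
    return t, violation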
A similar approach has recently been adopted with young and old postlingually deaf adults using a CI. Schierholz et al. (2015) investigated multisensory interactions using a simple speeded response task on basic auditory, visual and audio-visual stimuli. The results demonstrated a redundant signals effect for both groups of CI participants, as well as for the young and old hearing control groups: response times to audio-visual stimuli were consistently faster than response times to single visual or auditory stimuli. When comparing the non-spatial benefit observed in the present study with these two previous reports, it is important to keep
in mind that both Gilley et al. (2010) and Schierholz et al. (2015) used a simple detection task, with the auditory and visual signals always presented at the same spatial location. By contrast, in the present study the visual task entailed a discrimination of target elevation, and the auditory and visual signals were not always presented at the same location. Because a discrimination task was required, our result is the first to document improved access to stimulus representation, rather than merely increased responsiveness to stimulus onset. Moreover, the fact that participants were faster to respond in the audio-visual condition regardless of sound location indicates that the facilitation occurred in the temporal domain. This is compatible with previous reports in hearing participants showing that visual stimuli occurring at the time of an abrupt sound can be discriminated more easily (e.g., Andersen and Mamassian, 2008; Ngo and Spence, 2010; Noesselt et al., 2008; Van der Burg et al., 2008). While it remains to be ascertained to what extent this non-spatial multisensory cueing benefit generalises across tasks, and to what extent it involves mechanisms similar to those described for normal hearing adults, it is noteworthy that such a multisensory benefit can emerge even when the sound is conveyed to the brain through an artificial transduction mechanism. This reveals a benefit of auditory re-afferentation that has remained largely unexplored until now.

5. Conclusions

The present study examined the effects of monaural CI use on the deployment of spatial and non-spatial multisensory attention. We suggest that the deficit observed for spatial MSC mechanisms reflects the spatial-hearing difficulty of this CI population, and argue that auditory spatial deficits in people with a unilateral CI impact more than auditory processing alone. These findings are relevant not just for research on the outcome of cochlear implantation, but also for other conditions of hearing loss associated with spatial-hearing deficits, for example hearing-aid users (Ahlstrom et al., 2009; Van den Bogaert et al., 2006), people with single-sided deafness (Távora-Vieira et al., 2015) and people with tinnitus (An et al., 2012). Critically, the finding that the monaural CI users in the present study demonstrated a reliable non-spatial multisensory enhancement confirms that some aspects of multisensory integration remain possible and efficient in this group. This is important because it suggests that other instances of multisensory integration that do not rely on spatial hearing may similarly be intact in monaural CI users, an idea that offers further promise for the future.

Acknowledgements

We are grateful to all the CI users who participated in the study. We are very grateful to one anonymous reviewer who generously commented on a previous version of this manuscript. This work was realised thanks to the support of the Fondazione Cassa di Risparmio di Trento e Rovereto to the Center for Mind/Brain Sciences (CIMeC).

References

Ahlstrom, J.B., Horwitz, A.R., Dubno, J.R., 2009. Spatial benefit of bilateral hearing aids. Ear Hear. 30, 203–218. http://dx.doi.org/10.1097/AUD.0b013e31819769c1.
An, Y.-H., Lee, L.H., Yoon, S.W., Jin, S.Y., Shim, H.J., 2012. Does tinnitus affect the sound localization ability? Otol. Neurotol. 33, 692–698. http://dx.doi.org/10.1097/MAO.0b013e31825952e9.
Andersen, T.S., Mamassian, P., 2008. Audiovisual integration of stimulus transients. Vis. Res. 48, 44–51. http://dx.doi.org/10.1016/j.visres.2008.08.018.
Apolone, G., Mosconi, P., Ware, J.J., 2000. Questionario sullo stato di salute mentale. Manuale d'uso e guida all'interpretazione dei risultati [Mental health status questionnaire: user manual and guide to the interpretation of results]. Guerini e Associati Editore, Milano.
Beijen, J.-W., Snik, A.F.M., Mylanus, E.A.M., 2007. Sound localization ability of young children with bilateral cochlear implants. Otol. Neurotol. 28, 479–485. http://dx.doi.org/10.1097/MAO.0b013e3180430179.
Bergeson, T.R., Pisoni, D.B., Davis, R.A.O., 2005. Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants. Ear Hear. 26, 149–164.
Bergeson, T.R., Pisoni, D.B., Davis, R.A.O., 2003. A longitudinal study of audiovisual speech perception by children with hearing loss who have cochlear implants. Volta Rev. 103, 347–370.
Bisiach, E., Cornacchia, L., Sterzi, R., Vallar, G., 1984. Disorders of perceived auditory lateralization after lesions of the right hemisphere. Brain 107 (Pt 1), 37–52.
Brimijoin, W.O., McShefferty, D., Akeroyd, M.A., 2010. Auditory and visual orienting responses in listeners with and without hearing-impairment. J. Acoust. Soc. Am. 127, 3678–3711. http://dx.doi.org/10.1121/1.3409488.
Buhagiar, R., Lutman, M.E., Brinton, J.E., Eyles, J., 2004. Localization performance of unilateral cochlear implant users for speech, tones and noise. Cochlear Implants Int. 5, 96–104. http://dx.doi.org/10.1179/cim.2004.5.3.96.
Butler, R.A., Humanski, R.A., Musicant, A.D., 1990. Binaural and monaural localization of sound in two-dimensional space. Perception 19, 241–256. http://dx.doi.org/10.1068/p190241.
Ching, T.Y.C., van Wanrooy, E., Dillon, H., 2007. Binaural-bimodal fitting or bilateral implantation for managing severe to profound deafness: a review. Trends Amplif. 11, 161–192. http://dx.doi.org/10.1177/1084713807304357.
Frassinetti, F., Bolognini, N., Bottari, D., Bonora, A., Làdavas, E., 2005. Audiovisual integration in patients with visual deficit. J. Cogn. Neurosci. 17, 1442–1452. http://dx.doi.org/10.1162/0898929054985446.
Frassinetti, F., Pavani, F., Làdavas, E., 2002. Acoustical vision of neglected stimuli: interaction among spatially converging audiovisual inputs in neglect patients. J. Cogn. Neurosci. 14, 62–69. http://dx.doi.org/10.1162/089892902317205320.
Gifford, R.H., Grantham, D.W., Sheffield, S.W., Davis, T.J., Dwyer, R., Dorman, M.F., 2014. Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear. Hear. Res. 312, 28–37. http://dx.doi.org/10.1016/j.heares.2014.02.007.
Gilley, P.M., Sharma, A., Mitchell, T.V., Dorman, M.F., 2010. The influence of a sensitive period for auditory-visual integration in children with cochlear implants. Restor. Neurol. Neurosci. 28, 207–218. http://dx.doi.org/10.3233/RNN-2010-0525.
Grantham, D.W., Ashmead, D.H., Ricketts, T.A., Labadie, R.F., Haynes, D.S., 2007. Horizontal-plane localization of noise and speech signals by postlingually deafened adults fitted with bilateral cochlear implants. Ear Hear. 28, 524–541. http://dx.doi.org/10.1097/AUD.0b013e31806dc21a.
Grantham, D.W., Ricketts, T.A., Ashmead, D.H., Labadie, R.F., Haynes, D.S., 2008. Localization by postlingually deafened adults fitted with a single cochlear implant. Laryngoscope 118, 145–151. http://dx.doi.org/10.1097/MLG.0b013e31815661f9.
Harris, J., Kamke, M.R., 2014. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users. Int. J. Pediatr. Otorhinolaryngol. 78, 1908–1916. http://dx.doi.org/10.1016/j.ijporl.2014.08.023.
Hartmann, W.M., Rakerd, B., Gaalaas, J.B., 1998. On the source-identification method. J. Acoust. Soc. Am. 104, 3546–3557.
Hay-McCutcheon, M.J., Pisoni, D.B., Hunt, K.K., 2009. Audiovisual asynchrony detection and speech perception in hearing-impaired listeners with cochlear implants: a preliminary analysis. Int. J. Audiol. 48, 321–333. http://dx.doi.org/10.1080/14992020802644871.
Hillyard, S.A., Störmer, V.S., Feng, W., Martinez, A., McDonald, J.J., 2015. Cross-modal orienting of visual attention. Neuropsychologia 83, 170–178. http://dx.doi.org/10.1016/j.neuropsychologia.2015.06.003.
Hinderink, J.B., Krabbe, P.F., Van Den Broek, P., 2000. Development and application of a health-related quality-of-life instrument for adults with cochlear implants: the Nijmegen cochlear implant questionnaire. Otolaryngol. Head. Neck Surg. 123, 756–765. http://dx.doi.org/10.1067/mhn.2000.108203.
Hughes, K.C., Galvin, K.L., 2013. Measuring listening effort expended by adolescents and young adults with unilateral or bilateral cochlear implants or normal hearing. Cochlear Implants Int. 14, 121–129. http://dx.doi.org/10.1179/1754762812Y.0000000009.
Isaiah, A., Vongpaisal, T., King, A.J., Hartley, D.E.H., 2014. Multisensory training improves auditory spatial processing following bilateral cochlear implantation. J. Neurosci. 34, 11119–11130. http://dx.doi.org/10.1523/JNEUROSCI.4767-13.2014.
Kaiser, A.R., Kirk, K.I., Lachs, L., Pisoni, D.B., 2003. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants. J. Speech Lang. Hear. Res. 46, 390–404.
Kamke, M.R., Van Luyn, J., Constantinescu, G., Harris, J., 2014. Contingent capture of involuntary visual spatial attention does not differ between normally hearing children and proficient cochlear implant users. Restor. Neurol. Neurosci. 32, 799–811. http://dx.doi.org/10.3233/RNN-140399.
Kerber, S., Seeber, B.U., 2012. Sound localization in noise by normal-hearing listeners and cochlear implant users. Ear Hear. 33, 445–457. http://dx.doi.org/10.1097/AUD.0b013e318257607b.
Koelewijn, T., Bronkhorst, A., Theeuwes, J., 2009. Auditory and visual capture during focused visual attention. J. Exp. Psychol. Hum. Percept. Perform. 35, 1303–1315. http://dx.doi.org/10.1037/a0013901.
Lachs, L., Pisoni, D.B., Kirk, K.I., 2001. Use of audiovisual information in speech perception by prelingually deaf children with cochlear implants: a first report. Ear Hear. 22, 236–251.
Laurienti, P.J., Burdette, J.H., Maldjian, J.A., Wallace, M.T., 2006. Enhanced multisensory integration in older adults. Neurobiol. Aging 27 (8), 1155–1163. http://dx.doi.org/10.1016/j.neurobiolaging.2005.05.024.
Litovsky, R.Y., Goupell, M.J., Godar, S., Grieco-Calub, T., Jones, G.L., Garadat, S.N., Agrawal, S., Kan, A., Todd, A., Hess, C., Misurelli, S., 2012. Studies on bilateral cochlear implants at the University of Wisconsin Binaural Hearing and Speech Lab. J. Am. Acad. Audiol. 1–19. http://dx.doi.org/10.3766/jaaa.23.6.9.
Litovsky, R.Y., Jones, G.L., Agrawal, S., van Hoesel, R., 2010. Effect of age at onset of deafness on binaural sensitivity in electric hearing in humans. J. Acoust. Soc. Am. 127, 400–414. http://dx.doi.org/10.1121/1.3257546.
Litovsky, R.Y., Parkinson, A., Arcaroli, J., 2009. Spatial hearing and speech intelligibility in bilateral cochlear implant users. Ear Hear. 30, 419–431. http://dx.doi.org/10.1097/AUD.0b013e3181a165be.
Lloyd, D.M., Merat, N., Mcglone, F., Spence, C., 2003. Crossmodal links between audition and touch in covert endogenous spatial attention. Percept. Psychophys. 65, 901–924. http://dx.doi.org/10.3758/BF03194823.
Luntz, M., Brodsky, A., Watad, W., Weiss, H., Tamir, A., Pratt, H., 2005. Sound localization in patients with unilateral cochlear implants. Cochlear Implants Int. 6, 1–9. http://dx.doi.org/10.1179/cim.2005.6.1.1.
Majdak, P., Goupell, M.J., Laback, B., 2011. Two-dimensional localization of virtual sound sources in cochlear-implant listeners. Ear Hear. 32, 198–208. http://dx.doi.org/10.1097/AUD.0b013e3181f4dfe9.
Mathôt, S., Schreij, D., Theeuwes, J., 2012. OpenSesame: an open-source, graphical experiment builder for the social sciences. Behav. Res. Methods 44, 314–324. http://dx.doi.org/10.3758/s13428-011-0168-7.
McGurk, H., MacDonald, J., 1976. Hearing lips and seeing voices. Nature 264, 746–748.
Moore, D.R., Shannon, R.V., 2009. Beyond cochlear implants: awakening the deafened brain. Nat. Neurosci. 12, 686–691. http://dx.doi.org/10.1038/nn.2326.
Murphy, J., Summerfield, A.Q., O'Donoghue, G.M., Moore, D.R., 2011. Spatial hearing of normally hearing and cochlear implanted children. Int. J. Pediatr. Otorhinolaryngol. 75, 489–494. http://dx.doi.org/10.1016/j.ijporl.2011.01.002.
Nava, E., Bottari, D., Bonfioli, F., Beltrame, M.A., Pavani, F., 2009a. Spatial hearing with a single cochlear implant in late-implanted adults. Hear. Res. 255, 91–98. http://dx.doi.org/10.1016/j.heares.2009.06.007.
Nava, E., Bottari, D., Portioli, G., Bonfioli, F., Beltrame, M.A., Formigoni, P., Pavani, F., 2009b. Hearing again with two ears: recovery of spatial hearing after bilateral cochlear implantation. Neuropsychologia 47, 928–932.
Ngo, M.K., Spence, C., 2010. Crossmodal facilitation of masked visual target identification. Atten. Percept. Psychophys. 72, 1938–1947. http://dx.doi.org/10.3758/APP.72.7.1938.
Noble, W., Tyler, R., Dunn, C., Bhullar, N., 2009. Unilateral and bilateral cochlear implants and the implant-plus-hearing-aid profile: comparing self-assessed and measured abilities. Int. J. Audiol. 47 (8), 505–514. http://dx.doi.org/10.1080/14992020802070770.
Noesselt, T., Bergmann, D., Hake, M., Heinze, H.-J., Fendrich, R., 2008. Sound increases the saliency of visual events. Brain Res. 1220, 157–163. http://dx.doi.org/10.1016/j.brainres.2007.12.060.
Ocklenburg, S., Hirnstein, M., Hausmann, M., Lewald, J., 2010. Auditory space perception in left- and right-handers. Brain Cogn. 72, 210–217. http://dx.doi.org/10.1016/j.bandc.2009.08.013.
Oldfield, S.R., Parker, S.P., 1986. Acuity of sound localisation: a topography of auditory space. III. Monaural hearing conditions. Perception 15, 67–81.
Pavani, F., Husain, M., Làdavas, E., Driver, J., 2004. Auditory deficits in visuospatial neglect patients. Cortex 40, 347–365.
Pavani, F., Làdavas, E., Driver, J., 2005. Gaze direction modulates auditory spatial deficits in stroke patients with neglect. Cortex 41, 181–188.
Pavani, F., Meneghello, F., Làdavas, E., 2001. Deficit of auditory space perception in patients with visuospatial neglect. Neuropsychologia 39, 1401–1409.
Potts, L.G., Skinner, M.W., Litovsky, R.Y., Strube, M.J., Kuk, F., 2009. Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing). J. Am. Acad. Audiol. 20, 353–373. http://dx.doi.org/10.3766/jaaa.20.6.4.
Prime, D.J., McDonald, J.J., Green, J., Ward, L.M., 2008. When cross-modal spatial attention fails. Can. J. Exp. Psychol. 62, 192–197. http://dx.doi.org/10.1037/1196-1961.62.3.192.
Rouger, J., Fraysse, B., Deguine, O., Barone, P., 2008. McGurk effects in cochlear-implanted deaf subjects. Brain Res. 1188, 87–99. http://dx.doi.org/10.1016/j.brainres.2007.10.049.
Rouger, J., Lagleyre, S., Fraysse, B., Deneve, S., Deguine, O., Barone, P., 2007. Evidence that cochlear-implanted deaf patients are better multisensory integrators. Proc. Natl. Acad. Sci. U. S. A. 104, 7295–7300. http://dx.doi.org/10.1073/pnas.0609419104.
Schierholz, I., Finke, M., Schulte, S., Hauthal, N., Kantzke, C., Rach, S., Büchner, A., Dengler, R., Sandmann, P., 2015. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users. Hear. Res. 328, 133–147. http://dx.doi.org/10.1016/j.heares.2015.08.009.
Schmitt, M., Postma, A., de Haan, E.H.F., 2010. Cross-modal exogenous attention and distance effects in vision and hearing. Eur. J. Cogn. Psychol. 13, 343–368. http://dx.doi.org/10.1080/09541440126272.
Schorr, E.A., Fox, N.A., van Wassenhove, V., Knudsen, E.I., 2005. Auditory-visual fusion in speech perception in children with cochlear implants. Proc. Natl. Acad. Sci. U. S. A. 102, 18748–18750. http://dx.doi.org/10.1073/pnas.0508862102.
Seeber, B.U., Baumann, U., Fastl, H., 2004. Localization ability with bimodal hearing aids and bilateral cochlear implants. J. Acoust. Soc. Am. 116, 1698–1709.
Seeber, B.U., Fastl, H., 2008. Localization cues with bilateral cochlear implants. J. Acoust. Soc. Am. 123, 1030–1042. http://dx.doi.org/10.1121/1.2821965.
Slattery III, W.H., Middlebrooks, J.C., 1994. Monaural sound localization: acute versus chronic unilateral impairment. Hear. Res. 75, 38–46. http://dx.doi.org/10.1016/0378-5955(94)90053-1.
Song, J.-J., Lee, H.-J., Kang, H., Lee, D.S., Chang, S.O., Oh, S.-H., 2015. Effects of congruent and incongruent visual cues on speech perception and brain activity in cochlear implant users. Brain Struct. Funct. 220, 1109–1125. http://dx.doi.org/10.1007/s00429-013-0704-6.
Spence, C., 2010. Crossmodal spatial attention. Ann. N. Y. Acad. Sci. 1191, 182–200. http://dx.doi.org/10.1111/j.1749-6632.2010.05440.x.
Spence, C., Driver, J., 1997. Audiovisual links in exogenous covert spatial orienting. Percept. Psychophys. 59, 1–22.
Spence, C., Driver, J., 1996. Audiovisual links in endogenous covert spatial attention. J. Exp. Psychol. Hum. Percept. Perform. 22, 1005–1030.
Spence, C., Pavani, F., Driver, J., 2000. Crossmodal links between vision and touch in covert endogenous spatial attention. J. Exp. Psychol. Hum. Percept. Perform. 26, 1298–1319. http://dx.doi.org/10.1037//0096-1523.26.4.1298.
Spence, C., Ho, C., 2012. The Multisensory Driver. Ashgate Publishing, Ltd.
Störmer, V.S., McDonald, J.J., Hillyard, S.A., 2009. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli. Proc. Natl. Acad. Sci. U. S. A. 106, 22456–22461. http://dx.doi.org/10.1073/pnas.0907573106.
Strelnikov, K., Rouger, J., Lagleyre, S., Fraysse, B., Demonet, J.F., Deguine, O., Barone, P., 2015. Increased audiovisual integration in cochlear-implanted deaf patients: independent components analysis of longitudinal positron emission tomography data. Eur. J. Neurosci. 41, 677–685. http://dx.doi.org/10.1111/ejn.12827.
Stropahl, M., Plotz, K., Schönfeld, R., Lenarz, T., Sandmann, P., Yovel, G., De Vos, M., Debener, S., 2015a. Cross-modal reorganization in cochlear implant users: auditory cortex contributes to visual face processing. NeuroImage 121, 159–170. http://dx.doi.org/10.1016/j.neuroimage.2015.07.062.
Stropahl, M., Schellhardt, S., Debener, S., 2015b. McGurk stimuli for the investigation of multisensory integration in cochlear implant users: the Oldenburg Audio Visual Speech Stimuli (OLAVS). Psychon. Bull. Rev. 1–10. http://dx.doi.org/10.3758/s13423-016-1148-9.
Távora-Vieira, D., De Ceulaer, G., Govaerts, P.J., Rajan, G.P., 2015. Cochlear implantation improves localization ability in patients with unilateral deafness. Ear Hear. 36, e93–e98. http://dx.doi.org/10.1097/AUD.0000000000000130.
Tremblay, C., Champoux, F., Lepore, F., Théoret, H., 2010. Audiovisual fusion and cochlear implant proficiency. Restor. Neurol. Neurosci. 28, 283–291. http://dx.doi.org/10.3233/RNN-2010-0498.
Tyler, R.S., Noble, W., Dunn, C., Witt, S., 2009. Some benefits and limitations of binaural cochlear implants and our ability to measure them. Int. J. Audiol. 45, 113–119. http://dx.doi.org/10.1080/14992020600783095.
van de Rijt, L.P.H., Van Opstal, A.J., Mylanus, E.A.M., Straatman, L.V., Hu, H.Y., Snik, A.F.M., Van Wanrooij, M.M., 2016. Temporal cortex activation to audiovisual speech in normal-hearing and cochlear implant users measured with functional near-infrared spectroscopy. Front. Hum. Neurosci. 10, 48. http://dx.doi.org/10.3389/fnhum.2016.00048.
Van den Bogaert, T., Klasen, T.J., Moonen, M., Van Deun, L., Wouters, J., 2006. Horizontal localization with bilateral hearing aids: without is better than with. J. Acoust. Soc. Am. 119, 515–612. http://dx.doi.org/10.1121/1.2139653.
Van der Burg, E., Olivers, C.N.L., Bronkhorst, A.W., Theeuwes, J., 2008. Pip and pop: nonspatial auditory signals improve spatial visual search. J. Exp. Psychol. Hum. Percept. Perform. 34, 1053–1065. http://dx.doi.org/10.1037/0096-1523.34.5.1053.
van Hoesel, R.J.M., Tyler, R.S., 2003. Speech perception, localization, and lateralization with bilateral cochlear implants. J. Acoust. Soc. Am. 113, 1617–1714. http://dx.doi.org/10.1121/1.1539520.
Van Wanrooij, M.M., Van Opstal, A.J., 2004. Contribution of head shadow and pinna cues to chronic monaural sound localization. J. Neurosci. 24, 4163–4171. http://dx.doi.org/10.1523/JNEUROSCI.0048-04.2004.
van Zon, A., Smulders, Y.E., Stegeman, I., Ramakers, G.G.J., Kraaijenga, V.J.C., Koenraads, S.P.C., Zanten, G.A.V., Rinia, A.B., Stokroos, R.J., Free, R.H., Frijns, J.H.M., Huinck, W.J., Mylanus, E.A.M., Tange, R.A., Smit, A.L., Thomeer, H.G.X.M., Topsakal, V., Grolman, W., 2016. Stable benefits of bilateral over unilateral cochlear implantation after two years: a randomized controlled trial. Laryngoscope 1–8. http://dx.doi.org/10.1002/lary.26239.
Wilson, B.S., Dorman, M.F., 2008. Cochlear implants: a remarkable past and a brilliant future. Hear. Res. 242, 3–21. http://dx.doi.org/10.1016/j.heares.2008.06.005.
Winn, M.B., Rhone, A.E., Chatterjee, M., Idsardi, W.J., 2013. The use of auditory and visual context in speech perception by listeners with normal hearing and listeners with cochlear implants. Front. Psychol. 4, 824. http://dx.doi.org/10.3389/fpsyg.2013.00824.
Zheng, Y., Godar, S.P., Litovsky, R.Y., 2015. Development of sound localization strategies in children with bilateral cochlear implants. PLoS ONE 10, e0135790. http://dx.doi.org/10.1371/journal.pone.0135790.
