Hearing Research 291 (2012) 41–51
Research paper
Noise reduction technologies implemented in head-worn preprocessors for improving cochlear implant performance in reverberant noise fields

King Chung a,*, Lance Nelson b, Melissa Teske b

a Department of Allied Health and Communication Disorders, Northern Illinois University, 323 Wirtz Hall, DeKalb, IL 60115, USA
b Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Dr., West Lafayette, IN 47906, USA
Article info

Article history: Received 9 March 2012; received in revised form 29 April 2012; accepted 7 June 2012; available online 28 June 2012.

Abstract
The purpose of this study was to investigate whether a multichannel adaptive directional microphone and a modulation-based noise reduction algorithm could enhance cochlear implant performance in reverberant noise fields. A hearing aid was modified to output electrical signals (ePreprocessor) and a cochlear implant speech processor was modified to receive electrical signals (eProcessor). The ePreprocessor was programmed to have a flat frequency response and linear amplification. Cochlear implant listeners wore the ePreprocessor–eProcessor system in three reverberant noise fields: 1) one noise source with variable locations; 2) three noise sources with variable locations; and 3) eight evenly spaced noise sources from 0° to 360°. Listeners' speech recognition scores were tested when the ePreprocessor was programmed to an omnidirectional microphone (OMNI), an omnidirectional microphone plus noise reduction algorithm (OMNI + NR), and an adaptive directional microphone plus noise reduction algorithm (ADM + NR). They were also tested with their own cochlear implant speech processors (CI_OMNI) in the three noise fields. Additionally, listeners rated overall sound quality preferences on recordings made in the noise fields. Results indicated that ADM + NR produced the highest speech recognition scores and the most preferable ratings in all noise fields. Factors requiring attention in the hearing aid–cochlear implant integration process are discussed. © 2012 Elsevier B.V. All rights reserved.
1. Introduction

Cochlear implants are sophisticated hearing devices designed to deliver electrical stimulation to the auditory system and can improve the hearing sensitivity of people with severe to profound hearing loss to within normal limits. Over the past 30 years, cochlear implant hardware designs and surgical techniques have undergone many revisions. One focus of current research is to enhance users' performance in background noise by employing signal processing algorithms to improve signal-to-noise ratios (SNRs) and to reduce noise interference (Chung et al., 2004, 2006; Spriet et al., 2007; Hu and Loizou, 2008, 2010; Chung and Zeng, 2009; Kokkinakis and Loizou, 2010). This study explored the effectiveness of a multichannel adaptive directional microphone and a modulation-based noise reduction algorithm in improving cochlear implant listeners' speech recognition ability and overall sound quality preferences in three reverberant noise fields.
* Corresponding author. Tel.: +1 815 753 8033. E-mail address: [email protected] (K. Chung).
http://dx.doi.org/10.1016/j.heares.2012.06.003
Modern cochlear implants have been shown to allow children with severe to profound hearing loss to achieve speech, language, and educational milestones at ages comparable to those of normal-hearing children (Miyamoto et al., 2008; Nicholas and Geers, 2007). Cochlear implants also are reported to improve the quality of life and to provide new opportunities for social, vocational, and personal development for hearing-impaired adults (Zhao et al., 2008). Although their speech recognition ability in quiet has improved steadily over the years (Zeng et al., 2008), cochlear implant listeners still face tremendous challenges in background noise. They often require 10–20 dB higher SNRs to understand the same amount of information as people with normal hearing (Nelson et al., 2003; Stickney et al., 2004; Zeng et al., 2008). In addition, cochlear implant listeners tend to have more difficulty understanding speech in noise with fluctuating temporal envelopes (e.g., speech babble) than in noise with relatively low temporal variations (e.g., refrigerator hum; Dorman et al., 1998; Fu and Shannon, 1999; Zeng and Galvin, 1999; Nelson et al., 2003; Stickney et al., 2004). These challenges potentially subject cochlear implant users to the negative consequences associated with having a hearing loss, such as frustration, fatigue, social isolation, low quality of life, and low employment rates for adults
(Arlinger, 2003; Barton et al., 2005; Gates and Mills, 2005; Nachtegaal et al., 2009); and speech and language delays, low academic achievement, low energy levels, and lower self-esteem for children (Yoshinaga-Itano et al., 1998; Hick and Tharpe, 2002; Stacey et al., 2006).

Understanding speech in background noise is a common challenge for both hearing aid and cochlear implant users, and various technologies have been developed to reduce background noise. Directional microphones, which take advantage of the spatial separation between speech and noise, are commonly used in digital hearing aids and implemented in some cochlear implants. First-order directional microphones usually consist of two omnidirectional microphones combined in subtraction mode. They are reported to be effective in enhancing speech recognition and perceived sound quality for both hearing aid and cochlear implant listeners in environments where speech comes from the front and noise comes from other directions (Chung et al., 2006; Chung and Zeng, 2009; Gifford and Revit, 2010). The average SNR improvements for hearing aids are reported to be 3–4 dB in real-world environments (Valente, 1999; Ricketts, 2001; Chung, 2004).

The term "adaptive directional microphones" describes directional microphones whose polar patterns are adjusted automatically. Adaptive directional microphones are reported to be more effective than fixed-pattern directional microphones in laboratory environments with a limited number of noise sources or with dominant sources coming from focused spatial locations (Ricketts and Henry, 2002; Bentler et al., 2004; Valente and Mispagel, 2004; Spriet et al., 2007; Chung and Zeng, 2009). The advantages of adaptive- over fixed-pattern directional microphones usually diminish or disappear as the number of noise sources increases or as the sound field becomes more diffuse (Ricketts and Henry, 2002; Bentler et al., 2004; Valente and Mispagel, 2004; Spriet et al., 2007; Chung and Zeng, 2009). In general, the higher the microphone order (i.e., the more microphones/ports) and the fewer the noise sources, the greater the directional effects (Spriet et al., 2007; Chung and Zeng, 2009; Kokkinakis and Loizou, 2010).

One limitation of directional microphones is that their effectiveness tends to be reduced in reverberant environments. Directional microphones rely on the directions of sound sources and the time differences between the front microphone input and the back microphone input. Sounds from the front (assumed to be signals) enter the front microphone first and the back microphone later because of the time required to travel between the two microphones. A time delay is typically applied to the back microphone input so that cancellation is minimal at the directional microphone output when the back microphone input is subtracted from the front microphone input. Noises from the back, on the other hand, enter the back microphone first and then the front microphone. As the noise entering the back microphone is time delayed, the noise is canceled at the directional microphone output when the back microphone input is subtracted from the front microphone input.
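The delay-and-subtract principle can be sketched as follows; the port spacing, sample rate, and signals here are illustrative assumptions, not parameters of the devices tested in this study:

```python
import numpy as np

FS = 16000             # sample rate in Hz (assumed)
PORT_SPACING = 0.012   # front-to-back port distance in m (assumed)
C = 343.0              # speed of sound in m/s

# Internal electrical delay matched to the acoustic travel time between ports.
DELAY = max(1, int(round(PORT_SPACING / C * FS)))

def directional_output(front: np.ndarray, back: np.ndarray) -> np.ndarray:
    """Delay the back-port signal by the port travel time, then subtract."""
    delayed_back = np.concatenate([np.zeros(DELAY), back[:-DELAY]])
    return front - delayed_back

# A sound from directly behind reaches the back port first and the front port
# DELAY samples later, so the subtraction cancels it; a frontal sound does not
# line up with the delayed back signal and passes through.
rng = np.random.default_rng(0)
rear_noise = rng.standard_normal(FS)
front_port = np.concatenate([np.zeros(DELAY), rear_noise[:-DELAY]])
print(np.max(np.abs(directional_output(front_port, rear_noise))))   # 0.0
```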
In a reverberant environment, however, the distinction between signals from the front and noises from the back is blurred. Signals originating from the front can be reflected by surfaces at the back; the reflected signals enter the back microphone first and thus are canceled at the directional microphone output. Similarly, noises originating from the back can be reflected by surfaces at the front; the reflected signals enter the front microphone first and are not canceled at the directional microphone output. Most studies examining the effects of directional microphones on cochlear implant performance have been conducted in relatively anechoic environments or in environments with unspecified reverberation times (Chung et al., 2004, 2006; Spriet et al., 2007; Hu and Loizou, 2008, 2010; Chung and Zeng, 2009; Gifford and Revit, 2010; Kokkinakis and Loizou, 2010). Studies are needed to examine the effects of directional microphones on cochlear implant performance in reverberant environments.

Another noise reduction strategy commonly used in hearing aids and cochlear implants is noise reduction algorithms. Modulation-based noise reduction algorithms are commonly implemented in hearing aids to increase listening comfort by reducing the overall levels of background noise with relatively small temporal fluctuations (Kuk et al., 2002; Johns et al., 2002; Edwards et al., 1998; Latzel et al., 2003; Schum, 2003; Walden et al., 2000). These algorithms analyze the incoming signal to determine the presence of speech and noise and reduce the gain in frequency channels dominated by noise. The amount of gain reduction differs among algorithms, and it may depend on the overall level, the estimated modulation rate, the estimated SNR in each channel, the frequency importance function of the channel, the hearing aid noise reduction settings, and other factors (Kuk et al., 2002; Alcantara et al., 2003; Boymans and Dreschler, 2000; Tellier et al., 2003; Powers et al., 1999; Bentler and Chiou, 2006).

Modulation-based noise reduction algorithms are capable of reducing the overall noise level, but they cannot improve the SNR within each frequency channel or improve speech understanding for hearing aid users. This is because the same amount of gain reduction is applied to both signal and noise in a frequency channel, and the bandwidths of hearing aid frequency channels are typically wider than the bandwidths of hearing aid users' auditory filters. Modulation-based noise reduction algorithms, however, are reported to reduce noise interference, listening effort, and the aversiveness of sounds, and to increase listening comfort and cognitive capacity in background noise (Boymans and Dreschler, 2000; Bentler and Chiou, 2006; Mueller et al., 2006; Palmer et al., 2006; Sarampalis et al., 2009). When applied to cochlear implants, some modulation-based noise reduction algorithms also are found to improve users' overall sound quality preferences (Chung et al., 2006).

Hu and Loizou (2008) described a similar noise reduction algorithm designed for cochlear implant speech processors using the ACE coding strategy. This strategy divides the incoming signal into multiple filter banks and estimates the SNR in each frequency channel using established sound classification algorithms. It then takes advantage of the discrete nature of cochlear implant stimulation to reduce background noise by stimulating only the electrodes representing frequency channels with estimated SNRs greater than 0 dB (i.e., frequency regions with high levels of background noise are not presented to cochlear implant listeners). This algorithm was reported to improve speech recognition in background noise with limited temporal fluctuations for cochlear implant listeners using ACE coding strategies (Hu and Loizou, 2008, 2010; Dawson et al., 2011; Mauger et al., 2012).
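A minimal sketch of this channel-selection idea follows; the per-channel levels are hypothetical, and the actual published implementation includes SNR estimation stages not shown here:

```python
import numpy as np

def select_channels(mixture_db: np.ndarray, noise_db: np.ndarray,
                    criterion_db: float = 0.0) -> np.ndarray:
    """Stimulate only the electrodes whose channels have an estimated SNR
    above the criterion; the remaining channels are dropped for this frame."""
    return (mixture_db - noise_db) > criterion_db

# Hypothetical per-channel envelope and noise-floor estimates (dB) for one frame:
mixture = np.array([62.0, 58.0, 55.0, 60.0, 50.0, 48.0])
noise = np.array([57.0, 59.0, 50.0, 52.0, 51.0, 40.0])
print(select_channels(mixture, noise))   # [ True False  True  True False  True]
```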
Currently, two of the three major cochlear implant manufacturers do not employ directional microphones or noise reduction algorithms in their speech processors. If hearing aid technologies can be used to clean up the signal before it is sent to the cochlear implant speech processor (by analogy, pre-washing clothes before putting them into the washer), these technologies can help cochlear implant users improve speech understanding and enhance listening comfort in the presence of background noise. This application also is the fastest way to deliver advanced signal processing technologies to cochlear implants. To date, both cochlear implant manufacturers that do not currently offer directional microphones have shown interest in adopting hearing aid technologies as a front-end preprocessor to enhance cochlear implant performance (Buchner, 2011).
Chung et al. (2004, 2006) and Chung and Zeng (2009) recorded speech and noise signals processed by directional microphones, modulation-based noise reduction algorithms, and single-channel adaptive directional microphones implemented in different hearing aids, and then tested the speech recognition and overall sound quality preferences of cochlear implant listeners. They reported that these signal processing algorithms improved cochlear implant listeners' speech recognition scores and perceived sound quality. As all recordings were made in environments with minimal reverberation, it is unclear whether directional microphones can help cochlear implant listeners in reverberant environments. In addition, whether multichannel adaptive directional microphones can help cochlear implant users understand speech in background noise has rarely been studied. It would therefore be valuable to examine whether these algorithms can help cochlear implant users in reverberant environments with background noise. Further, hearing aids and cochlear implants are two very different instruments. If hearing aid technologies are to be integrated into cochlear implant speech processors, the effectiveness of the algorithms needs to be examined, and technical issues need to be addressed and resolved before the integration.

The purpose of the current study was to examine the effectiveness of multichannel adaptive directional microphones and modulation-based noise reduction algorithms implemented in head-worn preprocessors for improving cochlear implant listeners' speech recognition ability and perceived sound quality in reverberant sound fields. These noise reduction strategies were examined because their application is not limited to particular coding strategies or particular cochlear implant speech processors. The head-worn hearing aid preprocessor (ePreprocessor) outputted electrical signals which were fed into cochlear implant speech processors modified to receive electrical inputs (eProcessor). Cochlear implant listeners were tested when they wore the ePreprocessor–eProcessor system or their own cochlear implants. The results can shed light on the effectiveness of the signal processing algorithms for cochlear implant users and on potential complications that need to be addressed should hearing aid signal processing algorithms be integrated into cochlear implant speech processors in the future.

2. Methods

Five digital hearing aids were modified to have electrical outputs (ePreprocessors), and three cochlear implant speech processors were modified to receive electrical inputs (eProcessors). The ePreprocessors were modified from the same model of hearing aid, and the eProcessors were modified from the same model of cochlear implant speech processor. Cochlear implant listeners' speech maps were downloaded into the eProcessors. The characteristics of the ePreprocessors were adjusted and fine-tuned so that no perceivable difference existed between each listener's own cochlear implant speech processor and the ePreprocessor–eProcessor system in quiet and when listening to recorded speech in background noise. Listeners then wore the ePreprocessor–eProcessor system and listened to Hearing in Noise Test sentences (HINT; Nilsson et al., 1994) presented in three reverberant noise fields. They repeated the sentences when the ePreprocessors were programmed to an omnidirectional microphone (OMNI), OMNI plus a modulation-based noise reduction algorithm (OMNI + NR), and an adaptive directional microphone plus the noise reduction algorithm (ADM + NR).
Listeners’ speech recognition in quiet and in background noise was also tested when they wore their own cochlear implants. Additionally, listeners rated the sound quality of speech recorded through the three ePreprocessor settings.
2.1. Participants

A total of eighteen cochlear implant listeners (13 females and 5 males) wearing MED-EL Tempo+ speech processors participated in the study. Their ages ranged from 20 to 76 years, with an average of 56 years. All listeners had had their cochlear implants turned on for at least 12 months prior to participating in the study, and the average cochlear implant use time was 4.9 years. Five listeners were implanted with the Pulsar ci100 and thirteen with the Combi 40+ electrode arrays.

When a cochlear implant listener arrived, telemetry testing was performed to make sure that the internal components were functioning properly. None of the cochlear implant listeners had unusual telemetry findings on their active electrodes. Listeners with bilateral implants were also asked which ear was the better one. The cochlear implant on the "poorer" side or the hearing aid on the non-implanted side was removed prior to testing.

2.2. ePreprocessor

A total of five hearing aid ePreprocessors with electrical outputs were created for this study. The receivers of these preprocessors were replaced by electrical cables with CS44 jacks at the end. The ePreprocessors were modified from digital hearing aids with four compression channels, eight frequency shaping bands, and eight noise reduction channels. They also had four-channel adaptive directional microphones that could adapt to a different polar pattern in each frequency channel at any instant.

The adaptive directional microphones implemented in the ePreprocessor took advantage of the facts that 1) speech sounds are modulated; 2) environmental sounds are rarely modulated; and 3) environmental sounds fill in the gaps between modulated speech sounds. The algorithms use maxima detectors to detect the peak levels and minima detectors to estimate the trough levels of the incoming signal in each frequency channel. The difference between the maxima and minima was inferred as the SNR in the frequency channel. The adaptive directional microphone algorithms were designed to automatically switch to the polar pattern that would result in the highest SNR in each channel (Flynn, 2004; Chung, 2004).

The modulation-based noise reduction algorithm implemented in the ePreprocessor used similar detectors to estimate the SNR in each frequency channel. If enabled, it started to reduce background noise when the overall level of the incoming signal was greater than 65 dB SPL. The amount of noise reduction was proportional to the overall level and inversely proportional to the modulation depth (Flynn, 2004; Chung, 2004).

The ePreprocessors were programmed to 1) the omnidirectional microphone mode in all four channels (OMNI); 2) the omnidirectional microphone mode plus the noise reduction algorithm (OMNI + NR); and 3) the adaptive directional microphone mode in all four channels plus the noise reduction algorithm (ADM + NR). The maximum adaptation time for the adaptive directional microphone was measured to be 12 s. As for the noise reduction algorithm, the maximum time between the onset of noise and the start of gain reduction was 9 s, and the maximum time between the start and end of gain reduction was approximately 2 s in the noise fields tested.

The ePreprocessors had feedback cancellation algorithms which acted on feedback signals above 2000 Hz. The algorithm could not be disabled in the hearing aid fitting software. As the ePreprocessors had electrical outputs and feedback or feedback-like signals (i.e., pure tones) were not encountered, it is unlikely that the feedback cancellation algorithms were activated during the study.
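A minimal sketch of the maxima/minima-detector principle described in Section 2.2 follows; the time constant is an illustrative assumption, as the hearing aid's actual detector constants were not published:

```python
import numpy as np

def estimate_channel_snr(envelope_db: np.ndarray, release: float = 0.999) -> float:
    """Maxima/minima-detector SNR estimate for one frequency channel: the
    peak tracker follows level increases immediately and decays slowly; the
    trough tracker follows decreases immediately and rises slowly. Their
    difference approximates the channel SNR in dB."""
    peak = trough = float(envelope_db[0])
    for level in envelope_db[1:]:
        peak = level if level > peak else release * peak + (1.0 - release) * level
        trough = level if level < trough else release * trough + (1.0 - release) * level
    return peak - trough

# Demo: 70 dB speech bursts over a 55 dB noise floor. The estimate (~12 dB)
# approaches the true 15 dB gap as the trackers' time constants lengthen.
env = np.tile(np.concatenate([np.full(200, 70.0), np.full(200, 55.0)]), 10)
print(round(estimate_channel_snr(env), 1))
```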
2.3. ePreprocessor fitting

Prior to testing the listeners, the frequency responses of the OMNI and ADM modes of each ePreprocessor were programmed while the ePreprocessor was worn on a Knowles Electronic Manikin for Acoustic Research (KEMAR; Burkhard and Sachs, 1975) in a large test chamber with minimal reverberation (reverberation time (RT60) = 125 ms). A nine-loudspeaker array was set up in the middle of the test chamber. Eight of the loudspeakers were arranged every 45° in a circle with a 1 m radius, and the 9th loudspeaker was hung from the ceiling above the center of the eight-loudspeaker array (Fig. 1). Loudspeaker 1 was a powered loudspeaker (Mackie HR824) with a flat frequency response from 39 to 20,000 Hz (±1.5 dB) and a maximum output of 100 dB SPL. Loudspeakers 2–9 were passive loudspeakers (Hafler M5) with flat frequency responses from 70 to 21,000 Hz (±3 dB) and maximum outputs of 100 dB SPL.

A composite noise was presented at 70 dB SPL from Computer 1 to two Delta 1010 sound cards using the Audition 1.0 (Adobe) multitrack function and then to Loudspeaker 1 placed at 0° azimuth. The composite noise had frequency components from 100 to 8000 Hz in 100 Hz intervals. The electrical output of the ePreprocessor was sent to an ER11-F amplifier (Etymotic Research), which provided +20 dB of gain, and then to an external sound card (Sound Blaster Extigy). The frequency response of the ePreprocessor was analyzed using SpectraPlus software on Computer 2. Programming of the ePreprocessor was accomplished using the manufacturer's hearing aid fitting software. A programming cable was attached to the ePreprocessor, and programming instructions were sent to the ePreprocessor via a HiPro programming interface (Fig. 1).

The gain of the ePreprocessor was adjusted to be identical for soft, medium, and loud sounds so that the ePreprocessor provided linear amplification at levels below the compression threshold of its output limiting algorithm. Linear amplification was used in order to avoid double compression from the output compression algorithm of the ePreprocessor and the compression in the cochlear implant speech processor. In addition, the spectrum of the original composite noise was saved on the screen of Computer 2 to serve as a reference. A flat frequency response of the ePreprocessor was achieved by matching the spectrum of the ePreprocessor output to the spectrum of the reference. This matching was carried out so that the ePreprocessor would be acoustically transparent to the eProcessor. The gains of the ePreprocessors were set below 10 dB across the frequency regions in order to take advantage of a wider dynamic range before the output limiting algorithm was activated. Fig. 2A shows typical frequency responses of ePreprocessors programmed to the OMNI condition in response to a composite noise of 50–90 dB SPL presented in 10 dB increments at 0° azimuth. A similar adjustment also was made for the ADM condition.
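The composite calibration noise can be approximated as a multitone complex; a minimal sketch, assuming random component phases and an arbitrary sample rate (the absolute 70 dB SPL level was set acoustically during calibration):

```python
import numpy as np

FS = 44100   # playback sample rate in Hz (assumed)
DUR = 2.0    # duration in seconds (arbitrary)

t = np.arange(int(FS * DUR)) / FS
rng = np.random.default_rng(1)

# Equal-amplitude components from 100 to 8000 Hz in 100 Hz steps, random phases.
freqs = np.arange(100, 8001, 100)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
noise = np.zeros_like(t)
for f, p in zip(freqs, phases):
    noise += np.sin(2.0 * np.pi * f * t + p)
noise /= np.max(np.abs(noise))   # normalize; level is set at playback
```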
Fig. 1. Equipment setup in the sound field and lab/office. The outputs of the power amplifiers (Power Amp) were fed to Loudspeakers 2–9.
Fig. 2B shows an example of the final frequency responses of OMNI and ADM. After the gains of the ePreprocessor in the OMNI and ADM modes were adjusted, the final programs were saved in a fitting session in the hearing aid fitting software. The programming of OMNI + NR and ADM + NR was achieved by enabling the noise reduction algorithm in the hearing aid fitting software. The above programming procedures were then repeated for the other ePreprocessors.

2.4. eProcessors

A total of three cochlear implant speech processors with electrical inputs were made for the study. The microphones of these eProcessors were replaced by CS44 receivers, which accepted electrical signals from the ePreprocessors. The electroacoustic characteristics of the ePreprocessor–eProcessor system were tested in a hearing aid analyzer (Fonix 7000, Frye Electronics) using the equipment setup shown in Fig. 3. The ePreprocessor picked up the test signal from the hearing aid analyzer and sent it to the eProcessor, the output of which was drawn from the socket normally linked to the transmitter/coil and sent to Channel 1 of a dual-channel amplifier (+20 dB; ER11-F amplifier, Etymotic Research). The signal was then sent to a clinical audiometer (GSI 61, Grason-Stadler) and then to the second channel of the amplifier (+20 dB), a high-pass filter, and an ER3A insert earphone (Maico). As the frequency response of the ER3A insert earphone rolls off at frequencies above 3500 Hz, the ER11-F amplifier and the high-pass filter were added to the signal path so that the final frequency response was flat up to 6000 Hz. The output of the insert earphone was then coupled to an HA2 coupler and the microphone of the hearing aid analyzer.

2.5. The ePreprocessor–eProcessor system

Electroacoustic analysis of the ePreprocessor–eProcessor system using the composite noise indicated an input–output function with a linear region at lower input levels and a curvilinear compression region with a compression threshold of 80 dB SPL. This finding was consistent with the frequency responses shown in Fig. 2A: each 10 dB increase in the input resulted in a 10 dB increase in the output up to 80 dB SPL. A further increase in the input from 80 to 90 dB SPL, however, resulted in only a 6 dB increase in the output.
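A piecewise model consistent with these measurements can be sketched as follows; the smooth curvilinear knee of the real system is not modeled, and the 10:6 ratio simply reproduces the observed 6 dB output rise for the 10 dB input rise above threshold:

```python
import numpy as np

def io_function(input_db: np.ndarray, ct_db: float = 80.0,
                ratio: float = 10.0 / 6.0) -> np.ndarray:
    """Input-output curve: linear below the 80 dB SPL compression threshold,
    compressed (ratio:1) above it."""
    out = input_db.astype(float).copy()
    above = out > ct_db
    out[above] = ct_db + (out[above] - ct_db) / ratio
    return out

levels = np.arange(50.0, 91.0, 10.0)   # 50, 60, 70, 80, 90 dB SPL inputs
print(io_function(levels))             # [50. 60. 70. 80. 86.]
```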
Additionally, the OMNI and ADM frequency responses of each ePreprocessor–eProcessor system that yielded a flat in situ frequency response on KEMAR were documented in the HA2 coupler. This procedure was carried out because 1) the in situ frequency responses on KEMAR were affected by head shadow and body baffle effects, and 2) frequency responses measured in situ typically differ when measured in different couplers (i.e., the Zwislocki coupler on KEMAR versus the HA2 coupler in the hearing aid analyzer). Once the corresponding in situ and HA2 frequency responses were established, verification and monitoring of the flat in situ frequency response of each ePreprocessor–eProcessor system could be carried out by programming the system in the hearing aid analyzer.

Further, the front–back ratios of the ePreprocessors in ADM mode were measured at the beginning of the experiment and monitored throughout data collection to guard against microphone drift, which could reduce or diminish the directional effects. The front–back ratio was the difference in dB between the frequency responses obtained when the front microphone of the hearing aid was pointed toward the loudspeaker in the hearing aid analyzer and when the back microphone of the hearing aid was pointed toward the loudspeaker.
Fig. 2. A) The spectrum of the composite noise (CompN) and typical in situ frequency responses of ePreprocessors programmed in the OMNI mode in response to composite noise of 50–90 dB SPL presented at 0° azimuth. B) Frequency responses of different ePreprocessor modes. OMNI = omnidirectional microphone in all frequency channels; ADM = adaptive directional microphone in all frequency channels; and In Quiet = circuit noise in the absence of sound presentation (i.e., the noise floor). Note that COMBO and ADM_LC were not used in this study.
The latter measurement was obtained after the signal had been on for 30 s to ensure that the directional microphone had fully adapted to the intended directivity patterns. The front–back ratios of the directional microphones measured at the beginning of the study averaged 13.8 dB at 500, 1000, 2000, and 4000 Hz (Fig. 4). The front–back ratios were tested every 2–3 days between listeners. If the front–back ratio of an ePreprocessor fell to an average of 10 dB or below, the ePreprocessor was replaced with one that had a higher front–back ratio.

2.6. Sound field calibration

After the ePreprocessors were programmed, the large test chamber was modified to become a reverberant chamber. Smooth-surfaced panels normally used to line bathrooms were mounted between the wedges of the chamber, and vinyl flooring was laid on the floor. The final reverberation time (RT60) of the test chamber was approximately 350 ms.

Audition multitrack sessions were created to test cochlear implant listeners' speech recognition scores and overall sound quality preferences. Speech and noise were presented through the nine-loudspeaker array in three different noise fields. Speech was always sent to Loudspeaker 1.
Fig. 3. Equipment setup for checking the ePreprocessor–eProcessor system in a hearing aid analyzer.
Fig. 4. An example of front–back ratio measurements for the omnidirectional microphone (OMNI) and the adaptive directional microphone (ADM) in a hearing aid analyzer.
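A sketch of the front–back ratio computation used for monitoring (Section 2.5, Fig. 4), with hypothetical response values:

```python
import numpy as np

def front_back_ratio(front_db: np.ndarray, back_db: np.ndarray,
                     freqs_hz: np.ndarray) -> float:
    """Average difference (dB) between the front- and back-facing frequency
    responses at 500, 1000, 2000, and 4000 Hz."""
    targets = np.array([500.0, 1000.0, 2000.0, 4000.0])
    idx = [int(np.argmin(np.abs(freqs_hz - f))) for f in targets]
    return float(np.mean(front_db[idx] - back_db[idx]))

# Hypothetical response levels: an average ratio of 10 dB or below would
# have triggered replacement of the ePreprocessor.
freqs = np.arange(200.0, 8001.0, 100.0)
front = np.full(freqs.size, 20.0)
back = np.full(freqs.size, 6.0)
print(front_back_ratio(front, back, freqs))   # 14.0 dB
```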
The presentation level of the calibration noise of the HINT test was adjusted in Audition so that it measured 80 dB SPL on a Type I sound level meter (Quest Precision 1200) located at the center of the loudspeaker array in the absence of KEMAR or a listener. Noise was presented through the loudspeaker array in three different fields:
i) Vari1N: one noise source from any one of Loudspeakers 3–7 (i.e., 90°–270°);
ii) Vari3N: three uncorrelated noise sources from three of Loudspeakers 3–7 (i.e., 90°–270°);
iii) All8N: eight uncorrelated noise sources from all loudspeakers, including Loudspeaker 1 (i.e., 0°–360°).

The overall level of each noise field was calibrated to 75 dB SPL, yielding a constant SNR of +5 dB, which has been reported to be a typical SNR in noisy environments (Pearsons et al., 1976).

In the Vari1N noise field, the presentation level of the air conditioning noise was adjusted in Audition so that it measured 75 dB SPL from each loudspeaker at the center of the loudspeaker array in the absence of KEMAR or a listener. This noise field was intended to simulate an environment in which the listener was sitting in a restaurant with loud air conditioning noise. During testing, the air conditioning noise was presented twice from each of Loudspeakers 3–7 for each HINT sentence list (i.e., 10 sentences/list). The order of presentation from the loudspeakers was randomized.

In the Vari3N noise field, the presentation levels of the air conditioning noise and two uncorrelated restaurant noises from the Connected Speech Test (Cox et al., 1987) were adjusted to measure 71.5 dB SPL from each of Loudspeakers 3–7. The air conditioning noise was presented from each loudspeaker twice and the restaurant noises were presented four times during each HINT sentence list. This noise field simulated an environment in which the listener was seated in a restaurant with air conditioning noise and with groups of people sitting on both sides or on one side of the listener. In addition to calibrating each individual track, the noise sources were presented simultaneously to make sure that the overall noise level measured 75 dB SPL for all noise combinations.

In the All8N noise field, the air conditioning noise was presented from one of Loudspeakers 3–7 and uncorrelated restaurant noise was presented from all eight loudspeakers to simulate an environment in which the listener was seated in the middle of a busy restaurant. In other words, the air conditioning noise varied in location, but the restaurant noise came from all around. For calibration, the presentation level of the air conditioning noise was adjusted in Audition to measure 72 dB SPL from each of Loudspeakers 3–7, and the level of the restaurant noise was adjusted to measure 63 dB SPL from each loudspeaker (overall/combined restaurant noise level = 72 dB SPL). The overall level of all noises presented simultaneously also was checked to measure 75 dB SPL for all combinations.

In order to keep the relative locations of the noise sources identical for listeners with "left" or "right" cochlear implants, "left" and "right" Audition sessions were created in mirror images for the three noise fields. For example, if the noise in Vari1N was presented from Loudspeaker 4 in the "left" session for the first sentence of a HINT list, then the noise in the "right" session was presented from Loudspeaker 6 for that sentence. The above calibration procedures were carried out for each "left" and "right" session for each noise field.

Loudspeaker 9 was a ceiling loudspeaker mounted above the center of the loudspeaker array. A microphone was placed on the examiner's desk in the laboratory. Its output was amplified by a preamplifier (TubePre, PreSonus) and then fed to the power amplifier and Loudspeaker 9 so that the listener could hear the examiner's instructions from directly above the head during speech recognition tests. The level of Loudspeaker 9 was adjusted so that normal conversational speech was roughly 80 dB SPL measured at the center of the loudspeaker array in the absence of the listener.
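The overall-level checks above follow from power summation of uncorrelated sources; a small sketch using the All8N values:

```python
import numpy as np

def combined_level_db(levels_db) -> float:
    """Overall level of uncorrelated sources (power summation)."""
    levels = np.asarray(levels_db, dtype=float)
    return float(10.0 * np.log10(np.sum(10.0 ** (levels / 10.0))))

# All8N: restaurant noise at 63 dB SPL from each of eight loudspeakers
# combines to ~72 dB SPL; adding the 72 dB SPL air conditioning noise
# yields the ~75 dB SPL overall level reported above.
print(round(combined_level_db([63.0] * 8), 1))    # ~72.0
print(round(combined_level_db([72.0, 72.0]), 1))  # ~75.0
```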
2.7. Stimulus recording for HA fitting and sound quality ratings

After the speech and noise levels were calibrated, HINT sentences and their calibration noise were recorded in the reverberant chamber. These recordings were later used in the ePreprocessor–eProcessor fitting process in order to simulate the testing environment. They were also used for testing listeners' perceived overall sound quality of the different ePreprocessor settings in order to reduce the delay between testing conditions.

An ePreprocessor was worn on KEMAR's right ear with the KEMAR head placed at the center of the loudspeaker array. Each noise segment lasted 30 s, and a HINT sentence was presented 25 s after the beginning of the noise. This arrangement allowed ample time for the adaptive directional microphone and the noise reduction algorithms to fully adapt to the intended/calculated directivity pattern when the noise location/configuration changed. The ePreprocessor output was fed to an ER11-F amplifier (+20 dB gain) and an external sound card (Sound Blaster Extigy) and then recorded on Computer 2 (Fig. 1). Recordings were made when the ePreprocessor was programmed to OMNI, OMNI + NR, and ADM + NR in each noise field. The presentation sequences of the HINT sentences and noise locations were identical to those used in testing listeners with right cochlear implants.

2.8. Preparation of sound quality comparison pairs

The recorded HINT sentences were then edited to form comparison pairs for overall sound quality ratings, as sketched below. Each pair consisted of a sentence processed by one ePreprocessor setting in a noise field, a 300 ms pause, and the same sentence processed by the same setting (i.e., A–A) or another setting (i.e., A–B) in the same noise field. The A–A comparison pairs were used for reliability and variability checks. They were OMNI–OMNI for Vari1N, OMNI+NR–OMNI+NR for Vari3N, and ADM+NR–ADM+NR for All8N. Six different sentences were edited to form these A–A pairs in each noise field. The A–B pairs were prepared for comparing the subjective preferences of different ePreprocessor settings in the different noise fields. There were three combinations of A–B comparison pairs: OMNI–OMNI+NR, OMNI–ADM+NR, and OMNI+NR–ADM+NR. In addition, a B–A pair was created for each combination to eliminate order-of-presentation effects (i.e., OMNI+NR–OMNI, ADM+NR–OMNI, ADM+NR–OMNI+NR). Three different sentences were edited to form the A–B pairs and another three sentences were edited to form the B–A pairs in each noise field (e.g., there were three OMNI–OMNI+NR comparison pairs and three OMNI+NR–OMNI pairs).
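A minimal sketch of assembling one comparison pair from two recordings, assuming NumPy arrays at a common sample rate:

```python
import numpy as np

FS = 44100   # sample rate of the recordings in Hz (assumed)

def make_pair(sentence_a: np.ndarray, sentence_b: np.ndarray,
              pause_ms: float = 300.0) -> np.ndarray:
    """Concatenate recording A, a 300 ms silent pause, and recording B."""
    pause = np.zeros(int(FS * pause_ms / 1000.0))
    return np.concatenate([sentence_a, pause, sentence_b])

# Per noise field: 6 A-A pairs plus 3 combinations x (3 A-B + 3 B-A) pairs
# = 24 pairs; across the three noise fields this gives the 72 pairs rated.
```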
2.9. ePreprocessor fitting and fine-tuning for individual listeners

Prior to testing listeners in the reverberant chamber, the loudness and sound quality of the ePreprocessor–eProcessor system were matched to those of each listener's own cochlear implant in the lab/office (Fig. 1). The ePreprocessor fitting and fine-tuning process was divided into several steps.

In Step 1, the microphone frequency response of each listener's own cochlear implant was analyzed in a hearing aid analyzer using a setup similar to Fig. 3. The cochlear implant microphone, instead of the microphone of the ePreprocessor–eProcessor system, was placed at the center of the test chamber, and the microphone output was drawn from the direct audio input. The signal was then sent to an ER11-F amplifier (+20 dB gain), the audiometer, the high-pass filter, the ER3A, the HA2 coupler, and the microphone of the analyzer. The frequency response of the cochlear implant in response to a 70 dB SPL composite noise was measured and saved on the screen to serve as a reference.

In Step 2, the OMNI frequency response of the ePreprocessor–eProcessor system in response to the 70 dB SPL composite noise was adjusted to match the reference. The ePreprocessor–eProcessor system replaced the listener's own cochlear implant speech processor in the hearing aid analyzer test chamber, and the other equipment was kept identical to that used to obtain the reference. The starting ePreprocessor OMNI program was always the one obtained in the large test chamber for that particular ePreprocessor yielding flat in situ frequency responses on KEMAR. The "program link" function was turned on in the hearing aid fitting software so that whatever change was made in the OMNI mode was also made in the ADM mode.

In Step 3, the listener was set up with the ePreprocessor–eProcessor system with cables sufficiently long to allow the ePreprocessor to be worn behind the ear on the implanted/better side, the eProcessor on the contralateral side, and the magnet/coil on the implanted side (Fig. 5). The ePreprocessor picked up external sounds on the implanted side, preprocessed the signal, and then sent it to the eProcessor. The output of the eProcessor was transmitted to the electrodes via the magnet/coil placed on the implanted/better side. The listener was asked to sit approximately 1 m away from a loudspeaker placed at the listener's ear level in the lab/office.
In Step 4, the loudness and sound quality of the ePreprocessor OMNI program of the ePreprocessor–eProcessor system were compared with those of the listener's own cochlear implant in a paired-comparison paradigm. Two sentences recorded in a noise field were presented from the loudspeaker at 75 dB SPL. The examiner (first author, a licensed audiologist) then quickly removed the listener's ePreprocessor and its magnet, put the listener's own cochlear implant speech processor and the corresponding magnet in place, and presented the same two sentences again. The time interval between the first and second presentations was generally less than 3 s. The listener was asked to tell the examiner whether the two presentations sounded the same in loudness and sound quality. If not, he/she was asked to describe the differences, and the examiner adjusted the ePreprocessor frequency responses/gains in 1 or 2 dB steps.

In Step 5, the listener was asked whether his/her own voice, the examiner's voice, and sentences recorded in the other two noise fields sounded the same in the two listening setups. If not, the ePreprocessor program was fine-tuned. The ePreprocessor OMNI fitting was considered complete when the listener reported that his/her own cochlear implant and the ePreprocessor–eProcessor system sounded identical during all the presentations.

Finally, the ePreprocessor was switched between OMNI and its corresponding ADM mode in the same fitting session, and the listener was asked to comment on the loudness and sound quality when listening to the HINT sentences, his/her own voice, and the examiner's voice. Most listeners commented that the OMNI and ADM modes sounded identical, except one who needed the overall gain of the ADM mode to be increased by 2 dB. The final OMNI and ADM programs for each listener were saved in one fitting session entry in the hearing aid fitting software.

2.10. Speech recognition test

Listeners were tested individually. Prior to testing, the appropriate "left" or "right" Audition sessions were calibrated once again for listeners with left or right implants, respectively. Listeners were then led to the reverberant chamber and seated at the center of the loudspeaker array. The height of the chair was adjusted so that the loudspeakers were at ear level. Listeners were instructed to look at Loudspeaker 1 during the test, and they listened to HINT sentences presented from Loudspeaker 1 in the three noise fields. A programming cable was attached to their ePreprocessors so that the examiner could change the ePreprocessor program without entering the test chamber. In each noise field, listeners were tested in the following conditions:
Fig. 5. KEMAR wearing a hearing aid with electrical output (ePreprocessor) on the implanted ear (right) and a cochlear implant speech processor that receives electrical inputs (eProcessor) on the contralateral ear (left). The output of the eProcessor was sent to the magnet and the internal components on the implanted side.
a) OMNI with the ePreprocessor–eProcessor system;
b) OMNI + NR with the ePreprocessor–eProcessor system;
c) ADM + NR with the ePreprocessor–eProcessor system;
d) CI_OMNI with their own cochlear implants.
The ePreprocessor–eProcessor system conditions were presented as a group (i.e., one after another), but the presentation order of the CI_OMNI condition and the ePreprocessor–eProcessor system conditions was counterbalanced across listeners. In addition, the three ePreprocessor–eProcessor settings and the noise fields were counterbalanced within the group. Listeners' speech recognition ability was also tested in the test chamber in the absence of background noise, when they wore their own cochlear implants, to document their performance in quiet (CI in Quiet). The order of the CI_OMNI and CI in Quiet conditions was also counterbalanced. Further, the assignment of HINT sentence lists to the different experimental conditions was counterbalanced.

For the CI_OMNI condition, the sensitivity dial of the listener's own cochlear implant was set at the 12 o'clock position.
The input–output functions of the ePreprocessor–eProcessor systems tested in the hearing aid analyzer showed a curvilinear compression region between input levels of 80 and 90 dB SPL and no further increase in output for inputs higher than 90 dB SPL. The input–output function of the CI_OMNI condition when set to 12 o'clock was the closest to that of the ePreprocessor–eProcessor system, with the exception that CI_OMNI had a linear (instead of curvilinear) compression region for inputs above 90 dB SPL.

During testing, each Audition session was presented to the sound field without pausing between sentences (i.e., the noise was continuous between sentences). The listener was given a hand-held microphone whose signal was amplified by a mixer (Behringer XENYX 802) and then fed into a loudspeaker located in the lab/office so that the examiner (second/third author) could hear the listener's responses over the background noise. Listeners repeated each sentence after hearing it and were encouraged to guess if unsure. The examiner also monitored the reverberant chamber via a video camera and scored the key words in the listener's responses. The key words of each HINT sentence were defined as words that were not articles (i.e., "a", "an", "the") or auxiliary words (e.g., "is" in the sentence: Someone (is/was) crossing (a/the) road.). Any alternative words allowed in the original HINT were also allowed during scoring (e.g., "has" or "had" in (A/The) house (has/had) nine bedrooms.). The number of key words in a HINT list ranged from 39 to 45. The listener's speech recognition score for each test condition was calculated as the number of key words repeated correctly divided by the total number of key words in the sentence list, multiplied by 100.

2.11. Overall sound quality preference ratings

After the speech recognition tests, each listener rated overall sound quality preferences of comparison pairs in a sound booth (Interacoustics Corporation) while listening to sound quality rating pairs presented from a GSI sound field loudspeaker using his/her own cochlear implant. Note that the ePreprocessor–eProcessor systems were not used in the sound quality rating process; only the recordings made at the output of an ePreprocessor–eProcessor system were included. The sound quality comparison pairs were presented from a computer using a custom MatLab program. The computer output was fed to an external sound card (Sound Blaster Extigy), an audiometer, and then a loudspeaker located at the corner of the sound booth. The listeners sat in a chair so that their heads were 1 m from the loudspeaker. They were instructed to imagine that the two presentations were two cochlear implant programs that they were going to use in their daily lives for an extended period of time. The MatLab program also implemented a "Repeat" function that allowed the listeners to hear a comparison pair again if needed.

Each listener rated overall sound quality preferences for a total of 72 comparison pairs using a 0–100 point scale (i.e., 6 sentences × 3 A–B or B–A comparison combinations × 3 noise fields + 6 sentences × 1 A–A comparison pair × 3 noise fields). If they liked the first condition more than the second condition, they would respond by telling the examiner "1" and then rate how much more they liked the first condition than the second condition (i.e., 1–30 if slightly more preferable, 31–70 if moderately more preferable, or 71–100 if significantly more preferable).
The computer recorded the score under Condition 1 and 0 under Condition 2 (i.e., the computer recorded the difference between Conditions 1 and 2). The presentation order of the comparison pairs was randomized. Both listener and examiner were blinded to which conditions were presented.
3. Results

3.1. Amounts of noise reduction

The amount of noise reduction provided by the modulation-based noise reduction algorithm was calculated by measuring the noise level differences between the OMNI and OMNI + NR conditions. As the adaptation time of the noise reduction algorithm was approximately 12 s, the noise levels were measured during the 3 s before the presentation of each sentence (i.e., between the 27th and 30th seconds). The average amounts of noise reduction provided between sentences were 5.5, 3.1, and 3.5 dB in Vari1N, Vari3N, and All8N, respectively. Note that the amount of noise reduction provided by modulation-based noise reduction algorithms could be minimal in the presence of speech because the gains of frequency channels with high estimated SNRs were not reduced.

The amount of noise reduction provided by the multichannel adaptive directional microphone algorithm was estimated by measuring the noise level differences during the 3 s before the presentation of each sentence between the OMNI + NR and ADM + NR conditions. The average amounts of noise reduction were 5.0, 5.6, and 2.9 dB for Vari1N, Vari3N, and All8N, respectively. As adaptive directional microphones reduce background noise levels when there is spatial separation between speech and noise sources, these amounts of noise reduction translated into SNR improvements in the presence of speech signals.

3.2. Speech recognition

Listeners' average speech recognition scores are shown in Fig. 6. As the speech recognition scores appeared to be affected by a ceiling effect in Vari1N, the raw scores were transformed to rationalized arcsine units (RAUs) using the formula reported in Studebaker (1985). Statistical analyses were conducted using both raw scores and RAUs. The significance patterns were identical for raw scores and RAUs; all of the following results are, therefore, reported as raw scores for easy interpretation.

A repeated-measures analysis of variance (ANOVA) on the speech recognition scores revealed a significant Noise Field effect [F(2, 102) = 98.0, p < .0001], a significant Instrument Setting effect [F(3, 102) = 76.9, p < .0001], and a significant Noise Field × Instrument Setting interaction [F(6, 102) = 9.9, p < .0001].
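For reference, a sketch of the rationalized arcsine transform in its commonly cited form (Studebaker, 1985); the example score is hypothetical:

```python
import numpy as np

def rau(correct: int, total: int) -> float:
    """Rationalized arcsine unit: stabilizes the variance of
    proportion-correct scores near the floor and ceiling."""
    theta = (np.arcsin(np.sqrt(correct / (total + 1.0)))
             + np.arcsin(np.sqrt((correct + 1.0) / (total + 1.0))))
    return float(146.0 / np.pi * theta - 23.0)

# A hypothetical near-ceiling score: 40 of 42 key words correct.
print(round(rau(40, 42), 1))   # ~100.5 RAU
```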
Fig. 6. Average speech recognition scores obtained in three noise fields when the ePreprocessor was set to the omnidirectional microphone (OMNI), OMNI plus modulation-based noise reduction (OMNI + NR), and multichannel adaptive directional microphone plus noise reduction (ADM + NR) modes. CI_OMNI represents the average speech recognition scores obtained using listeners' own cochlear implants. Error bars show the standard deviations.
Post-hoc Bonferroni tests indicated that the scores obtained in the Vari1N noise field were significantly higher than those in Vari3N and All8N. Within Vari1N, the scores obtained with ADM + NR and CI_OMNI were significantly higher than those with OMNI and OMNI + NR (p < .008, adjusted for the 6 tests conducted). There was no significant difference between OMNI and OMNI + NR or between ADM + NR and CI_OMNI. Among the conditions in Vari3N, ADM + NR obtained significantly higher scores than OMNI, OMNI + NR, and CI_OMNI (p < .008), and CI_OMNI was significantly higher than OMNI + NR (p < .008). There was no significant difference between OMNI and OMNI + NR or between OMNI and CI_OMNI. For All8N, ADM + NR obtained significantly higher scores than OMNI, OMNI + NR, and CI_OMNI (p < .008). CI_OMNI also was significantly higher than OMNI (p < .008). There was no significant difference between OMNI and OMNI + NR or between OMNI + NR and CI_OMNI.

3.3. Overall sound quality preferences

Fig. 7 shows the average overall sound quality preference scores obtained in the 12 testing conditions. The first and second condition names label the first and second columns of each comparison pair, respectively. The asterisks (*) mark the significantly different conditions. t-Tests revealed that the ADM + NR conditions obtained significantly higher preference ratings than the OMNI and OMNI + NR conditions in all three noise fields (p < .004, adjusted for the 12 t-tests conducted). No other significant differences were found. The low preference scores and nonsignificant differences for the A–A pairs indicated that the listeners were reliable and consistent in their sound quality rating tasks.

4. Discussion

Speech recognition abilities of cochlear implant listeners were tested in three different noise fields when listeners wore their own cochlear implants and the ePreprocessor–eProcessor systems. Subjective sound quality preference ratings also were examined when they listened to sentences recorded in the three noise fields.
Fig. 7. Average overall sound quality preference ratings obtained in three noise fields using a modified paired-comparison paradigm. Error bars show the standard deviations and * indicates statistical significance at p < .05.
In general, ADM + NR yielded the highest speech recognition scores and was rated the highest in overall sound quality preferences. Listeners obtained an average of 21%, 45%, and 33% higher speech recognition scores with ADM + NR than with OMNI in Vari1N, Vari3N, and All8N, respectively. They also rated ADM + NR as mildly more preferable than OMNI and OMNI + NR in the three noise fields. As there was no significant difference between OMNI and OMNI + NR in either speech recognition scores or overall sound quality preference ratings, the improvement seen with ADM + NR was likely due to the multichannel adaptive directional microphone algorithm rather than the noise reduction algorithm.

Both speech recognition scores and subjective ratings indicated no statistically significant difference between OMNI and OMNI + NR. The nonsignificant difference in speech recognition scores was expected because modulation-based noise reduction algorithms can reduce overall noise levels but cannot improve the SNR within each frequency channel. The nonsignificant difference in overall sound quality preference ratings, however, was unexpected, because Chung et al. (2006) reported that a modulation-based noise reduction algorithm implemented in another hearing aid from a different manufacturer significantly improved overall sound quality ratings for cochlear implant listeners.

The reason for this discrepancy was examined. Chung (2007) reported that when noise reduction and compression algorithms were implemented appropriately, activation of the compression algorithm did not affect the actions of the noise reduction algorithm (i.e., the same amount of noise reduction occurred with or without compression). When the two algorithms were implemented in series, however, the amounts of noise reduction were greatly reduced. This is because compression algorithms counteract the actions of noise reduction algorithms by increasing the levels of low-level sounds (noise) and decreasing the levels of high-level sounds (speech) in each frequency channel. As the noise reduction algorithm in the ePreprocessor and the compression algorithm in the eProcessor were implemented in series in this study, and the average amounts of noise reduction were only 3.1–5.5 dB before compression, it is likely that the effects of the noise reduction algorithm were nullified in the integration process. Notably, the modulation-based noise reduction algorithm used in Chung et al. (2006) provided 7.8–16 dB of noise reduction, and cochlear implant listeners were able to perceive its noise reduction effects even though that algorithm was essentially implemented in series with the compression algorithm in the cochlear implant speech processor. These results suggest that more aggressive noise reduction is needed in order for cochlear implant listeners to perceive the effects of noise reduction algorithms.
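A simplified static illustration of this series interaction, with hypothetical compression parameters (not the eProcessor's actual threshold or ratio):

```python
def compress(level_db: float, ct_db: float = 55.0, ratio: float = 2.0) -> float:
    """Static compressor: linear below threshold, ratio:1 above it."""
    return level_db if level_db <= ct_db else ct_db + (level_db - ct_db) / ratio

noise_in = 65.0    # channel noise level in dB
nr_gain = -5.0     # noise reduction applied upstream, in series

without_nr = compress(noise_in)           # 60.0
with_nr = compress(noise_in + nr_gain)    # 57.5
print(without_nr - with_nr)               # 2.5: only half of the 5 dB reduction survives
```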
The average SNR improvements provided by the multichannel adaptive directional algorithm in Vari1N, Vari3N, and All8N were 5.0, 5.6, and 2.9 dB, respectively. These data are generally consistent with previous reports that the effectiveness of adaptive directional microphones decreases with the number of noise sources (Ricketts and Henry, 2002; Bentler et al., 2004; Chung, 2004). The exception was that the multichannel adaptive directional microphone algorithm tested in the current study provided comparable amounts of SNR improvement in environments with one or three noise sources. As these results were obtained in reverberant noise fields, they cannot be directly compared with those reported in Chung and Zeng (2009), in which the authors examined a single-channel adaptive directional microphone in an environment with a much lower reverberation time. The results of this study suggest that the multichannel adaptive directional microphone algorithm was effective in reducing noise interference in modestly reverberant environments.
Additionally, the improvement in SNRs translated into improvements in speech recognition scores in background noise. The average improvements in the ADM + NR conditions compared with the OMNI + NR conditions were 17% in Vari1N, 48.5% in Vari3N, and 28.5% in All8N.

Theoretically, the OMNI conditions should have yielded speech recognition scores similar to those of the CI_OMNI conditions. The speech recognition scores, however, indicate that listeners obtained higher scores when using their own cochlear implants (CI_OMNI) than when using the ePreprocessor–eProcessor system in the OMNI or OMNI + NR modes. Two factors might have contributed to the difference. First, recall that the input–output function of the listeners' own cochlear implants had a compression limiting threshold of 90 dB SPL, whereas that of the ePreprocessor–eProcessor system had a compression limiting threshold of 80 dB SPL. As speech was presented at an 80 dB SPL root-mean-square level, some speech peaks were likely compressed by the ePreprocessor in addition to the listeners' own processors. The double compression occurring at high input levels likely resulted in reduced speech recognition scores in background noise, suggesting that front-end signal processing algorithms need to have a dynamic range that is equal to or greater than the dynamic range of the cochlear implant speech processors. Second, microphone locations could also have contributed to the difference. The omnidirectional microphones of the listeners' own cochlear implant speech processors were located on a surface directly facing the front, whereas those of the ePreprocessors were located on the upper side of a standard behind-the-ear case. Should the microphone locations be identical in the two devices, the disparity between OMNI and CI_OMNI would likely be reduced.

Taken together, at least one type of multichannel adaptive directional microphone implemented in hearing aids helped cochlear implant listeners understand speech at high levels of background noise and improved perceived sound quality preferences compared with omnidirectional microphones. Modulation-based noise reduction algorithms need to provide relatively high amounts of noise reduction in order for cochlear implant listeners to perceive the benefits. Regarding the integration of hearing aid and cochlear implant signal processing algorithms, caution should be taken to prevent compression algorithms from counteracting the actions of noise reduction algorithms. In addition, the dynamic ranges of the algorithms need to be matched if different digital signal processing chips or different scaling factors are used in the signal processing path. Future studies are needed to examine the effects and benefits of these signal processing algorithms in real-world environments.

Acknowledgment

We would like to thank Scott Kepner and Derek Tully for technical support and the Oticon Foundation and MED-EL Corporation for sponsoring the project. While the study protocol was devised in collaboration with the sponsoring organizations, the authors are solely responsible for the interpretation and the presentation of the results in this paper.

Appendix A. Supplementary material

Supplementary material associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.heares.2012.06.003.

References

Alcantara, J.L., Moore, D.P., Kuhnel, V., et al., 2003. Evaluation of the noise reduction system in a commercial digital hearing aid. Int. J. Audiol. 42 (1), 34–42.
Acknowledgment

We would like to thank Scott Kepner and Derek Tully for technical support, and the Oticon Foundation and MED-EL Corporation for sponsoring the project. While the study protocol was devised in collaboration with the sponsoring organizations, the authors are solely responsible for the interpretation and the presentation of the results in this paper.

Appendix A. Supplementary material

Supplementary material associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.heares.2012.06.003.

References

Alcántara, J.I., Moore, B.C.J., Kühnel, V., Launer, S., 2003. Evaluation of the noise reduction system in a commercial digital hearing aid. Int. J. Audiol. 42 (1), 34–42.
Arlinger, S., 2003. Negative consequences of uncorrected hearing loss – a review. Int. J. Audiol. 42 (2), S17–S20.
Barton, G.R., Bankart, J., Davis, A.C., 2005. A comparison of the quality of life of hearing impaired people as estimated by three different utility measures. Int. J. Audiol. 44, 157–163.
Bentler, R., Chiou, L.-K., 2006. Digital noise reduction: an overview. Trends Amplif. 10, 67–82.
Bentler, R.A., Tubbs, J.L., Egge, J.L.M., Flamme, G.A., Dittberner, A.B., 2004. Evaluation of an adaptive directional system in a DSP hearing aid. Am. J. Audiol. 13 (1), 73–79.
Boymans, M., Dreschler, W.A., 2000. Field trials using a digital hearing aid with active noise reduction and dual microphone directionality. Audiology 39, 260–268.
Buchner, A., 2011. Hearing aid pre-processing in cochlear implants. Presentation at the 13th Symposium on Cochlear Implants in Children, Chicago, IL, USA.
Burkhard, M.D., Sachs, R.M., 1975. Anthropometric manikin for acoustic research. J. Acoust. Soc. Am. 58, 214–222.
Chung, K., 2004. Challenges and recent developments in hearing aids. Part I: speech understanding in noise, microphone technologies and noise reduction algorithms. Trends Amplif. 8 (3), 83–124.
Chung, K., 2007. Effective compression and noise reduction configurations for hearing protectors. J. Acoust. Soc. Am. 121 (2), 1090–1101.
Chung, K., Zeng, F.-G., 2009. Using adaptive directional microphones to enhance cochlear implant performance. Hear. Res. 250, 27–37.
Chung, K., Zeng, F.-G., Waltzman, S., 2004. Using hearing aid directional microphones and noise reduction algorithms to enhance cochlear implant performance. Acoust. Res. Lett. Online 5 (2), 56–61.
Chung, K., Zeng, F.-G., Acker, K.N., 2006. Effects of directional microphone and adaptive multi-channel noise reduction algorithm on cochlear implant performance. J. Acoust. Soc. Am. 120 (4), 2216–2227.
Cox, R.M., Alexander, G.C., Gilmore, C., 1987. Development of the Connected Speech Test (CST). Ear Hear. 8 (5 Suppl.), 119S–126S.
Dawson, P.W., Mauger, S.J., Hersbach, A.A., 2011. Clinical evaluation of signal-to-noise ratio-based noise reduction in Nucleus cochlear implant recipients. Ear Hear. 32 (3), 382–390.
Dorman, M.F., Loizou, P.C., Fitzke, J., Tu, Z., 1998. The recognition of sentences in noise by normal-hearing listeners using simulations of cochlear-implant signal processors with 6–20 channels. J. Acoust. Soc. Am. 104, 3583–3585.
Edwards, B.W., Struck, C.J., Dharan, P., Hou, Z., 1998. New digital processor for hearing loss compensation based on the auditory system. Hear. J. 51 (8), 38–49.
Flynn, M., 2004. The Syncro concept. Oticon Internal Report.
Fu, Q.J., Shannon, R.V., 1999. Phoneme recognition by cochlear implant users as a function of signal-to-noise ratio and nonlinear amplitude mapping. J. Acoust. Soc. Am. 106, L18–L23.
Gates, G.A., Mills, J.H., 2005. Presbycusis. Lancet 366 (9491), 1111–1120.
Gifford, R.H., Revit, L.J., 2010. Speech perception for adult cochlear implant recipients in a realistic background noise: effectiveness of preprocessing strategies and external options for improving speech recognition in noise. J. Am. Acad. Audiol. 21 (7), 441–451.
Hick, C.B., Tharpe, A.M., 2002. Listening effort and fatigue in school-age children with and without hearing loss. J. Speech Lang. Hear. Res. 45 (3), 573–584.
Hu, Y., Loizou, P., 2008. A new sound coding strategy for suppressing noise in cochlear implants. J. Acoust. Soc. Am. 124, 498–509.
Hu, Y., Loizou, P., 2010. Environment-specific noise suppression for improved speech intelligibility by cochlear implant users. J. Acoust. Soc. Am. 127, 3689–3695.
Johns, M., Bray, V., Nilsson, M., 2002. Effective Noise Reduction. www.audiologyonline.com (accessed 03.01.03).
Kokkinakis, K., Loizou, P.C., 2010. Multi-microphone adaptive noise reduction strategies for coordinated stimulation in bilateral cochlear implant devices. J. Acoust. Soc. Am. 127 (5), 3136–3144.
Kuk, F., Ludvigsen, C., Paludan-Müller, C., 2002. Improving hearing aid performance in noise: challenges and strategies. Hear. J. 55 (4), 34–46.
Latzel, M., Kiessling, J., Margolf-Hackl, S., 2003. Optimizing noise suppression and comfort in hearing instruments. Hear. Rev. 10 (3), 76–82.
Mauger, S.J., Dawson, P.W., Hersbach, A.A., 2012. Perceptually optimized gain function for cochlear implant signal-to-noise ratio based noise reduction. J. Acoust. Soc. Am. 131 (1), 327.
Miyamoto, R.T., Hay-McCutcheon, M.J., Kirk, K.I., Houston, D.M., Bergeson-Dana, T., 2008. Language skills of profoundly deaf children who received cochlear implants under 12 months of age: a preliminary study. Acta Otolaryngol. 128, 373–377.
Mueller, H., Weber, J., Hornsby, B., 2006. The effects of digital noise reduction on the acceptance of background noise. Trends Amplif. 10, 83–93.
Nachtegaal, J., Smit, J.H., Bezemer, P.D., van Beek, J.H., Festen, J.M., Kramer, S.E., 2009. The association between hearing status and psychosocial health before the age of 70 years: results from an internet-based national survey on hearing. Ear Hear. 30 (3), 302–312.
Nelson, P.B., Jin, S.B., Carney, A.E., Nelson, D.A., 2003. Understanding speech in modulated interference: cochlear implant users and normal-hearing listeners. J. Acoust. Soc. Am. 113 (2), 961–968.
Nicholas, J.G., Geers, A.E., 2007. Will they catch up? The role of age at cochlear implantation in the spoken language development of children with severe to profound hearing loss. J. Speech Lang. Hear. Res. 50, 1048–1062.
Nilsson, M.J., Soli, S.D., Sullivan, J., 1994. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J. Acoust. Soc. Am. 95, 1085–1099.
Palmer, C.V., Bentler, R., Mueller, H.G., 2006. Amplification with digital noise reduction and the perception of annoying and aversive sounds. Trends Amplif. 10, 95–104.
Pearsons, K.S., Bennett, R.L., Fidell, S., 1976. Speech Levels in Various Noise Environments. U.S. Environmental Protection Agency, Environmental Health Effects Research Series Report No. EPA-600/1-77-025. NTIS, Washington, DC.
Powers, T., Holube, I., Wesselkamp, M., 1999. The use of digital features to combat background noise. Hear. Rev. 3 (Suppl.), 36–39.
Ricketts, T.A., 2001. Directional hearing aids. Trends Amplif. 5 (4), 139–176.
Ricketts, T.A., Henry, P., 2002. Evaluation of an adaptive directional-microphone hearing aid. Int. J. Audiol. 41, 100–112.
Sarampalis, A., Kalluri, S., Edwards, B., Hafter, E., 2009. Objective measures of listening effort: effects of background noise and noise reduction. J. Speech Lang. Hear. Res. 52, 1230–1240.
Schum, D., 2003. Noise-reduction circuitry in hearing aids: goals and current strategies. Hear. J. 56 (6), 32–40.
Spriet, A., Van Deun, L., Eftaxiadis, K., Laneau, J., Moonen, M., van Dijk, B., van Wieringen, A., Wouters, J., 2007. Speech understanding in background noise with the two-microphone adaptive beamformer BEAM in the Nucleus Freedom cochlear implant system. Ear Hear. 28 (1), 62–72.
Stacey, P.C., Fortnum, H.M., Barton, G.R., Summerfield, A.Q., 2006. Hearing-impaired children in the United Kingdom, I: auditory performance, communication skills, educational achievements, quality of life, and cochlear implantation. Ear Hear. 27 (2), 161–186.
Stickney, G.S., Zeng, F.-G., Litovsky, R., Assmann, P., 2004. Cochlear implant speech recognition with speech maskers. J. Acoust. Soc. Am. 116 (2), 1081–1091.
Studebaker, G.A., 1985. A “rationalized” arcsine transform. J. Speech Hear. Res. 28, 455–462.
Tellier, N., Arndt, H., Luo, H., 2003. Speech or noise? Using signal detection and noise reduction. Hear. Rev. 10 (5), 48–51.
Valente, M., 1999. Use of microphone technology to improve user performance in noise. Trends Amplif. 4 (3), 112–135.
Valente, M., Mispagel, K.M., 2004. Performance of an automatic adaptive dual-microphone ITC digital hearing aid. Hear. Rev. 11 (2), 42–46.
Walden, B.E., Surr, R.K., Cord, M.T., Edwards, B., Olsen, L., 2000. Comparison of benefits provided by different hearing aid technologies. J. Am. Acad. Audiol. 11 (10), 540–560.
Yoshinaga-Itano, C., Sedey, A.L., Coulter, D.K., Mehl, A.L., 1998. Language of early- and later-identified children with hearing loss. Pediatrics 102 (5), 1161–1171.
Zeng, F.-G., Galvin 3rd, J.J., 1999. Amplitude mapping and phoneme recognition in cochlear implant listeners. Ear Hear. 20, 60–74.
Zeng, F.-G., Rebscher, S., Harrison, W., Sun, X.A., Feng, H.H., 2008. Cochlear implants: system design, integration, and evaluation. IEEE Rev. Biomed. Eng. 1, 115–142.
Zhao, F., Bai, Z., Stephens, D., 2008. The relationship between changes in self-rated quality of life after cochlear implantation and changes in individual complaints. Clin. Otolaryngol. 33, 427–434.