
SPEECH COMMUNICATION
ELSEVIER

Speech Communication 14 (1994) 263-277

Multi-channel cochlear prosthesis adapted to Hebrew: A case study

L. Aronson a, J. Rosenhouse b,*, L. Podoshin a and G. Rosenhouse c

a The Bnai Zion Medical Center, Department of Otolaryngology, Faculty of Medicine, Technion, Israel Institute of Technology, Haifa, Israel
b Department of General Studies, Technion, Israel Institute of Technology, Haifa 32000, Israel
c Faculty of Civil Engineering, Technion, Israel Institute of Technology, Haifa, Israel

Received 20 March 1992; revised 15 September 1993 and 25 February 1994

Abstract

The purpose of this work was to investigate the speech comprehension of four deaf Hebrew-speaking patients implanted with a cochlear prosthesis, the Nucleus 22-channel (N-22) system. Experiments were performed under two conditions. The speech tests (isolated vowels, bisyllabic words and fluent speech in closed and open sets) were first conducted using the Default Frequency Boundaries (DFBs) of the cochlear implant's speech processor. The Default Frequency Boundaries of each electrode, which are specified by the computer program of the system, are assumed to have been selected on the basis of English. Different sets of frequency boundaries were then established by altering the frequency-to-electrode mapping, taking into account the formant patterns of the modern Hebrew vowels and the number of active electrodes implanted in each patient. These changes yielded what we called Modified Frequency Boundaries (MFBs). The patients were then retested using the same speech material, and the results were compared with those previously obtained. As a result of the Modified Frequency Boundaries, improvements in the patients' comprehension of the speech elements were noted. The differences in performance between the two sets of frequency boundary distributions suggest that better speech comprehension could be achieved by implanted patients, at least partly, by adjusting the frequency-to-electrode mapping of the N-22 speech processor on a language basis.

Zusammenfassung

The purpose of this work was to investigate the speech comprehension of four deaf Hebrew-speaking patients who had been implanted with a cochlear prosthesis, the 22-channel Nucleus N-22 system. The experiments were carried out under two conditions. First, the test material, consisting of isolated vowels, bisyllabic words and fluently spoken sentences presented in closed and open response formats, was presented to the patients with the manufacturer's default frequency boundaries of the implant channels. We assume that these default values, which are specified by the computer program of the system, were selected on the basis of the English language. The second experiment used different channel frequency boundaries (Modified Frequency Boundaries, MFBs); these were set on the basis of the formant map of the Modern Hebrew vowels and the number of active implanted electrodes in each patient. The patients were then retested with the same speech material. As a result of the modified channel frequencies, improvements in the patients' comprehension were observed. The differences between the results of the two experiments, depending on the respective setting of the channel frequencies, suggest that the speech comprehension of patients with cochlear implants can be improved, at least in part, by making the mapping of the frequency range onto the channels of the N-22 speech processor language dependent.

* Corresponding author. Tel: 00-972-4-235546. E-mail: [email protected]

Résumé

The aim of this study was to examine the speech comprehension of four deaf patients after implantation of the Nucleus 22-channel (N-22) cochlear prosthesis. The experiments were conducted under two conditions. First, intelligibility tests (isolated vowels, bisyllabic words and connected speech, in closed and open sets) were carried out with the default frequency boundaries (DFBs) of the prosthesis's speech processor. These DFBs of each electrode, specified by the system software, are assumed to have been optimised for English. In a second stage, the DFB values were changed, by modifying the frequency-to-electrode mapping, to take into account the formant patterns of the Modern Hebrew vowels and the number of active electrodes implanted in each patient. The patients were then retested with these modified frequency boundaries (MFBs) on the same speech material. For each class of speech sounds, the results obtained in the two cases were compared. The better performance observed with the modified frequencies (MFBs), at least for some classes of sounds, suggests that language-dependent adjustment of the frequency-to-electrode map of the N-22 prosthesis can improve, at least in part, speech comprehension.

Key words: Multi-channel cochlear implant; Hebrew vowel formants; Hebrew comprehension

1. Introduction

Various acoustic processing procedures are employed at present in cochlear prostheses. Some of them provide an analogue electrical representation of the acoustic signal to help speech understanding via a single-channel electrode, and the "whole speech" signal is adjusted to improve the patient's perceptual speech ability (Hochmair-Desoyer et al., 1983, 1985; Agelfors and Risberg, 1989). In other systems, amplitude and time variations of the acoustic signal are processed and sent to multiple-channel systems (Eddington, 1983; White, 1985). Another strategy extracts speech

features and transfers them to various kinds of electrode arrays (Abberton et al., 1985; Clark et al., 1984). However, providing speech information through a cochlear prosthesis is generally considered a very difficult task and severe limitations must be overcome before reaching an optimal coding scheme (Evans, 1984; Schubert, 1985; Summerfield, 1985). Several papers discuss investigations dealing with the improvement of speech performance by patients with cochlear implants by varying the electrical parameters of the stimuli or by optimizing speech strategies, taking into account individ-


ual responses to psychophysical tests. The discrimination of vowels, consonants and fluent speech, using analogue filtered stimulation in a multiple-channel cochlear prosthesis, has also been investigated (Eddington, 1983; Parkin et al., 1988; Dorman et al., 1988, 1990). Vowel and consonant recognition using a formant-estimating cochlear system has been studied by Blamey et al. (1987a). Formant frequency discrimination in subjects implanted with multi-channel systems has also been studied (White, 1983). It is rather difficult to evaluate the different speech coding strategies, since numerous factors are involved, e.g., the limited number of patients, differences in pathologies, number of active electrodes, duration of training, etc. Nevertheless, several studies have been performed in this area (Simmons et al., 1986; Tye-Murray and Tyler, 1989), using specially designed test batteries (Owens et al., 1981; Tyler et al., 1983).

The present study describes a method intended to improve the perception of Hebrew, which was tested on four deaf patients implanted with the Cochlear Corporation Nucleus 22-channel (N-22) system (Tong et al., 1983; Clark et al., 1984; Dowell et al., 1986; Boothroyd, 1987; Skinner et al., 1991). In this system the electrode array is implanted along the cochlea so that the higher electrode numbers are at the apical end and the smaller electrode numbers are at the basal end. The Wearable Speech Processor III (WSP III) of this system is based on a formant extraction speech strategy. Basically, the place of stimulation on the implanted electrode array is determined by the frequency values of the first (F1) and second (F2) formants of the incoming speech signal. The F2 value selects the basal electrode pairs to be activated and the F1 value selects the apical pairs. The repetition rate of stimulation for each electrode is determined by the glottal frequency (F0). This is called the F0/F1/F2 strategy. It is also possible to use the F0/F2 strategy, for which the rate of stimulation is determined by the F0 value and the electrode pairs to be stimulated are selected by the frequency of F2. The current level of the stimulus is determined by the amplitude of each formant. The frequency boundary of each electrode, i.e., the range of


frequencies which activate an electrode, may be varied at will ¹ and the stimulation mode is selected from five different modes ².

From an acoustical point of view, vowels occupy a rather central position in the human hearing system: vowels bear most of the speech energy (as compared to the various kinds of consonants) and their most important formants (F1, F2) are in the 300-3000 Hz range, where the human auditory system is very sensitive. Vowels constitute the centre of syllables and thus carry the linguistic information provided by prosodic features: stress, rhythm and intonation (Liberman and Mattingly, 1985). These prosodic cues, carried by the glottal frequency variations, are tightly related to the phonetic, grammatical and semantic contents of the message. Different vowels distinguish between different words; cf. the following word pairs in Hebrew ³: /hu/ (he) versus /hi/ (she); /yom/ (day) versus /yam/ (sea); /bo'ne/ (he builds) versus /bo'na/ (she builds); /xa'fuv/ (important m.s.) versus /xa'fov/ (think! m.s.); etc. All these elements are of great communicative value, which makes proper vowel perception particularly important. Vowel perception also helps in consonant perception, since vowel formants also carry information about place and manner of consonant articulation. In fact, movements of vowel formants are a very important cue for the identification of the place of articulation of the consonants (Manrique and Massone, 1982).

Modern Hebrew has basically a set of five vowels (which may also combine to form a few diphthongs), the formant patterns of which are distributed rather well apart, so that there is little overlap between F1 and F2 values (Aronson et al., 1993). This feature is advantageous for deaf Hebrew speakers, who in turn will have relatively few overlaps in vowel perception compared to speakers of languages with more vowels. This encouraged us to exploit this effect for the patients implanted with a system which applies a formant extraction strategy.

Based on this acoustic-linguistic background of vowel formant roles, the speech processor of each of the four patients was modified by altering the frequency-to-electrode mapping. These changes were intended to create a distinct and easily perceived cue for each vowel in order to improve the patients' speech comprehension. (Formant amplitudes were not altered, since the WSP III automatically adjusts the relationships among them.) In this study we report the effects of these changes in the first four Hebrew-speaking deaf patients implanted at our Medical Center.

¹ For the F0/F1/F2 strategy, the frequency ranges are distributed among the electrodes in two different ways: (1) the F1 electrodes are assigned to linearly equal frequency bands from 280 Hz to 1000 Hz, while the F2 electrodes are assigned to logarithmically equal frequency bands from 1000 Hz to 4000 Hz (lin/log spacing); (2) the second distribution splits the total frequency range into logarithmically equal frequency bands (log/log spacing). The F0/F2 coding strategy uses the logarithmic distribution as the default spacing.

² The stimulation mode may be selected from several modes. In the bipolar (BP) mode, electrical stimulation produces a distribution of current flow between an active electrode and the adjacent one in the apical direction: this is the indifferent or reference electrode. In the Bipolar plus one (BP+1) mode, the pair is created in such a way that if an electrode is activated, the next but one electrode (in the apical direction) is the reference. The software will accommodate up to BP+5 changes in bipolar configuration. It is also possible to stimulate an electrode using the "common ground" mode, which means that when one electrode is activated, all the others are connected together to form the indifferent electrode.

³ The apostrophes in the Hebrew examples mark the beginning of a stressed syllable.
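To make the mapping concrete, the sketch below illustrates, under stated assumptions, how a lin/log spacing of the kind described in footnote 1 can be computed and how an F1 or F2 value then selects an electrode. The band counts, electrode numbering and function names are illustrative only and are not taken from the N-22 fitting software.

import bisect

def lin_log_boundaries(n_f1_bands, n_f2_bands,
                       f1_range=(280.0, 1000.0), f2_range=(1000.0, 4000.0)):
    """Linearly equal F1 bands and logarithmically equal F2 bands (lin/log spacing)."""
    f1_lo, f1_hi = f1_range
    f2_lo, f2_hi = f2_range
    f1_edges = [f1_lo + i * (f1_hi - f1_lo) / n_f1_bands for i in range(n_f1_bands + 1)]
    ratio = (f2_hi / f2_lo) ** (1.0 / n_f2_bands)
    f2_edges = [f2_lo * ratio ** i for i in range(n_f2_bands + 1)]
    return f1_edges, f2_edges

def electrode_for(freq_hz, edges, electrodes):
    """Electrode whose frequency band contains freq_hz.

    `electrodes` lists one electrode number per band, ordered from the most
    apical (highest number) to the most basal (lowest number).
    """
    i = bisect.bisect_right(edges, freq_hz) - 1
    i = max(0, min(i, len(electrodes) - 1))
    return electrodes[i]

# Hypothetical patient with 10 apical (F1) and 10 basal (F2) electrode pairs.
f1_edges, f2_edges = lin_log_boundaries(10, 10)
f1_electrodes = list(range(20, 10, -1))   # 20 (apical) .. 11
f2_electrodes = list(range(10, 0, -1))    # 10 .. 1 (basal)

# A frame with F1 = 460 Hz and F2 = 1870 Hz (roughly Hebrew /e/, male voice)
# would stimulate one apical and one basal electrode pair at the F0 rate:
print(electrode_for(460.0, f1_edges, f1_electrodes),
      electrode_for(1870.0, f2_edges, f2_electrodes))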

2. The Hebrew vowels

The F1 and F2 values of the five Modern Hebrew vowels /i, e, a, o, u/, recorded in isolation, were analyzed using Linear Predictive Coding (LPC) and the Fast Fourier Transform (FFT) as complementary methods. (F3 values were also determined but not used in this study.) Both methods yielded similar results, though for almost all the vowels the LPC values are slightly higher than the FFT values. Differences between male and female values are similar to those obtained in experiments using other languages. In Hebrew these differences seem to be mainly concentrated in the speakers' F3 values, although F1 and F2 also show some significant differences. Average differences between male and female formant values range from 2% to 30% for the various vowels. In an F1-F2 map, each vowel is located in a rather specific area. Fig. 1(a, b) shows the F1 and F2 values in Hz, averaged from 6 male and 6 female native Hebrew speakers without hearing loss, derived from the FFT analysis.
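As a rough illustration of the kind of analysis involved, the following is a minimal autocorrelation-LPC formant sketch using only NumPy; it is not the analysis software used in the study, and a real measurement would add pre-emphasis, frame-by-frame tracking and manual checking against the FFT spectra.

import numpy as np

def lpc_formants(frame, fs, order=12, num_formants=3):
    """Rough formant estimates for one voiced frame via autocorrelation LPC.

    Minimal sketch: a real analysis adds pre-emphasis and tracks formants
    across frames instead of trusting a single frame.
    """
    x = np.asarray(frame, dtype=float) * np.hamming(len(frame))
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.lstsq(R, r[1:order + 1], rcond=None)[0]   # prediction coefficients
    roots = np.roots(np.concatenate(([1.0], -a)))           # poles of the LPC model
    roots = roots[np.imag(roots) > 0]                       # one of each conjugate pair
    freqs = np.angle(roots) * fs / (2.0 * np.pi)            # pole angle -> frequency (Hz)
    bw = -np.log(np.abs(roots)) * fs / np.pi                # pole radius -> bandwidth (Hz)
    keep = (freqs > 90.0) & (bw < 400.0)                    # discard implausible poles
    return np.sort(freqs[keep])[:num_formants]

# Toy check: a vowel-like signal with components near 450 Hz and 1850 Hz.
fs = 10000
t = np.arange(0, 0.03, 1.0 / fs)
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 450 * t) + 0.6 * np.sin(2 * np.pi * 1850 * t)
          + 0.01 * rng.standard_normal(t.size))
print(lpc_formants(signal, fs))   # expect values close to 450 and 1850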

[Figure 1: "Hebrew vowel formants", panels for male and female voices; F1 (Hz) plotted against F2 (Hz) for the five vowels.]

Fig. 1. First (F1) and second (F2) formants in Hz of Hebrew vowels averaged among (a) 6 male and (b) 6 female subjects (Fast Fourier Transform (FFT) analysis). The values of the respective formants (in Hz) are M: /a/ 575, 1185; /o/ 435, 860; /u/ 320, 785; /e/ 460, 1870; /i/ 295, 2620; F: /a/ 880, 1485; /o/ 555, 1000; /u/ 400, 875; /e/ 580, 2280; /i/ 295, 2665.
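For later reference, the averaged values in Fig. 1 can be kept as a small lookup table; the short check below (an illustration, not part of the original analysis) confirms that the five vowels stay well separated in the F1-F2 plane for both voice types.

# Average Hebrew vowel formants in Hz, (F1, F2), from Fig. 1 (FFT analysis).
HEBREW_FORMANTS = {
    "male":   {"a": (575, 1185), "o": (435, 860),  "u": (320, 785),
               "e": (460, 1870), "i": (295, 2620)},
    "female": {"a": (880, 1485), "o": (555, 1000), "u": (400, 875),
               "e": (580, 2280), "i": (295, 2665)},
}

def closest_pair(formants):
    """Smallest Euclidean distance between any two vowels in the F1-F2 plane."""
    best = None
    vowels = list(formants)
    for i, v in enumerate(vowels):
        for w in vowels[i + 1:]:
            d = ((formants[v][0] - formants[w][0]) ** 2 +
                 (formants[v][1] - formants[w][1]) ** 2) ** 0.5
            if best is None or d < best[0]:
                best = (d, v, w)
    return best

for voice, table in HEBREW_FORMANTS.items():
    d, v, w = closest_pair(table)
    print(voice, v, w, round(d))   # the nearest pair is /o/-/u/ for both voice types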


3. The patients

This paper reports the case of the first four patients implanted at the Bnai Zion Medical Center. Table 1 summarizes some of their details. One of the patients was prelingually deaf and had hardly any oral speech. All of them scored less than 5% on the pre-operative speech tests (especially designed for Hebrew) using their appropriate hearing aids.

4. Method


4.1. Rationale

The Default Frequency Boundary (DFB) of the N-22 speech processor, i.e., the frequency boundary of each electrode specified by the computer program of the system, has been chosen "to maximize the separation of the F1 values among electrodes" (Audiologist's Handbook, 1989). It is assumed that this selection was made with respect to the English language. The present study aimed at finding out whether changes in the frequency boundaries of active electrodes, according to the formant pattern of a specific language, could improve speech comprehension of implanted speakers of that language, since formant patterns are language specific. In this particular case the formant patterns of the five modern Hebrew vowels were used to modify the WSP III of the N-22, taking into account the number of active electrodes and the strategy (F0/F1/F2 or F0/F2) used by each patient. This procedure yielded a new frequency-to-electrode mapping, i.e., Modified Frequency Boundaries (henceforth: MFBs). Since there are gender-related differences in vowel structures, another aim in applying the MFBs was to reinforce such distinctions wherever necessary. The speech tests administered to the patients, using DFBs and MFBs, examined the differences in the patients' perception under both conditions. Counterbalance mode tests were conducted to validate the conclusions.

4.2. Test material

The speech tests included the five isolated Hebrew vowels, namely /i, e, a, o, u/, closed sets of 15 bisyllabic words, 5 short everyday sentences in closed lists, and 15 short open-set stories. The bisyllabic words were linguistically adapted from the speech material of the Nucleus rehabilitation program (Mecklenburg et al., 1987) and were presented in three different groups. The reason for grouping them was to present to the patients short closed lists of phonologically similar words. The groups were: 1. /'boker/ ³ (morning), /'laila/ (night), /tsa'var/ (neck), /ba'sar/ (meat), /ka'fe/ (coffee); 2. /'saba/ (grandfather), /ki'ta/ (classroom), /xa'lav/ (milk), /'gefem/ (rain), /'xeder/ (room); 3. /'sefer/ (book), /'femef/ (sun), /'delet/ (door), /ma'na/ (portion), /ful'xan/ (table).

Table 1
Patient characteristics

Patient number                   1                           2                      3                                        4
Age                              16                          65                     42                                       60
Duration of deafness (years)     15                          14                     1.5                                      8
Etiology                         acute purulent meningitis   unknown, progressive   cochlear otosclerosis, sudden deafness   unknown a, progressive
Number of implanted electrodes   18                          13                     22                                       22
Number of active electrodes      4                           13                     20                                       19
Strategy                         F0/F2                       F0/F1/F2               F0/F1/F2                                 F0/F1/F2
Mode                             BP+1                        BP+1                   BP+1                                     BP+1
Oral communication               very poor                   very good              excellent                                fair

a This patient has tinnitus in the ear carrying the implant.


Five sentences of equal length and different rhythms were designed for the closed set sentence tests. This material was also adapted (not translated) for Hebrew from the rehabilitation program of the N-22. The 15 open set stories were unknown to the patients and their vocabulary was adapted to the standard style of simple Hebrew story books.

4.3. Procedure

4.3.1. Tests using the DFBs
The experiments began when the patients had had four months of daily experience with their prosthesis under stable conditions, in their normal environment. Before this, their threshold (T) and comfortable (C) levels ⁴ were controlled and modified several times according to their responses. This was particularly true for the prelingually deaf patient, for whom many modifications were needed until stability was reached in his T/C values. The patients were using the DFBs distribution and the frequency spacing (lin/log or log/log) which appeared to be the best suited for their individual speech comprehension needs in daily use. Without preliminary training, each patient was tested, in individual weekly working sessions, starting with the vowels, then the bisyllabic words and sentences, and finally the stories. This allowed the patients to gain some experience for the more complex tests. The vowels, and later the sentences, were presented 15 times each in three different blocks. For the bisyllabic words, a different group was presented in each of the three blocks, each word being presented 15 times. This made a total of 225 presentations per stimulus set. The working session lasted about 1 hour per block. Patients were allowed a 15-minute break

⁴ In the N-22 system, the amplitude of the stimulus current ranges from 0.025 to 1.5 mA. This range is logarithmically transformed into another scale which has 239 logarithmic steps over the current range. Each step increases the current level by about 2.5% above the preceding value. The threshold (T) and comfortable (C) levels are determined in this scale.

between block presentations. Bisyllabic words and sentences were first presented in writing. All the test material was read aloud by a female speaker, seated 1.5 m from the patient in a quiet room. The patient used his/her own speech processor at a comfortable listening level and the head-set microphone. The patients were asked to identify each stimulus. The vowel and the bisyllabic word confusion studies were performed using the Diagnosis Program System (DPS) of the N-22 (see Audiologist's Handbook, 1989). The program receives written speech data and presents them in writing in random order on the monitor. These stimuli were then presented orally to the patient. The patient's responses were stored in the computer and at the end of the test a confusion matrix of the stimuli (in rows) and responses (in columns) was displayed on the monitor screen. The number of times the patient judged that she/he perceived the stimulus and the percentage of correct responses were also displayed.

A speech tracking technique (De Filippo and Scott, 1978) was used to evaluate the comprehension of the open set speech material. According to this method, the segment length equalled the number of words in the sentences presented as a "logical linguistic constituent". In our experiments, segments did not exceed five words. This method requires subjects to repeat the text as perceived. In case of error, the word or segment was re-read by the examiner, modifying the style of the oral presentation, as suggested by De Filippo and Scott (1978). The subjects' ability to understand connected speech was measured by the number of words repeated per minute (wpm). The tests were repeated during five work sessions, using a different story each time. For patient no. 1 the tests were performed using lipreading plus electrical stimulation (LR + ES), since he was unable to identify most of the speech sounds without lipreading. The other patients performed the tests using electrical stimulation only (ESO), i.e., without lipreading.

4.3.2. Fitting the speech processor to the Hebrew vowels
After the patients had been tested using the DFBs, the frequency-to-electrode mapping of

their speech processors was changed and the MFBs were established. The Diagnosis Program System of the N-22 makes it possible to change the upper limit of the frequency boundary of each electrode, inducing corresponding changes in the lower limit of the frequency boundary of the adjacent electrode in the basal direction. Fitting the frequency boundaries depended on the number of active electrodes and the strategy (F0/F1/F2 or F0/F2) used in each case. The MFBs were established according to the following principles:
(i) Each F1-F2 pair for the different Hebrew vowels should activate a different pair of electrodes. Even when for certain vowels this was already achieved using the DFBs, if vowel pairs were confused in the tests by more than 30%, or if the F1-F2 values of certain vowels were close together, changes in the DFBs were made in order to maximize the separation of the activated electrodes. The boundaries of the respective frequency bands, i.e., the MFB values, were little dependent on the patients' performance in the tests using the DFBs.
(ii) Modified frequency boundaries were established in such a way that formant values were allocated in the middle of the frequency bands.
(iii) Frequency boundaries were changed so that male and female formant values for the same vowel would activate different electrodes. In the N-22 system, the glottal frequency is coded as the repetition rate of stimulation on each electrode. Thus, patients may distinguish male from female voices. In addition, we took into consideration vowel formant differences in male/female speech and accordingly selected different electrodes. This helped patients for whom the variation in pitch perception was better achieved by changing the place of stimulation (place coding) rather than the stimulus repetition rate (time coding). (This difference was also tested by psychophysical tests and is reported in another paper.)

Table 2
Patient no. 1. Default Frequency Boundaries (DFBs), Modified Frequency Boundaries (MFBs) and F2 vowel allocation for each active electrode

Electrode   DFBs (Hz)     F2 allocation   MFBs (Hz)     F2 allocation
20          0-1098        /o/-/u/         0-880         /u/
19          1098-1584     /a/             880-1580      /o/-/a/
18          1584-2290     /e/             1580-2320     /e/
17          2290-4000     /i/             2320-4000     /i/

Frequency spacing: Log; Pulse width (PW) = 200 μs.
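The band-centring idea behind principle (ii) can be sketched as follows; this is only an illustration of the principle, with hypothetical function names, and not the clinical fitting procedure itself.

def centered_boundaries(targets, f_min=0.0, f_max=4000.0):
    """Band edges placing each target frequency roughly mid-band (principle (ii)).

    One band (electrode) per target; each internal boundary sits halfway
    between two consecutive target frequencies.
    """
    ts = sorted(targets)
    mids = [(a + b) / 2.0 for a, b in zip(ts, ts[1:])]
    edges = [f_min] + mids + [f_max]
    return list(zip(edges[:-1], edges[1:]))

def allocation(bands, formants):
    """Map each labelled formant frequency to the band that would be activated."""
    out = {}
    for label, f in formants.items():
        for k, (lo, hi) in enumerate(bands):
            if lo <= f < hi:
                out.setdefault(k, []).append(label)
    return out

# Hypothetical example: second formants of the five Hebrew vowels, male voice (Fig. 1).
f2_male = {"u": 785, "o": 860, "a": 1185, "e": 1870, "i": 2620}
bands = centered_boundaries(f2_male.values())
alloc = allocation(bands, f2_male)
for k, (lo, hi) in enumerate(bands):
    print(k, (round(lo), round(hi)), alloc.get(k, []))   # one distinct vowel per band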

Table 3
Default Frequency Boundaries (DFBs), Modified Frequency Boundaries (MFBs) and formant vowel allocation for male (m) and female (f) voices for each active electrode of patient no. 2

Electrode   DFBs (Hz)     F1 allocation            F2 allocation         MFBs (Hz)     F1 allocation            F2 allocation
20          0-329         /i/(m, f)-/u/(m)         -                     0-310         /i/(m, f)                -
19          329-471       /u/(f)-/o/(m)-/e/(m)     -                     310-380       /u/(m)                   -
18          471-690       /a/(m)-/o/(f)-/e/(f)     -                     380-450       /o/(m)-/u/(f)            -
17          690-1004      /a/(f)                   /o/(m, f)-/u/(m, f)   450-620       /o/(f)-/e/(m)-/e/(f)     -
16          1004-1145     -                        -                     620-850       /a/(m)                   /u/(m)
15          1145-1302     -                        /a/(m)                850-1000      /a/(f)                   /u/(f)-/o/(m)
14          1302-1490     -                        /a/(f)                1000-1250     -                        /o/(f)-/a/(m)
13          1490-1694     -                        -                     1250-1550     -                        /a/(f)
12          1694-1945     -                        /e/(m)                1550-1750     -                        -
11          1945-2212     -                        -                     1750-2100     -                        /e/(m)
10          2212-2525     -                        /e/(f)                2100-2350     -                        /e/(f)
9           2525-2886     -                        /i/(m, f)             2350-2550     -                        -
8           2886-4000     -                        -                     2550-4000     -                        /i/(f, m)

Frequency spacing: Log/Log; Pulse width (PW) = 200 μs.


Table 4
Default Frequency Boundaries (DFBs), Modified Frequency Boundaries (MFBs) and formant vowel allocation for male (m) and female (f) voices for each active electrode of patient no. 3

Electrode   DFBs (Hz)     F1 allocation          F2 allocation      MFBs (Hz)     F1 allocation      F2 allocation
20          0-408         /u/(m, f)-/i/(m, f)    -                  0-310         /i/(m, f)          -
19          408-502       /o/(m)-/e/(m)          -                  310-380       /u/(m)             -
18          502-596       /o/(f)-/e/(f)          -                  380-450       /u/(f)-/o/(m)      -
17          596-706       /a/(m)                 -                  450-540       /e/(m)-/e/(f)      -
16          706-800       -                      /u/(m)             540-600       /o/(f)             -
15          800-894       /a/(f)                 /u/(f)-/o/(m)      600-740       /a/(m)             -
14          894-1004      -                      /o/(f)             740-800       -                  /u/(m)
13          1004-1098     -                      -                  800-890       /a/(f)             /o/(m)-/u/(f)
12          1098-1208     -                      /a/(m)             890-1080      -                  /o/(f)
11          1208-1318     -                      -                  1080-1150     -                  -
10          1318-1443     -                      -                  1150-1306     -                  /a/(m)
9           1443-1584     -                      /a/(f)             1306-1490     -                  /a/(f)
8           1584-1741     -                      -                  1490-1555     -                  -
7           1741-1898     -                      /e/(m)             1555-1700     -                  -
6           1898-2086     -                      -                  1700-1920     -                  /e/(m)
5           2086-2290     -                      /e/(f)             1920-2200     -                  -
4           2290-2510     -                      -                  2200-2400     -                  /e/(f)
3           2510-2745     -                      /i/(m, f)          2400-2550     -                  -
2           2745-3012     -                      -                  2550-2995     -                  /i/(m)-/i/(f)
1           3012-4000     -                      -                  2995-4000     -                  -

Frequency spacing: Lin-Log; Pulse width (PW) = 200 μs.

Table 5
Default Frequency Boundaries (DFBs), Modified Frequency Boundaries (MFBs) and formant vowel allocation for male (m) and female (f) voices for each active electrode of patient no. 4

Electrode   DFBs (Hz)     F1 allocation          F2 allocation      MFBs (Hz)     F1 allocation      F2 allocation
20          0-408         /u/(m, f)-/i/(m, f)    -                  0-310         /i/(m, f)          -
19          408-502       /o/(m)-/e/(m)          -                  310-380       /u/(m)             -
18          502-596       /o/(f)-/e/(f)          -                  380-420       /u/(f)-/o/(m)      -
17          596-706       /a/(m)                 -                  420-510       /e/(m)-/e/(f)      -
16          706-800       -                      /u/(m)             510-600       /o/(f)             -
15          800-894       /a/(f)                 /u/(f)-/o/(m)      600-750       /a/(m)             -
14          894-1004      -                      /o/(f)             750-850       -                  /u/(m)
13          1004-1098     -                      -                  850-950       /a/(f)             /o/(m)-/u/(f)
12          1098-1224     -                      /a/(m)             950-1100      -                  /o/(f)
11          1224-1349     -                      -                  1100-1350     -                  /a/(m)
10          1349-1490     -                      /a/(f)             1350-1490     -                  /a/(f)
9           1490-1647     -                      -                  1490-1645     -                  -
8           1647-1820     -                      -                  1645-1820     -                  -
7           1820-2008     -                      /e/(m)             1820-2010     -                  /e/(m)
6           2008-2212     -                      -                  2010-2215     -                  -
5           2212-2447     -                      /e/(f)             2215-2350     -                  /e/(f)
4           2447-2698     -                      /i/(m, f)          2350-2500     -                  -
3           2698-2980     -                      -                  2500-2800     -                  /i/(m, f)
2 a         2980-4000     -                      -                  2800-4000     -                  -

Frequency spacing: Lin-Log; Pulse width (PW) = 200 μs.
a Electrode 1 was disconnected due to an unpleasant auditory sensation.


We used the formant values obtained with the FFT method (Fig. 1) for adjusting the frequency allocation of each electrode. The standard deviations of the formant values were also taken into consideration, to avoid overlapping of the assigned passband of each electrode. But the final decision on electrode allocation was made according to the patients' responses, to yield the best separation among vowels. Tables 2-5 depict the F1 and F2 vowel allocations for DFBs and MFBs, frequency spacing and pulse width for each patient. Table 6 shows the vowel confusion matrices of each patient as obtained under both conditions of frequency distribution.

[Table 6. Vowel confusion matrices (stimuli in rows, responses in columns; 45 presentations per vowel, 225 in total) for the four patients under Default Frequency Boundaries (DFBs) and Modified Frequency Boundaries (MFBs). Test condition: LR + ES for patient no. 1, ESO for patients no. 2-4. Correct answers: patient no. 1, DFBs 137/225 = 61%, MFBs 165/225 = 73%; patient no. 2, DFBs 144/225 = 64%, MFBs 174/225 = 77%; patient no. 3, DFBs 156/225 = 69%, MFBs 190/225 = 84%; patient no. 4, DFBs 117/225 = 52%, MFBs 143/225 = 63%.]

Patient no. 1 is implanted with four active electrodes. He uses the F0/F2 strategy. In this case, the lin-log and log-log distribution strategies produce the same frequency distribution. The MFBs were established taking into account only F2, without gender-dependent differences. As the patient was able to discriminate /o/ from /a/ by lipreading, we decided to adjust electrode number 19 so that it would receive the second formant of /o/ and /a/. As the vowel /o/ was confused with /u/ (see Table 6), their channel allocations were separated, /u/ being activated by electrode 20 and /o/ by electrode 19. In the DFBs distribution, the vowel /e/ was perceived as /i/ most of the time, although each vowel stimulated a different electrode. The change in the upper limit of electrode 17 was intended to improve the range of detection for the vowel /e/, which in time was expected to produce a better discrimination between /e/ and /i/.

For patient no. 2, the vowels' F1 allocations with DFBs were concentrated in the four most apical electrodes (Table 3) and the second formants were distributed among electrodes 17-9. As this patient distinguishes well between male and female voices, changes in the DFBs were aimed at improving his poor discrimination between the vowels /o/ and /u/, as well as between /e/ and /i/ (see Table 6). In the DFBs distribution, the F2s of both /o/ and /u/ were

located in electrode 17, while the three other vowels each had a different electrode allocation. The MFBs were established so as to produce a less compressed F1 allocation and to further separate the F2 allocations of /e/ and /i/. Thus the MFBs allocated F1s among electrodes 20 to 15, and F2s among electrodes 16 to 8, which meant that all the vowel formant allocations were shifted towards the basal electrodes.

Patient no. 3 had a relatively good vowel discrimination with DFBs except for /e/ (see Table 6). Therefore, the changes in frequency boundaries addressed this vowel to improve its discrimination and to re-allocate the F1s and F2s in a less compressed way. Thus the MFBs (Table 4) allocate F1s in about the first third of the electrode array, while the F2s were shifted to the more basal area.

Patient no. 4 had a poor vowel discrimination (Table 6). In addition, because of her tinnitus, it was hard to elicit reliable responses from her. Changes in the frequency boundaries were made mainly taking into account points (i) and (ii) above, so that gender-dependent differences were hardly applied. Fortunately, she had a good male-female voice discrimination. For F1, changes were made in order to separate the vowel /u/ from the other vowels. The F1s were re-allocated in the third part of the electrode array. For the F2s, the MFBs maintained almost the same vowel allocations as the DFBs, with only two electrodes shifted to the basal end. After the MFBs had been stored in each speech

Table 7
Percentages of correct responses for the tests performed with the four patients for vowels, bisyllabic words and sentences, using DFBs and MFBs. Numbers in brackets are the ranges of correct responses at the 95% confidence level. Patients no. 2, 3 and 4 used electrical stimulation only (ESO). Patient no. 1 also used lipreading (LR + ES)

            Vowels                               Bisyllabic words (closed sets)       Sentences (closed sets)              Condition
Patient     DFBs              MFBs               DFBs              MFBs               DFBs              MFBs
1           61 (54.6-67.4)    73 (67.2-78.8)     59 (52.6-65.4)    63 (56.7-69.3)     -                 -                  LR + ES
2           64 (57.7-70.3)    77 (71.5-82.5)     67 (60.9-73.1)    74 (68.3-79.7)     67 (60.9-73.1)    75 (69.3-80.7)     ESO
3           69 (63.0-75.0)    84 (79.2-88.8)     70 (64.0-76.0)    80 (74.8-85.2)     70 (64.0-76.0)    82 (77.0-87.0)     ESO
4           52 (45.5-58.5)    63 (56.7-69.3)     50 (43.5-56.5)    58 (51.6-64.4)     52 (45.5-58.5)    59 (52.6-65.4)     ESO


processor, the patients used the prosthesis for one month in their usual environments. They were then retested using the same material and the same procedure as before.

4.3.3. Counterbalance mode tests
After the tests using the MFBs, the DFBs were stored again in the speech processor of each patient. After 3 weeks of daily use under these conditions, patients were retested, using the same procedures and materials as before. The open-set sentence comprehension was retested using a new set of five stories. This counterbalance mode test, again using the default frequencies, DFBs (C), was made to ensure that the better results obtained with the MFBs were not due to training effects but to the modified vowel formant allocations.
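The closed-set scores reported in Tables 6 and 7 reduce to counting the diagonal of a stimulus-by-response confusion matrix over the 225 presentations (45 per vowel). A minimal sketch of that bookkeeping, with invented responses, is:

from collections import Counter

VOWELS = ["a", "o", "u", "e", "i"]

def confusion_matrix(trials):
    """trials: iterable of (stimulus, response) pairs -> nested dict of counts."""
    counts = {s: Counter() for s in VOWELS}
    for stimulus, response in trials:
        counts[stimulus][response] += 1
    return counts

def percent_correct(counts):
    total = sum(sum(row.values()) for row in counts.values())
    correct = sum(counts[s][s] for s in counts)
    return 100.0 * correct / total

# Invented illustration: 45 presentations per vowel, /e/ often heard as /i/.
trials = [("a", "a")] * 45 + [("o", "o")] * 40 + [("o", "u")] * 5 \
       + [("u", "u")] * 45 + [("e", "e")] * 20 + [("e", "i")] * 25 \
       + [("i", "i")] * 45
cm = confusion_matrix(trials)
print(round(percent_correct(cm)))   # 87 (195 of 225 correct) for this fake data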


[Figure 2: speech tracking rate (words per minute) versus session number (1-5) for patient no. 2, under DFBs, MFBs and DFBs (C); mean DFBs = 21.4 wpm, mean MFBs = 25.4 wpm, mean DFBs (C) = 22 wpm.]

Fig. 2. Speech tracking rates in words per minute, for five sessions, for patient no. 2 using the speech processor alone. The tests were conducted using the Default Frequency Boundaries, DFBs, the Modified Frequency Boundaries, MFBs, and the Default Frequency Boundaries in the counterbalance mode tests, DFBs (C). Also depicted are the mean values across sessions.
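The tracking rates plotted in Figs. 2-4 are simply the number of words correctly repeated divided by the elapsed time, averaged over the sessions of a condition. A minimal sketch with invented session data (chosen only to be on the scale of Fig. 2) is:

def tracking_rate(words_repeated, minutes):
    """Speech tracking rate in words per minute (De Filippo and Scott, 1978)."""
    return words_repeated / minutes

def condition_mean(sessions):
    """sessions: list of (words_repeated, minutes) for one frequency-boundary condition."""
    rates = [tracking_rate(w, m) for w, m in sessions]
    return sum(rates) / len(rates)

# Invented numbers, roughly the scale of patient no. 2 in Fig. 2.
dfb_sessions = [(200, 10), (215, 10), (210, 10), (220, 10), (225, 10)]
mfb_sessions = [(240, 10), (250, 10), (255, 10), (260, 10), (265, 10)]
print(round(condition_mean(dfb_sessions), 1))   # ~21.4 wpm
print(round(condition_mean(mfb_sessions), 1))   # ~25.4 wpm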

5. Results

Table 7 presents the percentages of correct responses for the 225 stimuli in the vowel confusion study, the bisyllabic words and the sentences, using DFBs, MFBs and DFBs (C), for the four patients. Averaged values are also depicted. The results for the bisyllabic words were averaged among the groups of words. Note that quite similar results were obtained for both DFBs and DFBs (C). The 95% confidence limits for the percentage of correct responses in the vowel, bisyllabic word and sentence tests were calculated for both the DFBs and MFBs distributions. The percentage of correct scores for the 225 stimuli is significantly greater than the chance score (p < 0.001), assuming a binomial distribution of scores. The binomial chance scores are 20% for vowels, 6.7% for bisyllabic words and 20% for sentences.

A separate analysis of variance for each of the above tests was carried out with patients and frequency boundaries as factors, using an angular transformation of the percentages. The patients' effect was significant (at the 5% level) in the bisyllabic word and sentence tests, but not significant in the vowel tests. The frequency boundary effect was significant in all the tests: for vowels, F(1, 3) = 95.35, p < 0.001; for bisyllabic words, F(1, 3) = 26.29, p < 0.014; and for sentences, F(1, 2) = 22.04, p < 0.04.

[Figure 3: speech tracking rate (words per minute) versus session number (1-5) for patient no. 3, under DFBs, MFBs and DFBs (C); mean DFBs = 22 wpm, mean MFBs = 28.5 wpm, mean DFBs (C) = 23 wpm.]

Fig. 3. Speech tracking rates in words per minute, for five sessions, for patient no. 3 using the speech processor alone. The tests were conducted using the Default Frequency Boundaries, DFBs, the Modified Frequency Boundaries, MFBs, and the Default Frequency Boundaries in the counterbalance mode tests, DFBs (C). Also depicted are the mean values across sessions.


[Figure 4: speech tracking rate (words per minute) versus session number (1-5) for patient no. 4, under DFBs, MFBs and DFBs (C); mean DFBs = 12.6 wpm, mean MFBs = 15.8 wpm, mean DFBs (C) = 12 wpm.]

Fig. 4. Speech tracking rates in words per minute, for five sessions, for patient no. 4 using the speech processor alone. The tests were conducted using the Default Frequency Boundaries, DFBs, the Modified Frequency Boundaries, MFBs, and the Default Frequency Boundaries in the counterbalance mode tests, DFBs (C). Also depicted are the mean values across sessions.

Since the frequency boundary effect has two levels, DFBs and MFBs, the percentage of correct responses for MFBs is noticeably greater than for DFBs in all the tests.

Regarding the open-set (sentence) tests, the results show that the patients were able to partially understand connected speech without lipreading under both conditions. Figs. 2-4 depict the speech tracking rates in words per minute for patients no. 2, no. 3 and no. 4, respectively, under DFBs, MFBs and DFBs (C). Mean values are also noted in the figures. Note that the DFBs (C) results were quite similar to the DFBs results. An analysis of variance was carried out on the data, with patients and frequency boundaries, DFBs and MFBs, as factors. Results of different sessions were used as repeated measures. The frequency boundary effect was significant, with F(1, 2) = 154.1, p < 0.05, showing noticeably higher mean rates with MFBs than with DFBs. The main effect of patients was significant, with F(2, 2) = 50.2, p < 0.05, because patient no. 4 has only fair communication abilities. The significant session × patient interaction (F(8, 8) = 4.6, p < 0.05) showed that a learning effect occurred as the testing progressed.
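For reference, the chance levels quoted above and the bracketed ranges in Table 7 follow from treating each score as a binomial proportion over n = 225 presentations; the sketch below uses the normal approximation, which appears to reproduce the published intervals, although the authors' exact formula is not stated.

import math

def chance_level(set_size):
    """Guessing probability for a closed set, in percent."""
    return 100.0 / set_size

def binomial_ci(percent_correct, n=225, z=1.96):
    """Normal-approximation 95% confidence interval for a binomial proportion."""
    p = percent_correct / 100.0
    half = z * math.sqrt(p * (1.0 - p) / n)
    return 100.0 * (p - half), 100.0 * (p + half)

print(chance_level(5), round(chance_level(15), 1))   # 20.0 and 6.7 percent
lo, hi = binomial_ci(61)                             # patient no. 1, vowels, DFBs
print(round(lo, 1), round(hi, 1))                    # close to the 54.6-67.4 in Table 7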

6. Discussion and conclusions

An analysis of the patients' results regarding vowels shows that, using MFBs, the perception of /o/ and /e/ by patient no. 1 (see Table 6) partially improved compared to the scores obtained using the DFBs. For the vowel /a/, good scores were obtained under both conditions, while /u/ and /i/ presented quite the same pattern of confusion for both frequency distributions. Patient no. 1, as described above, uses the F0/F2 strategy. Considering that place of articulation (front, back) features depend on the F2-F1 relation, we assume that he is probably unable to distinguish the place features for vowels, though he apparently perceives spectral changes related to F2. At present, he is able to only partially discriminate vowels using the speech processor alone (i.e., without lipreading).

Patient no. 2 improved his discrimination of the vowels /o/, /e/ and /i/ with the MFBs (Table 6), although no changes in the perception of /u/ were noted. The vowel /a/ was correctly perceived in both frequency boundary distributions. The results of the vowel tests for patient no. 3 indicate that most of the improvements obtained using MFBs concern the vowel /e/; the vowels /o/ and /i/ show some improvement, while for /a/ and /u/ the score is the same as with DFBs. Patient no. 4 has poor vowel discrimination with either DFBs or MFBs. Using MFBs, she obtains somewhat better rates for the vowels /a/ and /u/ than using DFBs; /o/ is better perceived with DFBs; /e/ is still not well perceived with the MFBs, although it is now confused only with /i/; better results are noted for /i/.

A general comparison of vowel results shows that scores are better for /o/, /e/ and /i/ using the MFBs; indeed, most of the changes in the frequency boundaries were meant to improve the perception of these vowels. The vowel /a/ is well perceived with both frequency distributions by all the subjects except patient no. 4; /u/ is also relatively well understood with both frequency distributions. In the bisyllabic word tests all patients scored better using the MFBs distribution than when using the DFBs (see Table 7).


Patient no. 1 did not undergo the closed set sentence tests. Patients no. 2 and no. 3 scored better in the closed set sentence tests in the MFBs distribution than in the DFBs. Patient no. 4 also showed the same tendency in the closed set sentence tests, though her results were poor compared to those of the other patients. It is plausible that her tinnitus also caused her difficulties in speech discrimination. In the analysis of the open set sentences, higher scores are observed in the MFBs condition for the three patients (see Figs. 2 to 4). They were able to partially understand fluent speech with the speech processor alone using both frequency distributions, although the results were better for the MFBs. Training effects are also noted in both conditions. All the patients except patient no. 1 reported better intelligibility of fluent speech (whatever the speaker's gender) ⁵ in daily use, using the speech processor alone with the MFBs condition.

Table 7 shows some inter-test differences; namely, the averages over all patients improved most with the MFBs in the vowel identification tests; the average score of the sentence comprehension tests was second in line, while the bisyllabic word tests produced the lowest average improvement. The average improvement of patient performance was about 20% in the vowel tests, 14% in the sentence tests, and 12% in the bisyllabic words. Speech tracking rates averaged among patients show a 21% improvement in the MFBs distribution scores in comparison with the DFBs distribution scores. Thus we see that although the acoustic analysis of the vowels was performed in a static context, i.e., without the transition components which exist in natural speech, the ensuing MFBs also affected the sentence recognition tests, yielding improved scores under this condition. The lower scores obtained in the counterbalance tests (Table 7) validate this finding: better

⁵ Although the tests were conducted by one female speaker, the patients did not mention any difficulties in the comprehension of either male or female voices in their reports of daily experience.


results obtained in the MFBs condition are not only due to training effects but mainly to the fact that this condition is better for the patients.

The speech processor of the N-22 estimates the formant values by taking the mid-frequency spectral peak in predetermined ranges of frequencies by means of band-pass filters. It has been shown (Blamey et al., 1987b) that there is a very good correlation between the F1 frequency estimated using LPC analysis and that obtained using the N-22 speech processor; the correlation for F2 (in English) is, however, not satisfactory: the speech processor estimates a lower value than the LPC. According to Blamey et al. (1987b), this is a deliberate design feature of the speech processor. The F2 values of the Hebrew vowels /a, o, u/ are in the low-frequency skirt of the filter used in the speech processor to derive the F2 values. We might expect that the "real value" of the F2 of these vowels may deviate from that obtained using the speech processor. The "problematic" vowels in this study were, however, /e, i/, as well as /o/. We attribute this discrepancy between expectations and results to the patterns of these Hebrew vowels rather than to the physical characteristics of the N-22 filters.

We may conclude that the MFBs improved vowel perception but did not completely solve the problem. One reason is that the standard deviation of formant values is much greater in fluent speech than in isolated presentations. Speech comprehension depends, among other things, on the subjects' pathologies, duration of deafness and individual speech processing abilities. Clinical studies have shown that some patients implanted with the N-22 system were able to understand a limited number of words, sentences and some connected speech using the speech processor alone (Dowell et al., 1985). Speech tracking rates depend on the lexical complexity of the selected material, on the reading rate and segment length, among other features. In our study, all the patients were exposed to the same speech material presented at the same reading rates, and they had the same training experience at the time of the tests. But in spite of their individual characteristics, we see that under


the MFBs condition the subjects' speech comprehension was better than with the DFBs. As the tests were performed in a quiet room using the speech processor of each patient and the head-set microphone, it may be possible to infer that in a noiseless environment these patients could understand some connected speech uttered at a lower rate than normal. Results are still far from those obtained with normal-hearing people who, according to De Filippo and Scott (1978), reach up to 113 words per minute. The fact that the patients are able to understand unknown speech material without lipreading could indicate that the quality of the perceived speech sounds was partly but sufficiently similar to that perceived by normal-hearing subjects. This seems to be the case for patients no. 2 and no. 3, since they are able to communicate over the telephone.

Under the DFBs condition, the patients reported that comprehension of Hebrew was somewhat better than that of the other languages they used at home, namely English and German. After the MFBs were established, they reported a much better comprehension of Hebrew, so that the difference between this language and the others was more marked. For example, patient no. 3, who is bilingual (Hebrew-English), reported having worse speech understanding of English when using MFBs than when using DFBs. This could be related to the fact that the vowel systems of English (Peterson and Barney, 1952), German (Malmberg, 1963; Iivonen, 1987), French (Malmberg, 1963), Swedish (Fant, 1960), etc., comprise more vowels than Hebrew, often with very close or overlapping formant values. The formant channel allocation for the corresponding active electrodes, for speakers of these languages who are implanted with the N-22 system, should therefore also be overlapping. The associated perception of the vowels is accordingly expected to be more confused in such languages than in Hebrew, or in Spanish (Quilis and Esgueva, 1983), or Japanese, which are similar to Hebrew in their simple set of five vowels.

We should be cautious about generalizing the results, since there were only a few patients, their pathologies and histories were different, and each

of them was implanted with a different number of active electrodes. However, it is clear that every one of the subjects gained some benefit in Hebrew when using the device adapted to the Hebrew vowels' formant patterns, not only for isolated vowels but also in fluent speech. A similar procedure of MFBs could be applied to other languages, especially those with few vowels. It would be interesting to study the effects of such adaptation on the comprehension of fluent speech.

Acknowledgments

We thank the Technion VPR Albert Goodstein Research Fund and VPR Grant No. 200082 for partial support of this work. We also thank CONICET (National Council of Scientific and Technological Research), Buenos Aires, Argentina, for partial support of the work. Our gratitude is also extended to the Medical Electronics Laboratory at the Technion for their kindness, technical help and the permission to use their facilities throughout our study. Likewise, we offer special thanks and gratitude to the patients for their patience and collaboration.

References

E. Abberton, A.J. Fourcin, S. Rosen, J.R. Walliker, D.M. Howard, B.C.J. Moore, E.E. Douek and S. Frampton (1985), "Speech perceptual and productive rehabilitation in electro-cochlear stimulation", in Cochlear Implants, ed. by R.A. Schindler and M.M. Merzenich (Raven Press, New York), pp. 527-537.
E. Agelfors and A. Risberg (1989), "Speech feature perception by patients using a single-channel Vienna 3M extracochlear implant", QPSR Issue No. 1, Speech Transmission Lab., KTH (Sweden), pp. 145-149.
L. Aronson, J. Rosenhouse, G. Rosenhouse and L. Podoshin (1993), "An analysis of modern Hebrew vowels and voiced consonants", submitted for publication.
Audiologist's Handbook (1989), Nucleus 22-Channel Cochlear Implant System, No. 2008, Issue 4, Cochlear Corporation Ltd.
P.J. Blamey, R.C. Dowell, A.M. Brown and G.M. Clark (1987a), "Vowel and consonant recognition of cochlear implant patients using a formant-estimating speech processor", J. Acoust. Soc. Amer., Vol. 82, pp. 48-57.
P.J. Blamey, P.M. Seligman, R.C. Dowell and G.M. Clark (1987b), "Acoustic parameters measured by a formant-based speech processor for a multiple-channel cochlear implant", J. Acoust. Soc. Amer., Vol. 82, pp. 38-47.
A. Boothroyd (1987), "Perception of speech pattern contrasts via cochlear implants and limited hearing", Ann. Otol. Rhinol. Laryngol. Suppl., Vol. 96, pp. 58-64.
G.M. Clark, Y.C. Tong and R.C. Dowell (1984), "Comparison of two cochlear implant speech processing strategies", Ann. Otol. Rhinol. Laryngol., Vol. 93, pp. 127-131.
C.L. De Filippo and B.L. Scott (1978), "A method for training and evaluating the reception of ongoing speech", J. Acoust. Soc. Amer., Vol. 63, pp. 1186-1192.
M.F. Dorman, M.T. Hannley, G.A. McCandless and L.M. Smith (1988), "Auditory/phonetic categorization with the Symbion multichannel cochlear prosthesis", J. Acoust. Soc. Amer., Vol. 84, pp. 501-510.
M.F. Dorman, S. Soli, K. Dankowski, L.M. Smith, G.A. McCandless and J. Parkin (1990), "Acoustic cues for consonant identification by patients who use the Ineraid cochlear implant", J. Acoust. Soc. Amer., Vol. 88, pp. 2074-2079.
R.C. Dowell, L.F.A. Martin, G.M. Clark and A.M. Brown (1985), "Results of a preliminary clinical trial on a multiple-channel cochlear prosthesis", Ann. Otol. Rhinol. Laryngol., Vol. 94, pp. 244-250.
R.C. Dowell, D.J. Mecklenburg and G.M. Clark (1986), "Speech recognition for 40 patients receiving multi-channel cochlear implants", Arch. Otolaryngol. Head and Neck Surgery, Vol. 112, pp. 1054-1059.
D.K. Eddington (1983), "Speech recognition in deaf subjects with multi-channel intracochlear electrodes", in Cochlear Prostheses: An International Symposium, ed. by C.W. Parkins and S.W. Anderson, Ann. New York Acad. Sci., Vol. 405, pp. 241-258.
E.F. Evans (1984), "How to provide speech through an implant device: Dimension of the problem: An overview", in Cochlear Implants, ed. by R.A. Schindler and M.M. Merzenich (Raven Press, New York), pp. 167-183.
G. Fant (1960), Acoustic Theory of Speech Production (Mouton, The Hague).
I.J. Hochmair-Desoyer, E.S. Hochmair, K. Burian and H.K. Stiglbrunner (1983), "Percepts from the Vienna cochlear prosthesis", in Cochlear Prostheses: An International Symposium, ed. by C.W. Parkins and S.W. Anderson, Ann. New York Acad. Sci., Vol. 405, pp. 295-306.
I.J. Hochmair-Desoyer, E.S. Hochmair and H.K. Stiglbrunner (1985), "Psychoacoustic temporal processing and speech understanding in cochlear implant patients", in Cochlear Implants, ed. by R.A. Schindler and M.M. Merzenich (Raven Press, New York), pp. 291-304.
A. Iivonen (1987), "A set of German stressed monophthongs analyzed by RTA, FFT and LPC", in In Honour of Ilse Lehiste, ed. by R. Channon and L. Shockey (Dordrecht, The Netherlands/Providence, USA), pp. 125-138.
A.M. Liberman and I.G. Mattingly (1985), "The motor theory of speech perception revised", Haskins Laboratories: Status Report on Speech Research, SR-82/83, pp. 63-93.
B. Malmberg (1963), Structural Linguistics and Human Communication (Springer, Berlin).
A.M.B. Manrique and M.I. Massone (1982), "Acoustic analysis and perception of Spanish fricative consonants", J. Acoust. Soc. Amer., Vol. 62, pp. 1145-1153.
D.J. Mecklenburg, R.C. Dowell and V.W. Jenison (1987), Nucleus 22 Channel Cochlear Implant System, Rehabilitation Manual, No. 3008, Issue 1, Cochlear Corporation Ltd.
E. Owens, D.K. Kessler, C.C. Telleen and E.D. Schubert (1981), "The minimal auditory capabilities (MAC) battery", Hear. Aid J., Vol. 34, pp. 9-34.
T. Parkin, W. Boggers and B. Dinner (1988), "Multichannel cochlear implantation: Utah design", Laryngoscope, Vol. 98, pp. 262-265.
G. Peterson and H. Barney (1952), "Control methods used in a study of vowels", J. Acoust. Soc. Amer., Vol. 24, pp. 175-184.
A. Quilis and M. Esgueva (1983), "Realización de los fonemas vocálicos españoles en posición fonética normal", in Estudios de fonética, ed. by M. Esgueva and Cantarero, Collectanea Phonetica Vol. VII (CSIC, Madrid), pp. 137-252.
E.D. Schubert (1985), "Some limitations on speech coding for implants", in Cochlear Implants, ed. by R.A. Schindler and M.M. Merzenich (Raven Press, New York), pp. 269-276.
F.B. Simmons, L.J. Dent and D.V. Compernolle (1986), "Comparison of different speech processing strategies on patients receiving the same implant", Ann. Otol. Rhinol. Laryngol., Vol. 95, pp. 71-75.
M.W. Skinner, L.K. Holden, T.A. Holden, R.C. Dowell, P.M. Seligman, J.A. Brimacombe and A.L. Beiter (1991), "Performance of postlinguistically deaf adults with the Wearable Speech Processor (WSP III) and Mini Speech Processor (MSP) of the Nucleus Multi-electrode cochlear implant", Ear and Hearing, Vol. 12, pp. 3-22.
Q. Summerfield (1985), "Speech-processing alternatives for electrical auditory stimulation", in Cochlear Implants, ed. by R.A. Schindler and M.M. Merzenich (Raven Press, New York), pp. 417-420.
Y.C. Tong, P.J. Blamey, R.C. Dowell and G.M. Clark (1983), "Psychophysical studies evaluating the feasibility of a speech coding strategy for a multiple-channel cochlear implant", J. Acoust. Soc. Amer., Vol. 74, pp. 73-80.
R.S. Tyler, J.P. Preece and M.W. Lowden (1983), The Iowa Cochlear Implant Tests, University of Iowa, Iowa City.
N. Tye-Murray and R. Tyler (1989), "Auditory consonant and word recognition skills of cochlear implant users", Ear and Hearing, Vol. 10, pp. 292-298.
M.W. White (1983), "Formant frequency discrimination and recognition in subjects implanted with intracochlear stimulating electrodes", in Cochlear Prostheses: An International Symposium, ed. by C.W. Parkins and S.W. Anderson, Ann. New York Acad. Sci., Vol. 405, pp. 348-350.
M.W. White (1985), "Speech and stimulus processing strategies for cochlear prostheses", in Cochlear Implants, ed. by R.A. Schindler and M.M. Merzenich (Raven Press, New York), pp. 243-267.