BRAIN AND LANGUAGE 18, 212-223 (1983)

Deep Dysphasia: An Analog of Deep Dyslexia in the Auditory Modality

F. MICHEL
Laboratoire de Neuropsychologie Expérimentale, INSERM-U 94, 16, avenue du Doyen Lépine, 69500 Lyon-Bron, France

AND

E. ANDREEWSKY
INSERM-U 84, Hôpital de la Salpêtrière, 75013 Paris, France
A right-handed patient with two left hemisphere lesions, a small one in the prefrontal lobe and a larger one in the temporal lobe, presents an unusual syndrome: a massive deficit in oral language (expression and comprehension) contrasting with fairly good preservation of written language (expression and comprehension). The processing of isolated words and sentences has been extensively tested with repetition and dictation tasks. The patient performs rather well with nouns, verbs, and adjectives, poorly with adverbs and function words, and completely fails with nonsense words. A remarkable feature of his repetition is the frequency of semantic paraphasias. Thus, this patient exhibits behavior rather similar to that seen in deep dyslexia, hence the possible label "deep dysphasia." The paper presents a "preunderstanding" hypothesis to account for such behavior.
We thank F. Newcombe, who encouraged us to study the case and provided useful suggestions. Address reprint requests to F. Michel.

Mechanisms of language comprehension are difficult to study in normal subjects because in an experimental paradigm we access input and output data, but not internal processing. Internal mechanisms can only be hypothesized within theoretical models, which may have the power to explain as well as to predict. One way of developing models is to infer possible processes underlying the output data. This is difficult in general, since each process of human comprehension is embedded in the whole of the system and cannot be isolated.
However, some specific outputs of those processes are open to study in aphasic subjects, because a lesion may operate a selective destruction: one process being deficient, others that may be hidden in the behavior of the fully competent subject come to be revealed. As an example, the extreme difficulty deep dyslexics have (Coltheart, 1980) in using grapheme-to-phoneme conversion has made it possible to witness a visual-semantic interface that is hardly accessible to study in normal subjects. Most investigations have been restricted to the patients' skills in reading aloud single items. As simple as such a reading task may appear, it nevertheless implies some understanding mechanisms, as testified, for instance, by the occurrence of semantic paralexias or the inability of the patient to read aloud pronounceable nonwords.

Although it has not been observed frequently, a syndrome analogous to deep dyslexia in the auditory modality does exist (Cruze's case and the present case in Morton (1980); Goldblum's two cases (1981); Poncet, personal communication). Such a syndrome could be called "deep dysphasia." In suggesting this label, we do not mean to describe a new type of aphasia; rather, we use it as a tag to epitomize a cluster of repetition disturbances that parallels the reading disturbances observed in deep dyslexia.

The present paper is a theoretically oriented presentation of a case already published (Michel, 1979; Lavorel, 1980). Our theoretical approach is based on the following hypotheses about the mechanism of language comprehension (Deloche and Andreewsky, 1982):

1. A "preunderstanding" process, retrieving memorized information, is a prerequisite for the understanding process proper.
2. If comprehension is restricted to this preliminary phase, it will reveal some properties of this retrieval. An Artificial Intelligence (A.I.) model of such restricted understanding does simulate the behavior of "deep dysphasics" or deep dyslexics.

I. THE PREUNDERSTANDING HYPOTHESIS
The traditional understanding of language and meaning is based on a simple correspondence between the meaning of words and the meaning of sentences. It can be summarized as follows:

1. Sentences make statements about the world.
2. The content words of a sentence denote objects, their properties, or relationships.
3. What a sentence says about the world is a function of the words it contains and the structures into which they are combined.

But this simple correspondence does not work: there is no simple lexical basis for interpreting a sentence, even one as ordinary as "Can you give me the salt?" An action (passing the salt), rather than an answer such as "yes" or "no," is more in keeping with the speaker's expectation. Such an action implies an interpretation, grounded on both lexical knowledge (salt is required, not pepper) and social knowledge.
More generally, "in any situation when we are interpreting language, we begin with a system of understanding (grounded on stored world knowledge) that provides a basis upon which to generate an interpretation. This preunderstanding in turn arises and evolves through interpretation." This circle, in which understanding is necessary for interpretation, which in turn creates understanding, is called the "hermeneutic circle" (Winograd, 1980).

The following examples illustrate this conceptual circle and clarify the argument about the nonsimple correspondence between word and sentence meanings:

(1) François studies English. Here "studies English" means something like "going to school" or "watching some audiovisual device to learn how to master this foreign language."
(2) Chomsky studies English. What we know about Chomsky's scientific activities leads us to interpret "studies English" as "working on the properties of (Chomsky's) mother tongue."
(3) Reagan studies English. Here "English" is opposed to "American."

Each interpretation of "studies English" is a function of the sentence the two words belong to, hence the circle. To escape this circle, world knowledge is obviously required (here, about François's, Chomsky's, or Reagan's linguistic familiarity with and interest in English). How can we retrieve from our memories those elements of knowledge which are useful for the understanding of a given sentence? How can we access information which is referred to by a sentence before we completely understand the sentence?

Most A.I. systems for natural language processing involve world knowledge in the handling of sentences. This knowledge is represented, according to each A.I. system, as "frames" (Minsky, 1975), texts and scripts in natural language, databases and procedures, etc. (Winograd, 1980; Schank, 1975; Small, 1980). Selecting its relevant parts for a given sentence provides a "basis upon which to generate an interpretation" precisely defined for each A.I. system and depending on the application (question answering, automatic translation, automatic indexing, automatic information retrieval, speech recognition). The simplest relevant information formally derivable from a sentence is its content words, and matching these content words with key words points to specific stored information. With such formal matching processes, A.I. systems can automatically retrieve relevant knowledge (Andreewsky, 1980); this requires only a formal morphological processing of words.
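As a purely illustrative aside (not taken from any of the A.I. systems cited), the following Python sketch shows the kind of formal key-word matching such a retrieval step relies on; the miniature knowledge base, the function-word list, and the "usual situation" entries are all invented for the example.

    # Minimal sketch of key-word retrieval: stored knowledge is indexed by key
    # words, and a sentence retrieves whatever its content words point to.
    # The knowledge base and word lists below are invented for illustration.

    KNOWLEDGE = {
        frozenset({"doctor", "patient"}): "usual situation: the doctor takes care of the patient",
        frozenset({"river", "bridge"}): "usual situation: the river flows under the bridge",
        frozenset({"salt", "give"}): "usual situation: someone at the table wants the salt passed",
    }

    FUNCTION_WORDS = {"the", "a", "an", "of", "can", "you", "me", "over", "under"}

    def content_words(sentence):
        """Keep only content words; function words carry no key-word value here."""
        words = sentence.lower().replace("?", "").replace(".", "").split()
        return {w for w in words if w not in FUNCTION_WORDS}

    def preretrieve(sentence):
        """Return every stored 'usual situation' whose key words all occur in the sentence."""
        words = content_words(sentence)
        return [info for keys, info in KNOWLEDGE.items() if keys <= words]

    print(preretrieve("The patient takes care of the doctor"))
    print(preretrieve("The doctor takes care of the patient"))
    # Both sentences retrieve the same frame: the matching is driven by content
    # words alone, so word order and syntax play no role at this stage.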
We now comment on the logical requirements of such retrieval, thus introducing its relevance for a natural cognitive model of sentence understanding.

1. The content words of a given sentence must be determined; this requires a syntactic disambiguation ("he will leave a can" versus "he can leave a will").
2. Content words are normalized before lexical matching can take place, just as in a dictionary search bound morphemes are trimmed off to leave only lexical morphemes.
3. The normalized content words are partitioned into semantic classes of equivalence; the same stored information being relevant for two synonyms (or, more generally, for words that are semantically related), these words are equivalent in the logic of the system. A toy version of these three steps is sketched below.
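In the same illustrative spirit as above, the following sketch makes the three requirements concrete; the function-word list, the suffix-stripping rule, and the equivalence classes are simplified stand-ins invented for the example, not the actual machinery of the systems cited.

    # Toy versions of the three logical requirements of the retrieval step.
    # All tables are invented, minimal stand-ins for real lexical resources.

    FUNCTION_WORDS = {"he", "will", "a", "the", "of"}

    # (1) Content-word selection. A real system needs syntactic disambiguation
    # ("he will leave a can" vs. "he can leave a will"); here we only filter a
    # fixed function-word list, which is exactly where ambiguity would bite.
    def select_content_words(sentence):
        return [w for w in sentence.lower().split() if w not in FUNCTION_WORDS]

    # (2) Normalization: trim bound morphemes, keeping only the lexical morpheme.
    SUFFIXES = ("ing", "ed", "s")
    def normalize(word):
        for suffix in SUFFIXES:
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[: -len(suffix)]
        return word

    # (3) Semantic classes of equivalence: synonyms map to one class representative.
    EQUIVALENCE_CLASS = {"tramp": "beggar", "beggar": "beggar",
                         "physician": "doctor", "doctor": "doctor"}
    def class_of(word):
        return EQUIVALENCE_CLASS.get(word, word)

    sentence = "He will leave the gardening tools"
    print([class_of(normalize(w)) for w in select_content_words(sentence)])
    # -> ['leave', 'garden', 'tool']  (content words, stripped and classed)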
II. THE CLINICAL CASE
The patient is 45 years old, right-handed, and college educated. He is a former head steward for an ocean liner company. After falling down a staircase, he was found to be severely aphasic, although he did not show other neurological disturbances. It is noteworthy that the patient was not comatose after this closed head trauma, and that the clinical picture he exhibited afterward did not resemble the sequelae of the global deterioration often seen in such cases. The CT scan revealed two left hemisphere lesions, a small one in the anterior part of the frontal lobe and another in the middle part of the temporal lobe.

At first, oral comprehension was extremely limited and verbal expression was restricted to a fluent but unintelligible jargon. In contrast, silent reading seemed to be well preserved and written expression was fluent and informative, although often incoherent or at least grammatically incorrect. The linguistic deficit improved somewhat over the first 2 years after the accident, but during the next 4 years it remained stable. The patient has always been cooperative, and most of the observations have been cross-verified by multiple examinations.

Apart from his linguistic deficit, the patient does not present any motor or sensory deficit; he has a normal visual field and an average audiogram. His performance IQ is 107. His memory is excellent. His psychiatric status is evidently sound. He nevertheless suffers from epilepsy (two grand mal attacks in the last 4 years). He is retired and is currently writing his biography as a sort of occupational therapy.

The patient, who appears to like talking as much as he did before his accident, now has slow, cautious speech. He often speaks syllable by syllable. While planning his utterances, he does his utmost to correct his numerous phonemic paraphasias and to approximate a correct phonological sequence. Except for ready-made sentences, he is not capable of uttering long sentences and often relies on mime and gesture to be understood.
His comprehension of oral language is limited (this will be detailed later on). An impairment of phonemic discrimination may be ruled out, as the patient correctly performs the task of selecting a stimulus uttered by the examiner from a written list of four words, the target item differing from the others by only one phoneme (e.g., roule, moule, boule, foule). Repeated testing with dichotic presentation of word pairs has shown from the beginning a right-ear extinction which remains identical after 8 years of follow-up. Auditory evoked potentials, recorded at three different sessions, appear normal and symmetrical.

Contrasting with the gross deficit in oral language, there is an outstanding preservation of written language. Reading aloud is very slow and often phonemically inaccurate, but the patient will correctly spell a word that he has difficulty uttering. Silent reading is, by contrast, unimpaired. When offered four words, even rare ones, related in meaning to a stimulus word, he easily selects the semantically closest. He is quick at pointing out written words from specific categories, such as adjectives, foreign words, or nonsense words. He is quite good at translating written English words into French. He performs well with Token-Test-style written orders, e.g., "Show the line of tokens where the red is between the green and the blue," or "Show the line where there are neither green nor blue chips." When given meaningful as well as absurd sentences in a list, he easily identifies absurd ones like "the patient heals the doctor." To illustrate his excellent understanding of written sentences, one may cite the following example: after reading "the pedestrian crushed the truck," he wrote in the margin, "King-Kong"!

Tachistoscopic reading is equally normal in the left and right parafoveal areas, as well as in the foveal area. The only errors the patient makes occur with very brief exposure durations and are visual (e.g., "jeune" for "jaune"), but these errors are no more frequent than for a normal subject under our experimental conditions. No semantic paralexias were observed, even with very fast exposures. With simultaneous presentation to both visual fields, left visual field items are often neglected.

From the onset of his aphasia, the patient has been writing fast with above-average penmanship, finding his words easily, even when they were not frequent ones. Spelling was normal. However, the grammar was most often incorrect and he wrote sentences like "A road in which it passed in a bridge." Grammar has since improved considerably, although there are still errors with passive forms, prepositions, subordinates, and connectors. He will now write sentences like "Children give a hot bath only the evenings" or "Both have tied up by a rope." Thus his written production is not agrammatic, but rather dyssyntactic or paragrammatic (see Lavorel, 1980).
TABLE I
INFLUENCE OF WORD CATEGORIES ON REPETITION (PERCENTAGES)

                        Nouns      Abstract    Adjectives   Verbs      Function    Nonsense
                        (N = 180)  words       (N = 50)     (N = 50)   words       words
                                   (N = 50)                            (N = 50)    (N = 50)
Exact repetition           60          6           54           20         6            0
Semantic paraphasias       24         25           24           27         6            0
No response                12         50           22           33        68           90
Random response             4         19            0           20        20           10

Note. Percentages of different types of repetition of single words during various testing sessions. "Exact" repetition includes cases where the repetition is phonemically inaccurate but similar to the stimulus. Random responses are words which have no phonemic or semantic relation to the stimulus words.
When he writes, his lexicon is immediately available and rather rich. Written naming of pictures is easily achieved, even when the represented objects or scenes are closely related semantically (e.g., "coach, waggon, cart, barrow"). His short-term memory for written words is excellent (7 words out of 10 after 1 min). Thus, it is easier and more reliable to communicate with the patient through written language.

We do not claim that our patient is aphasic for oral language and normal for written language; rather, we are simply stating that reading and writing are a much better channel for communication than listening and speaking. This rare preservation of rather good written language makes it possible to assess his linguistic competence level. Thus the failures that we will describe in his comprehension of oral speech will not be explained in terms of a deficient linguistic ability, but rather as a problem specific to the oral modality. Indeed, for each task studied (such as repetition, syntactic disambiguation, and lexical decision), his reading or his writing provided us with excellent means of determining the nature and the severity of errors in a differential approach.

III. THE PREUNDERSTANDING SYSTEM AT WORK IN THIS "DEEP DYSPHASIC" PATIENT
While interpreting sentences, the natural cognitive system faces the same retrieval problem as A.I. systems. What would be the behavior of a disrupted understanding system that kept only a "preunderstanding" processor?
1. The knowledge retrieved for each sentence would correspond to "what is likely to occur," i.e., the usual situation referred to by the sentence content words. This is just what our deep dysphasic subject does: "The patient takes care of the doctor" is "preunderstood" as "Doctor . . . take care . . . patient." Like Broca's aphasics (Zurif and Blumstein, 1978), our patient mostly understands "what is likely to occur." In the same way he will accept, but reorder, sentences like "the river flows over the bridge" as "the bridge . . . over the river." But he often fails to repeat or to write sentences in which the grammatical structure critically determines a logical relationship between content words. He even fails in the simplest multiple-choice tasks, e.g., Part I of the Token Test, where he cannot rely on any pragmatic knowledge.

2. Content words are the only input data for the "preunderstanding" process. Our patient has relatively few problems repeating concrete words (apart from the semantic substitutions that we will study later). For example, he easily repeats, although with phonemic errors, words like tree, billet (ticket), pencil, fabric, write . . . while most of the time he is not able to repeat abstract words of the same frequency in French, such as abuse, union, hazard, usage, crisis . . . He is usually unable to repeat syllables and letter names, and fails completely with nonsense words. Although he can repeat single digits, he fails to repeat two-digit numbers (beyond 16) and is completely lost with three-digit numbers. Auditory confusions are very scarce, and we have rarely observed auditory phonemic errors. We did not observe any false recognition of nonsense or foreign words. It seems that the patient either gets the meaning immediately or fails to respond at all. The grammatical category of the stimulus word greatly influences the success the patient has with repetition. The patient performs best with nouns and adjectives, and with verbs when given the infinitive form. He is rarely able to repeat pronouns and adverbs and gets even worse with conjunctions and prepositions (see Table I).

3. The roots of verbs (such as "write" for "writing") or of nouns (such as "child" for "children") would be the only forms that the system considers, after the normalization process. Our patient does not repeat grammatical morphemes or suffixes (i.e., "eaten" becomes "to eat"). He also makes derivational errors and utters nouns for verbs: "gardening" and "gardener" become "garden" (in French, "jardin" [garden] is only a noun).
4. Just as an A.I. retrieval system processes words using semantic classes of equivalence, patients with deep dyslexia (Coltheart, Patterson, & Marshall, 1980) come out with frequent semantic paralexias, where each item read aloud has a semantic relationship with the target word (both being in the same class of equivalence). A similar phenomenon is observed in our "deep dysphasic" patient. One of the most characteristic features of his repetition is the large number of semantic "errors" made when he is given single words to repeat. Each time we tested repetition in our patient, we emphasized that his task was simply to repeat and not to indulge in giving free associations. We tested this patient time and again: semantic paraphasias are not an occasional, but rather a stable, characteristic of his repetition (see Appendix I). A complete list of more than 500 words repeated at different sessions is available.

Oral stimulus        Repetition
balloon              kite
beggar               tramp
kernel               shell
roast                chicken
red                  yellow
independence         elections
to run               fast
5 to 7               from time to time
Brejnev              Kissinger . . . ?
Beethoven            they are many . . . Austrian? . . . Bach? . . .
1789                 1870
Semantic substitutions are most frequent with color names, days of the week, numbers, and proper names. We have frequently observed that when the patient tries to repeat the stimulus word exactly, he is so burdened by phonemic inaccuracies that he prefers to write the word. By contrast, semantic paraphasias are usually given at once and, interestingly enough, in a correct phonemic form. It is tempting to say that in the absence of preunderstanding, when the patient tries to "copy" a phonic form, paraphasic jargon is expected. The patient may be vaguely aware that he is not repeating exactly. For example:

Observer (O.): planet
Patient (P.): moon
O.: Is it right, what I said?
P.: No, I don't think so.
O.: I said planet.
P.: Mars . . . no . . . it is not that . . .

But often he is convinced that he has just repeated the target word. It is noteworthy that he does not respond with antonyms.
His semantic "errors" are sometimes difficult to decipher, because he expresses personal associations or connotations: "blank" (in French, "trou") is repeated as "gardenal," by which the patient reminded his therapist that he is taking anticonvulsant drugs to avoid loss of consciousness (a blank) during seizures.

It has already been mentioned that phonemic paraphasias often distort repetitions. We therefore took advantage of the patient's writing ability to study his performance in dictation. For isolated words, semantic errors are frequent. For example, when orally asked to write the word "brain," the patient wrote "heart, liver, lungs . . .," while making it clear by mime, gesture, and the three dots after the written words that he was not sure of his answer. Then, when asked to point precisely at the body part involved, he made a vague movement with his hand over his body.

5. An A.I. information retrieval system must be able to perform syntactic parsing of given sentences to select their content words. Andreewsky and Seron (1975) showed that a patient, obviously unable to understand syntactical cues in sentences or to use function words in either oral or written language, nevertheless performed an implicit syntactic parsing (as shown by the oral reading of syntactically ambiguous words in sentences). Our patient also performs implicit syntactic disambiguation when repeating sentences (see Appendix II), as in the following pair of homophonic openings:

O. Danton fut (dɑ̃tɔ̃fy) un grand homme (Danton was a great man).
P. R. . . (the patient's family name) était formidable . . . mon grand-père . . . 1789, 1870, 1890? (R. . . was terrific . . . my grandfather . . . 1789, 1870, 1890?)
O. Dans ton fût (dɑ̃tɔ̃fy), il y a du bon vin (In your vat, there is some good wine).
P. Boire un coup (Have a drink).

To summarize, the behavior of our patient corresponds to five predictions that could be made on the basis of a model of the understanding mechanism restricted to its preunderstanding process.
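Before turning to the conclusion, these five observations can be summarized in a toy simulation of single-word repetition by a system reduced to its preunderstanding component; the miniature lexicon, the semantic classes, the function-word list, and the nonword are invented for illustration and are not a model fitted to the patient's data.

    # Toy simulation of single-word repetition by an understanding system
    # reduced to its "preunderstanding" component: nonwords and function words
    # get no response, and content words come back as some member of their
    # semantic class, which is what a semantic paraphasia looks like.
    # The miniature lexicon and classes below are invented for illustration.

    import random

    SEMANTIC_CLASSES = {
        "balloon": ["kite", "balloon"],
        "beggar": ["tramp", "beggar"],
        "red": ["yellow", "red", "green"],
        "planet": ["moon", "planet", "Mars"],
    }
    FUNCTION_WORDS = {"of", "the", "because", "although"}

    def repeat(stimulus):
        if stimulus in FUNCTION_WORDS:        # no pragmatic content to retrieve
            return "(no response)"
        if stimulus not in SEMANTIC_CLASSES:  # nonword or unknown form: no lexical entry
            return "(no response)"
        # Preunderstanding retrieves the semantic class, not the phonological
        # form, so any member of the class may be produced.
        return random.choice(SEMANTIC_CLASSES[stimulus])

    for word in ["balloon", "red", "because", "glopin"]:
        print(word, "->", repeat(word))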
CONCLUSION

It is difficult to place this patient's deficits within a classical neuropsychological framework. Despite his massive deficit in spoken language, one cannot speak of a classical Wernicke aphasia when the patient reads and writes so easily. In some respects, his deficit is similar to that encountered in verbal deafness, especially when one considers his remarkable preservation of written language compared to his poor performance in oral language comprehension. But the deficit differs notably from verbal deafness in several respects.
First, the patient expresses himself like a fluent aphasic, without neologisms or jargon any longer, but with numerous phonemic paraphasias. Second, although his oral comprehension is extremely poor, his phonemic discrimination, when listening, is good. Finally, his oral comprehension depends on features such as the grammatical category of the words, whereas in verbal deafness this is not the case.

Labeling the case as conduction aphasia may seem more appropriate. But while the patient shows an expressive disorder that recalls the paraphasic approximations of conduction aphasics, his oral comprehension is too poor for such a classification. Also missing are the writing disorders, naming difficulties (at least in writing), and praxic problems so frequently encountered in conduction aphasia.

From a physiopathological point of view, a parsimonious explanation could be that the patient's immediate memory for acoustic patterns is too limited in duration: because the auditory trace decays fast, only the semantic associations remain available. This problem of very transient memorization is specific to the auditory input channel, since we have observed that the patient has no problem remembering written words. The conservation of meaning in spite of a deficit of short-term memory for speech sounds is the opposite of what is currently observed in some severe cases of echolalia.

Interpreted in the framework of the latest version of the logogen model (Morton and Patterson, 1980), the deficits should mainly result from disruptions of both the (nonlexical) phoneme-to-phoneme route and the connection from auditory input logogens to output logogens. The former disruption accounts for the patient's inability to repeat nonwords, and more generally to grasp phonological features, and the latter for the fact that he never repeats a word without some comprehension. However, the effects of item dimensions such as part of speech are not accounted for by the logogen system. No explanation is suggested for the preserved or impaired skills in handling sentences, such as implicit syntactic parsing or understanding strategies grounded on pragmatic cues related to common-sense knowledge.

The features of our patient's repetition, quite comparable to those of deep dyslexics when they read aloud, are best interpreted as the output of the understanding mechanism restricted to its preunderstanding process. They constitute, in this framework, a meaningful semiological pattern, which could be called "deep dysphasia."
APPENDIX I
Examples of Semantic Paraphasias

Concrete nouns
twin --- baby; pebble --- building; sideboard --- divan; beggar --- tramp; neighbor --- friend; kernel --- peach;
faggot --- tree; sapin (fir) --- tree; sapin (fir) --- oak; eyebrow --- eyelid; heel --- shoe; boss --- director; gallop --- hippism . . . horses; colt --- horse; cow --- goat; biche (doe) --- chick; she-wolf --- fox; costume --- jacket; rake --- tool; volcan (volcano) --- lunar; geranium --- hortensia (hydrangea); faggot --- peasant . . . country . . . tree.

Abstract nouns
departure --- holidays; Monday --- Tuesday; afternoon --- today; color --- flag; fascism --- communism; constitution --- state; project --- next days; February --- July; holidays --- castle.

Adjectives
rapide (fast) --- now; red --- yellow; black --- dress; blond --- nice; hard-working --- good marks.

Verbs
to lay --- to sleep; to boil --- knife; to dirty --- dust; to throw --- driven; to run --- fast.

Proper names
Brejnev --- Russian; Brejnev --- Kissinger; Mitterrand --- Marchais; Juliette --- Véronique; Peter --- Frédéric; Beethoven --- Bach; Chopin --- romantic; Birmania (Burma) --- Africa.

Odds
miaou (meow) --- cat; cock-a-doodle-doo --- hen; 14-18 --- German; 1515 --- 1870.
APPENDIX II
Syntactic Disambiguation

We give, as an example, some of his most recent performances (July 1980) with dictated sentences (originally randomized, but given here in pairs) containing homophonic strings:

O. Tant qu'il y aura des hommes (French title of the movie From Here to Eternity);
P. Film, roman, guerre, Japon, 1944, bateaux américains . . . Pearl Harbor (movie, novel, war, Japan, 1944, American fleet . . . Pearl Harbor);
O. Temps prévu pour demain (Tomorrow's weather forecast);
P. Demain, le temps était bien mauvais (Tomorrow the weather was rather bad).
O. Il faut redescendre du car (You must get off the bus);
P. Vers la destination, vous prenez le bus (Toward the destination, you take the bus).
O. Il s'abrite car il pleut (He takes shelter because it is raining);
P. Tu viens vers un refuge . . . la pluie à torrents (You take refuge . . . pouring rain).
O. Détente internationale (International détente);
P. O.N.U. (U.N.).
O. Des tantes et des oncles (Aunts and uncles);
P. Les frères . . . les cousins (Brothers . . . cousins).
REFERENCES Andreewsky, E., & Seron, X. 1975. Implicit processing of grammatical rules in a classical case of agrammatism. Cortex. 11, 379-390. Andreewsky, E., Deloche, G., & Desi, M. 1980. The procedural brain. An artificial intelligence approach to aphasia-some neurolinguistic issues. In D. A. Lindberg & S. Kaihara (Eds.), Mrdinfo 80, Vol. 2, 1281-1284. Coltheart. M., Patterson, K., & Marshall, J. C. 1980. Deep d~slcxicl. London: Routledge & Kegan. Deloche, G., & Andreewsky, E. 1982. From neuropsychological data to reading mechanisms. Internutionui
Joumul
of Psyc~hology.
17, 259-279.
Goldblum, M. C. 1981. Un equivalent de la dyslexie profonde dans la modalite auditive. In Etrrde.c Neurolinguistiques. Numero special de la Revue Grammatica, VII, I, l57177 (Service des Publications de I’UniversitC de Toulouse-Le Mirail). Lavorel. P. 1980. A propos d’un cas d’aphasie de Wemicke: mise en question de I’opposition paradigme-syntagme. La Linguistiyw. 2, 43-66. Marshall, J. C., & Newcombe, F. 1966. Syntactic and semantic errors in paralexia. Ne,rropsychologia,
4, 169-176.
Marshall, J. C., & Newcombe, F. 1973. Patterns of paralexia: A psycholinguistic approach, Journul
Michel,
oj- Psycholinguistic
Reseurch,
2, 175-199.
F. 1979. Preservation du langage Ccrit malgre un deficit majeur du langage oral. Lyon MPdicul, 241, 141-149. Minsky, M. 1975. A framework for representing knowledge. In R. Schank (Ed.). Throreticul issr4es in nuttrrul lungrtuge processing. Cambridge: MIT Press. Pp. IIX-130. Morton, J. 1980. An analogue of deep dyslexia in the auditory modality. In M. Coltheart, K. Patterson, & J. Marshall (Eds.). Deep dyslexiu. London: Routledge & Kegan. Pp. 189-196. Morton. J.. & Patterson, K. 1980. Little words-No! In M. Coltheart. K. Patterson, & J. Marshall (Eds.). Deep dsslexiu. London: Routledge & Kegan. Pp. 270-285. Schank, R. C. 1975. Using knowledge to understand. In Theor-eticul issrres in nutur.u/ lunguuge processing. Cambridge: MIT Press. Pp. I3 l-135. Small, S. 1980. Word expert pursing: A theory of distributed word-bused nuturu; lunglruge understanding. Ph.D. Thesis. University of Maryland. Winograd, T. 1980. What does it mean to understand language? Cogniti\se Science. 4, 209241. Zurif, E. B., & Blumstein, S. 1978. Language and the brain. In Halle, Bresman. & Miller (Eds.), Linguistic theoc trntl p.s~cho/o,qic~tr/ rettlity. Cambridge: MIT Press. Pp. 227263.