Parallel constraint-based generative theories of language

Ray Jackendoff

R. Jackendoff is at the Volen Center for Complex Systems, Brandeis University, Waltham, MA 02454, USA. tel: +1 781 736 3624; fax: +1 781 736 2398; e-mail: jackendoff@brandeis.edu

A re-evaluation of the goals and techniques of generative grammar since the mid-1960s suggests that its mentalistic/biological program for describing language is still sound and has been borne out by subsequent developments. Likewise, the idea of a generative system of combinatorial rules has led to a tremendous expansion of our understanding of linguistic phenomena. However, certain fundamental features of the versions of generative grammar based on Chomsky's work prevent the theory from making deep liaisons with related fields such as language processing and neuroscience. Perhaps the most prominent of these is the assumption that all creative aspects of language stem from syntactic structure. In this article, I propose a model of generative grammar that generalizes features of several alternative, non-Chomskyan generative frameworks. In this model, language is seen as composed of three independent generative components (phonological, syntactic, and semantic/conceptual structure), whose respective structures are placed in correspondence by 'interface components'. Besides being able to incorporate a host of purely linguistic facts, this view leads to a more direct relationship between the theory of grammar and the theory of lexical and grammatical processing.
Since the mid-1960s, the predominant paradigm for the scientific study of language has been the theory of generative grammar, pioneered by Noam Chomsky. Chomsky's 1965 book Aspects of the Theory of Syntax1 brought generative grammar to the attention of a wide range of disciplines, from cognitive psychology to philosophy to literary criticism; and generative grammar was instrumental in the emergence of cognitive science during the 1970s. Over the succeeding decades, Chomsky's framework has undergone substantial changes, but often to the detriment of communication with
adjoining disciplines. In addition, the emergence of the tools of cognitive neuroscience has made the increasingly technical work in generative grammar seem to many researchers to be less and less relevant to the study of the brain. As we will see, some aspects of Chomsky’s 1965 theory that have not changed over the years are as valid as they were 30 years ago. But in retrospect some of them have perhaps proven to be mistaken; and these are in large part the factors that have driven generative grammar away from its neighboring disciplines. More positively, over the past 20 years
there have developed a number of alternative frameworks for generative grammar with important features in common. This article will briefly describe a generalized version of these frameworks developed in my own work2 that permits better contact with research in psycholinguistics and neuroscience.

The mentalistic/biological program

What set Aspects apart from Chomsky's earlier book, Syntactic Structures3, as well as from most American structural linguistics of the period, was that it situated the study of language firmly in the study of the mind: in Aspects Chomsky explicitly took his linguistic theory to be an explication of the knowledge that makes it possible for language users to produce and understand sentences of their language. Thus language ability must be understood in the context of human abilities in general. This commitment to a mentalistic theory of language has remained in all subsequent versions of generative grammar.

As Chomsky established in Syntactic Structures and subsequent work4, the number of different sentences available to the language user is unlimited, and the range of possibilities cannot be characterized in terms of the transitional probabilities of words in sentences, nor in terms of a finite-state Markov device. Thus linguistic ability must be characterized in terms of a system of combinatorial rules or principles that 'generate' or 'derive' an infinite set of possible sentences from a finite vocabulary; this system is called a 'generative grammar'. The principles of grammar are in general inaccessible to the consciousness of language users, who experience themselves simply as effortlessly speaking and understanding, without asking how they do it; the linguist's formal written grammar is meant to model these unconscious principles as closely as possible. Again, these commitments to the existence of a generative grammar in the mind of the speaker and to the utility of linguists' modeling such a grammar symbolically have remained unchanged over the years.

This is a point where many researchers in cognitive neuroscience part company with generative grammar. Since the mid-1980s, there has been a concerted attack on the idea that mental processes should be characterized in terms of symbolic rules5. The present article is not the place to argue with such a position. Suffice it to say that it is impossible to conduct linguistic research of the usual sort, such as characterizing Icelandic case-marking, affixal structure of Turkish nouns, or alternations in Dutch verb position, without assuming the traditional symbolic format for linguistic theory. There is no viable alternative at present.

But Chomsky went beyond positing a mentally realized generative grammar. Structural linguistics of the 1940s and 1950s was very much preoccupied with the possibility of developing 'discovery procedures' – mechanical, hence 'objective', principles by which a linguist could concoct a correct grammar on the basis of a corpus6,7. Chomsky, however, points out that it is no mean feat for linguists to arrive at any halfway satisfactory grammar, with or without mechanical procedures. On the other hand, he observes that language users have not arrived at their internalized generative grammars by magic: somehow, every normal child manages to acquire language by the age of ten or so, apparently from scratch – and certainly without the benefit of
studying decades of research in linguistic theory. Thus, he claims, children must have a 'discovery procedure' that enables them to do better than linguists. This 'procedure' cannot be learned, as it forms the basis for learning; therefore it must be innate. Chomsky sets as the most important goal of linguistic theory that of understanding the child's predisposition to learn language, which he calls 'Universal Grammar'. While acknowledging that some aspects of language learning might be accounted for in terms of general cognitive capacities – what we might call 'Native Intelligence' – Chomsky contended (and still contends) that there is a human cognitive specialization for language acquisition, and it is this specialization to which the term Universal Grammar has been applied.

It is here, of course, that generative linguistics makes its most intimate contact with biological issues: in what sense can particular principles for learning such as Universal Grammar be coded in the human genome? And it is here that the greatest number of researchers have taken issue with Chomsky's analysis of the situation8,9. Again, this is not the place to discuss these disputes, which have often been more violent and sectarian than the scientific issues have warranted. I suggest that some of the heat of the discussion can be abated by considering the issues of Universal Grammar as matters of degree, in the following way:

• How can any sort of animal behavior (such as sexual selection, child-rearing, understanding and producing facial expressions, conducting exchanges of reciprocal altruism) be teased apart into innate and learned components?
• How can the genome code any such innate component of animal behavior such as to guide brain development appropriately?
• To what extent is the human ability to learn language guided by such an innate component?

Using the term 'Universal Grammar' for this innate component,

• To what extent does language acquisition rely on fairly general 'Native Intelligence'?
• To what extent does it rely on a cognitive specialization, Universal Grammar?
• To what extent is Universal Grammar qualitatively like other cognitive specializations, and to what extent is it qualitatively distinct from all other cognitive specializations in humans and other animals?
• Exactly how rich does Universal Grammar have to be?
Stating the issues this way, one can locate a variety of positions, from Fodor’s extreme innatism10 to Skinner’s unrepentant behaviorism11, along a nuanced spectrum of possibilities. One can also see the contradiction in an argument that denies an innate language-learning capacity to humans on the grounds that it could not be coded in the genome, yet allows that animals might have complex cognitively guided ‘instincts’8,12. Whatever the opinions of those outside linguistics, generative linguists have for three decades found it extremely useful to assume that there is a non-trivial Universal Grammar, and to pursue how this assumption could help
explain both language acquisition and the details of adult languages. Again, this part of the Chomskyan program has remained constant; what has changed are hypotheses about the principles involved in Universal Grammar.

Multiple generative components

Four fundamental and interrelated propositions have remained unchanged in Chomsky's continuing revision of the Aspects version of Universal Grammar13,14 (see Fig. 1). These should look familiar to readers who have had any contact with generative grammar.

(1) The infinite generative capacity of language resides in the syntactic component of the grammar. Phonology (the sound system of language) and semantics (the meaning system) are purely 'interpretive'. That is, syntax creates sentence structure, from which sound and meaning are then 'read off'.
(2) Lexical items (words) get into sentences by virtue of being 'inserted' into syntactic structures (in the most recent version, the Minimalist Program, by virtue of being 'merged' into partial syntactic structures).
(3) The fact that the same semantic relation can be expressed by different syntactic means (e.g. active and passive sentences) is a consequence of moving constituents in the course of a syntactic derivation, creating a disparity between 'Deep' and 'Surface' structure (or, in later versions, between Phonetic Form and Logical Form).
(4) The goal of theoretical linguistics is to study 'competence', the abstract form of linguistic knowledge, and this should be idealized away from considerations of 'performance', the way this knowledge is put to use in actual language behavior.

All of these propositions are, I think, open to question in light of what has been learned about language since 1965. Let us take them in order.

At the time of Aspects, it seemed possible to view the rules of phonology (for instance the principles that assign stress and that govern the alteration of vowel pronunciation in related words such as 'parameter' and 'parametric') as 'low-level' rules that fall at the end of the derivation. First syntax gets the words in the right order, then phonology tinkers with the way they are pronounced. This was the view in the landmark work of generative phonology, Chomsky and Halle's Sound Pattern of English15. However, something of a revolution took place in phonology in the mid-1970s: it was realized that phonology has its own autonomous combinatorial organization, independent of that contributed by syntax. This organization governs such phenomena as the assembly of speech sounds into syllables, the hierarchical nature of stress patterns, and the grouping of words into intonational phrases16,17. The last of these provides the most graphic illustration of the relative independence of phonology from syntax (see Box 1).

The consequence is that phonology and syntax are properly viewed as independent generative systems, each of which characterizes an unlimited set of structures. However, in addition, these structures cannot exist in isolation from each other. Rather, they must be linked by what I have termed 'correspondence rules' or an 'interface component', which specifies how each part of syntactic structure is connected to each part of phonological structure2.
[Fig. 1 diagrams (components as labelled in the figure): (A) 'Standard Theory': phrase structure rules, lexicon, deep structure, semantics, transformational component, surface structure, phonological component, phonetics. (B) Government and Binding Theory: phrase structure rules, lexicon, D-structure, transformational component, S-structure, phonological component, phonetic form, logical form, semantics. (C) Minimalist Program: lexicon, merge and movement, spell-out, phonetic form, covert movement, logical form, semantics. See caption below.]
Fig. 1. Successive forms of Chomsky’s generative grammars. The form of Universal Grammar has been revised by Chomsky, from (A) ‘Standard Theory’ (1965), through (B) Government and Binding Theory (1981), to the latest architecture (C) Minimalist Program (1995). In each of these architectures, the lexicon plays a role in the formation of initial syntactic structures, which then undergo processes of movement (deformation). One stage in the syntactic derivation is sent off to be interpreted by the semantic component; a different stage is sent off to be interpreted phonologically.
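As a point of contrast with the parallel architecture introduced below, the following minimal sketch (my own illustration, not code or a formalism from the article or from any existing system) caricatures the syntactocentric organization these architectures share: a single generative component builds syntactic structure, and sound and meaning are then 'read off' the finished structure.

```python
# Schematic sketch of a syntactocentric ("interpretive") architecture:
# syntax is the only generative component; phonology and semantics are
# functions applied to its output. All names here are illustrative only.

def generate_syntactic_structure(lexical_choices):
    """Build a syntactic tree (here just a labelled bracketing) from chosen words."""
    determiner, noun = lexical_choices
    return ["NP", ["Det", determiner], ["N", noun]]

def read_off_phonology(tree):
    """'Interpret' the finished tree as a string of word forms."""
    if isinstance(tree, str):
        return [tree]
    return [form for subtree in tree[1:] for form in read_off_phonology(subtree)]

def read_off_semantics(tree):
    """'Interpret' the finished tree as a crude meaning representation."""
    label, *daughters = tree
    if label == "N":
        return {"TYPE": daughters[0].upper()}
    if label == "Det":
        return {"DEF": daughters[0] == "the"}
    meaning = {}
    for daughter in daughters:
        meaning.update(read_off_semantics(daughter))
    return meaning

tree = generate_syntactic_structure(("the", "cat"))
print(read_off_phonology(tree))   # ['the', 'cat']
print(read_off_semantics(tree))   # {'DEF': True, 'TYPE': 'CAT'}
```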
Turning to semantics, at the time of Aspects there was no semantic theory of any consequence that made contact with work in linguistics. Chomsky, while alluding to the early work of Katz, Fodor and Postal18,19 on integrating semantics into generative theory, was on the whole guardedly pessimistic about the prospects of useful research in semantics. And so he has remained. Meanwhile, though, over the past decades, a number of substantive approaches to linguistic semantics have developed, including major movements such as Formal Semantics20,21, Cognitive Grammar22,23, as well as my own Conceptual Semantics24,25 and related views26–28. Despite fundamental differences among these approaches, they all agree that the meanings of linguistic expressions have a complex combinatorial structure, and that this structure cannot be derived from syntactic structure. Meanings are not built of elements such as nouns, verbs and determiners, but rather from conceptualized objects, events and quantifiers. Where syntax has structural relations such as ‘subject-of’ and ‘object-of’, between a noun phrase and a verb, semantics has a family of
Box 1. Independence of intonational constituency from syntactic constituency

Consider the following sentence: 'Sesame Street is a production of the Children's Television Workshop.'

Its syntactic structure can be indicated approximately by the following bracketing:

[Sesame Street] [is [a production [of [the Children's Television Workshop]]]]

This bracketing says that the sentence is divided into subject (Sesame Street) and predicate (the rest). In turn, the predicate consists of the verb is and a predicate noun phrase (the rest). This predicate noun phrase consists of a determiner (a), a noun (production), and a prepositional phrase (the rest). The prepositional phrase consists of a preposition (of) and a noun phrase (the rest). Thus the structure as a whole is deeply hierarchical.

But now consider how this sentence is pronounced. Here are two possible ways, which do not differ in meaning. This time the bracketing indicates intonational groups.

[Sesame Street is a production of] [the Children's Television Workshop]
[Sesame Street] [is a production] [of the Children's Television Workshop]

Here we see flat (non-embedded) structures, both of which cut across the syntactic bracketing. Thus there is no way to derive the intonation directly from the syntax, using standard Chomskyan principles of movement and deletion. Similarly, the following much-used example exhibits the relentlessly right-embedded syntactic bracketing marked in (a), and the flat and incommensurate intonational bracketing marked in (b).

(a) [This] [is [the cat [that ate [the rat [that ate [the cheese]]]]]]
(b) [This is the cat] [that ate the rat] [that ate the cheese]
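The mismatch illustrated in Box 1 can be made concrete with a small sketch (an illustration under my own assumptions about the data format, not a formalism used in the article): the same word string carries a deeply embedded syntactic bracketing and an essentially flat intonational one.

```python
# Syntactic constituency of the Box 1 sentence: deeply right-embedded.
syntactic = [["Sesame", "Street"],
             ["is", ["a", "production",
                     ["of", ["the", "Children's", "Television", "Workshop"]]]]]

# One of its intonational phrasings: a flat sequence of prosodic groups,
# cutting across the syntactic constituents.
intonational = [["Sesame", "Street", "is", "a", "production", "of"],
                ["the", "Children's", "Television", "Workshop"]]

def depth(bracketing):
    """Maximum embedding depth of a nested bracketing."""
    if not isinstance(bracketing, list):
        return 0
    return 1 + max(depth(part) for part in bracketing)

print(depth(syntactic))     # 5: the syntactic bracketing is deeply hierarchical
print(depth(intonational))  # 2: the intonational grouping is essentially flat
```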
conceptual relations such as 'Agent-of', 'Patient-of', and 'Goal-of' between an individual (expressed by an NP) and an action (expressed by a verb – or by a nominal such as gift, as in 'Fred's gift of a piano to Louise'). We can draw a conclusion completely parallel to that for the syntax–phonology connection: semantics is an independent generative system from syntax. Meaning therefore cannot simply be 'read off' syntactic trees. Rather, it is necessary for grammar to include a 'correspondence rule' or 'interface' or 'linking' component that encodes the relationship between the parts of a sentence's syntactic tree and the parts of the sentence's meaning.

Hence, overall, we come to see the grammar as consisting of multiple generative components connected by interface components, rather than as a single generative component plus two interpretive components (see Fig. 2; and compare to the Chomskyan architectures in Fig. 1). It is important to see that a parallel architecture is still a generative grammar in all respects: it is still a formal model of the language user's knowledge, and it presents the same problems of acquisition. The difference is only in the arrangement of components.

Many of the non-Chomskyan variants of generative grammar have adopted an organization along the lines of those shown in Fig. 2, with multiple generative components. Within phonology itself, independent components (or tiers) are now routinely proposed in order to separate out stress and prosody from segmental (phonemic) aspects
[Fig. 2 diagram: phonological, syntactic and semantic formation rules each generate their own structures (phonological, syntactic and semantic), which are linked by PS–SS interface rules and SS–CS interface rules. See caption below.]
Fig. 2. The parallel architecture. Phonological, syntactic and semantic structures are constructed by independent principles (‘formation rules’) and brought into correspondence by interface rules.
of sound patterns16. Lexical–Functional Grammar29,30 proposes two independent components within syntax, one to deal with the familiar organization of phrase structure, one to encode explicitly and separately grammatical functions such as subject-of and object-of. Autolexical Syntax31 proposes separate generative components for syntax and morphology (word structure). Role and Reference Grammar32,33 proposes two generative components within syntax (phrasal syntax and morphosyntax), cross-correlated with two generative components within semantics (argument structure and focus structure). Each of these proposals posits 'correspondence rules' or 'association rules' to link up the independent structures. Thus the idea of parallel generative structures linked by interface components has become widespread in generative grammar.

Under this conception of grammar, one of the crucial questions of Universal Grammar becomes, 'What are the independent generative components of language and how do they interface with each other?' In a sense, Fig. 2 represents a minimal decomposition of the system; each of the theories mentioned in the preceding paragraph proposes a further decomposition of one or more subsystems.

The role of the lexicon

In a parallel architecture, the information content of a sentence is distributed over three (or more) independent structures plus the links among them. For example, Fig. 3 represents the structure of the phrase the cat. Absent from Fig. 3 are the words the and cat at the bottom of the syntactic tree, as in the more familiar format shown in Fig. 4. There is a principled reason for this. From the point of view of syntax, all that matters about cat is that it is a singular count noun. Its syntactic behavior is identical to that of mouse, giraffe and armadillo, not to mention banana, chair and computer. By using the notation of Fig. 3, the theory is making explicit that syntax does not care about phonological and semantic differences that have no effect on syntactic behavior.

A structure like that in Fig. 4 does however make sense in the context of a Chomskyan architecture (Fig. 1), in which the choice of words for a sentence must be made in syntax, the sole generative component. In this approach, words consisting of phonological, syntactic and semantic information must be inserted in their entirety into syntactic
structures. However, their phonological and semantic structures are irrelevant to syntax: these are made use of only when the syntactic structure is 'read off' by the interpretive components.

In the parallel architecture of Fig. 2, a different view of words emerges. A word still has phonological, syntactic and semantic information (how could it not?), but each kind of information appears only in its own proper component. The word cat is distributed in Fig. 3 as the phonological structure /kæt/, the singular count noun, and the semantic category of feline animals. At first this flies in the face of common sense: how could a word be scattered all over three structures? The answer is that the word is the association of these three pieces of structure in long-term memory. This association finds its way into the notation of Fig. 3 as the subscript b that links parts of the three larger structures together.

The upshot is a quite different conceptualization of the role of words in grammar. In a Chomskyan architecture, a word is a package inserted into and carried around by derivations; parts of this package are unpacked and passed on to other components. In a parallel grammar, by contrast, a word is part of the linkage among grammatical structures; it actively contributes information that helps place the individual generative components in correspondence with each other. In addition, if a word is a long-term association among scattered representations rather than a unified package, it is not surprising that aspects of words' characteristics are found spread across functionally appropriate parts of the brain34,35.
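One way to make this view concrete (a sketch under my own illustrative assumptions about the data format, not a formalism from the article) is to model a lexical entry as nothing more than a shared index tying together a phonological fragment, a syntactic fragment and a conceptual fragment; the same association can then be consulted from its sound side or from its meaning side.

```python
# A word as a long-term association of three structure fragments,
# tied together by a shared index (cf. the subscripts in Fig. 3).
# Representations and names here are illustrative assumptions.
LEXICON = [
    {"index": "b",
     "phonology": "/kæt/",                                   # phonological fragment
     "syntax": {"cat": "N", "count": True, "num": "sing"},   # syntactic fragment
     "semantics": {"TYPE": "CAT"}},                          # conceptual fragment
    {"index": "a",
     "phonology": "/ðə/",
     "syntax": {"cat": "Det"},
     "semantics": {"DEF": True}},
]

def lookup_by_sound(phon):
    """Perception direction: a phonological string 'calls' the lexicon."""
    return [entry for entry in LEXICON if entry["phonology"] == phon]

def lookup_by_meaning(sem):
    """Production direction: a conceptual fragment 'calls' the lexicon."""
    return [entry for entry in LEXICON if entry["semantics"] == sem]

# The same association serves both directions; nothing in the entry
# says which of its fragments is 'input' and which is 'output'.
print(lookup_by_sound("/kæt/")[0]["semantics"])        # {'TYPE': 'CAT'}
print(lookup_by_meaning({"TYPE": "CAT"})[0]["syntax"]) # {'cat': 'N', 'count': True, 'num': 'sing'}
```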
[Fig. 3 diagrams: (A) phonological structure – a phonological phrase containing a one-syllable clitic and a one-syllable word; (B) syntactic structure – an NP containing a determiner and a singular count noun; (C) semantic structure – a definite token instance of the type CAT; constituents are co-indexed across the three structures by the subscripts a, b and c. See caption below.]
Fig. 3. Structure of 'the cat' in parallel architecture. The phonological structure (A) consists of a phonological phrase comprising a one-syllable clitic pronounced ðə and a one-syllable word pronounced kæt. The syntactic structure (B) is a noun phrase consisting of a determiner and a singular count noun. The semantic/conceptual structure (C) consists (in this formalization) of the conceptualization of an instance of the category of feline animals (notated provisionally just as CAT), and marked definite (i.e. assumed by the speaker to be known to the hearer from previous discourse or world knowledge). In addition, the constituents of these structures are linked by subscripts (one could use 'association lines' instead): the clitic is linked to the determiner and to the definiteness feature, the word is linked to the noun and to the category of felines, and the entire phonological phrase is linked to the entire noun phrase and the entire conceptual constituent.
A more forgiving version of the competence–performance distinction

It was useful for Chomsky to make a distinction between the study of competence, the knowledge of linguistic structures, and performance, the mechanisms of language processing that put this knowledge to use. Only by thinking in terms of linguistic structures rather than in terms of processing has it been possible to achieve the manifold advances in our understanding of grammatical organization over the past four decades. However, the Chomskyan architectures through the years have forced the theorist to make the competence–performance distinction more rigid than desirable. Here is why. In a Chomskyan architecture for competence, the decisions governing the structure of a sentence come entirely from the syntactic component; various constituents are built up and moved around in syntax, and finally the sound and the meaning are 'read off' the resulting structure. The derivation is therefore inherently directional, going outward from syntax to sound and meaning.

[Fig. 4 diagram: the phrase 'the cat' as a conventional syntactic tree, with an NP dominating Det ('the') and N ('cat').] Fig. 4. Structure of 'the cat' in familiar notation. Phonological information ('the' and 'cat') is mixed in with syntactic structure.

By contrast, consider what a performance model must account for. The process of language perception has to start with a phonetic structure delivered by the auditory system, and derive from it a meaning, using syntactic structure along the way. The process of language production has to start with a meaning to be expressed, and derive from it a phonetic structure to be delivered to the motor system, using syntactic structure along the way. Because neither of these processes at all resembles the inherent directionality of the Chomskyan competence model, it is difficult to understand how the competence model is put to use in language processing. Chomsky therefore resolutely reminds us to see the relation between competence and performance as extremely abstract.

The situation works out differently in a parallel architecture. As we have already begun to see, a great deal of the task of a parallel architecture lies in establishing the relation among the independent structures, through the interface components. Notice, however, that an interface component is inherently non-directional. For instance, the principles connecting intonation to syntactic structure do not say to derive one from the other; they say only that this intonation pattern and that syntactic string may be correlated. Similarly – since words are part of the interface components – the word cat does not say to derive the meaning [CAT] from the sound /kæt/ or vice versa; it just says that the two may be linked when each appears in its appropriate structure.

Thus the competence model in a parallel architecture is non-directional. This means that it may be put to work directly in a performance model. For instance, in language perception, suppose the auditory system delivers a phonetic string /kæt/. The phonology can now send a call to the lexicon: 'Does anyone there sound like this?' And the word cat
'raises its hand' – or becomes activated. In turn, cat activates the syntactic structure [singular count noun] and the semantic structure [CAT], which the syntactic and semantic parsers can try to integrate into the sentence under construction in working memory.

Similarly, in language production, suppose one wants to say something about a feline animal. The semantics sends a call to the lexicon: 'Does any of you mean this?' And the word cat raises its hand. In turn, cat activates [singular count noun] and /kæt/, which the syntactic and phonological parsers can try to integrate into the sentence under construction in working memory.

There is nothing special about this conception of language processing: I imagine it is the way anyone would naturally think about it. What is different, though, is that the 'competence model' is configured in such a way that the role of lexical items mirrors much more directly the information needed by the processor.

Constraint-based characterization of structure

One of the most appealing aspects of early generative grammar was the hope it offered for understanding meaning through the notion of Deep Structure. For example, we understand the event described in the passive sentence (1a) as similar to that described in the active sentence (1b), and not like that described in the active sentence (1c). Yet on the surface, (1a) looks more like (1c) than like (1b).

(1a) The city was destroyed by the bombs.
(1b) The bombs destroyed the city.
(1c) The city destroyed the bombs.

Similarly, we understand 'which city' in (2a) to play the same role in the described event as 'that city' plays in (2b), despite their being in entirely different places in the sentence.

(2a) Which city did the bombs damage most heavily?
(2b) The bombs damaged that city most heavily.

The major formal innovation of early generative grammar, the transformation, provided an account of these relations. The claim was that the sentence forms we see in (1) and (2) are Surface Structures. But behind them is an unheard Deep Structure, in which elements in similar semantic relations are in similar syntactic positions. So, for example, in the Deep Structure of (1a), the city is the object of destroy and the bombs is the subject, just as in (1b); in the Deep Structure of (2a), which city is the direct object of damage, just as that city is in (2b). The syntactic derivation transforms Deep Structure into Surface Structure by moving phrases around (as well as by inserting meaningless grammatical particles such as was, by and did). The principles for transforming Deep Structure into Surface Structure are of course part of the grammar of the language – and part of what the child has to acquire.

What makes the derivation from Deep to Surface Structure more interesting is that transformations can be concatenated. For instance, (3a) must be derived by first performing the passive, then moving 'by which bombs' to the front to form the question. (3b) illustrates the derivation:

(3a) By which bombs was the city damaged most heavily?
(3b) (Deep Structure): which bombs damaged the city most heavily;
(perform passive): the city was damaged most heavily by which bombs;
(perform question formation): by which bombs was the city damaged most heavily (= Surface Structure).

In the course of the changes in Chomskyan theory (Fig. 1), the semantic functions ascribed to Deep Structure have been reassigned to a level called Logical Form (LF), but the basic idea has remained: two distinct levels of syntactic structure link to semantics and to phonology, and these levels are connected by sequences of syntactic movements. The most intensive research in generative grammar over the years has been directed towards constraining transformations and the derivations they produce, in such a way as to make them learnable.

However, it became clear early on that transformations are not being performed in the course of sentence processing36. Thus in this respect too, the theory of competence came to be divorced from considerations of performance. Both on grounds internal to competence theory and on grounds related to performance, many variant generative theories have rejected the Deep/Surface Structure distinction as the key to explaining semantic relations among sentences29,31,33,37,38. Instead, these semantic relations are a consequence of linking syntactic structures to semantic roles directly through alternate routes (see Fig. 5).

Fig. 5. Transformational versus linking approaches to semantic relations. Semantic relations are a consequence of linking syntactic structures to semantic roles through different routes. Two approaches are contrasted here: (A) a transformational approach; (B) a linking approach.

So, for instance, instead of treating the extra was and by in the passive (1a) as relics laid down in the process of transforming one syntactic structure into another, they are treated as markers in syntax that signal a reversed linking of syntactic positions to semantic roles. Similarly, instead of a special rule that moves question phrases to the front of a sentence, there is a special
rule of linking that says that question phrases at the front are linked to a role left unlinked elsewhere in the sentence.

This alternative treatment in terms of linking may seem equivalent to the standard transformational treatment in terms of movement, and for a very first approximation it is. However, the linking approach faces immediate difficulty with sentences like (3a), where movement transformations concatenate so elegantly. This difficulty was first solved in the mid-1970s by Michael Brame39 and Joan Bresnan40; space limitations here preclude describing the solution. Over the years, considerable evidence has accumulated that in fact supports the linking approach; again, the scope of the present article precludes describing it.

For present purposes, the main point is that the linking approach leads to a competence theory that is more immediately adaptable into processing models. For one thing, more of the power of the grammar comes to be invested in the linking rules, which are part of the interface component between syntax and semantics, and therefore are non-directional (as seen in the previous section). In addition, the inherent directionality of the Deep-to-Surface Structure mapping is eliminated in favor of a single level of syntactic structure, which can be described non-directionally as well. Instead of thinking of a syntactic structure as the successive expansion of an initial symbol S into a ramified constituent structure, we can think of it as an assembly of bits and pieces of structure stored in memory. These bits and pieces are the rules of grammar, and the main formal operation is that of clipping the pieces together by a process called 'unification'41 (see Fig. 6). Unification is explicitly the primary combinatorial operation in all the constraint-based theories of grammar referred to above.

Under this construal of the competence theory, the grammar consists precisely of the fragments of structure stored in long-term memory, and the job of the processor is to unify them in a way consistent with the phonetic input (in perception) or with the intended message (in production).

Fig. 6. Phrase-structure rules versus unification of structure fragments. (A) Phrase-structure rules construct trees by successively using rules to expand symbols at the bottom of a tree. Here is a sample phrase-structure grammar (where S = sentence; Det = determiner; N = noun; NP = noun phrase; V = verb; VP = verb phrase). (B) A unification grammar can be construed in either of two possible ways. First, it can construct trees by 'clipping together' tree fragments that coincide with each other. Alternatively, it can use tree fragments to 'check off' or 'license' parts of a tree under examination (the dark shaded rectangle signifies licensing by fragment 1′, the circle by fragment 2′, and the triangles by fragment 3′).
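A toy sketch of the kind of operation meant by 'unification' (my own illustration in the spirit of unification-based approaches such as ref. 41, not an implementation from the article): stored fragments of structure are merged wherever their information is compatible, and the combination fails where they conflict.

```python
# Minimal sketch of unification over feature structures: merge two
# fragments if their information does not conflict, otherwise fail.
FAIL = None

def unify(a, b):
    """Merge two feature structures; return None on conflicting values."""
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else FAIL
    merged = dict(a)
    for key, value in b.items():
        if key in merged:
            sub = unify(merged[key], value)
            if sub is FAIL:
                return FAIL
            merged[key] = sub
        else:
            merged[key] = value
    return merged

# Two stored fragments 'clipped together': an NP skeleton and a lexical noun.
np_fragment   = {"cat": "NP", "head": {"cat": "N"}}
noun_fragment = {"head": {"cat": "N", "num": "sing", "form": "cat"}}
verb_fragment = {"head": {"cat": "V", "form": "sleep"}}

print(unify(np_fragment, noun_fragment))
# {'cat': 'NP', 'head': {'cat': 'N', 'num': 'sing', 'form': 'cat'}}
print(unify(np_fragment, verb_fragment))   # None: the fragments conflict (N vs V)
```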
Conclusion

Summing up, it is possible to construct a version of generative grammar in which propositions 1–4 above are replaced by the following:

(1′) Multiple components of the grammar are responsible for different aspects of the infinite generative capacity of language. A sentence is created by linking a phonological structure, a syntactic structure, and a semantic structure.
(2′) Lexical items are distributed across the multiple structures of a sentence, and play an active role in linking the structures.
(3′) The fact that the same semantic relation can be expressed by different syntactic means is a consequence of multiple ways of linking semantic structure to a single level of syntactic structure.
(4′) The competence–performance distinction, although methodologically and heuristically useful, is not of great theoretical significance.

This sort of competence model, moreover, bears resemblance to the organization of visual cognition, where full understanding of a visual scene is a consequence of linking the contribution of numerous specialized but interacting areas. Thus it offers attractive prospects for integrating the theory of language with more general theories of brain architecture.
References
1 Chomsky, N. (1965) Aspects of the Theory of Syntax, MIT Press
2 Jackendoff, R. (1997) The Architecture of the Language Faculty, MIT Press
3 Chomsky, N. (1957) Syntactic Structures, Mouton
4 Chomsky, N. and Miller, G.A. (1963) Introduction to the formal analysis of natural languages, in Handbook of Mathematical Psychology (Vol. II) (Luce, R.D., Bush, R.R. and Galanter, E., eds), pp. 269–322, John Wiley & Sons
5 McClelland, J. and Rumelhart, D. (1986) Parallel Distributed Processing, MIT Press
6 Bloch, B. (1948) A set of postulates for phonemic analysis Language 24, 3–46
7 Harris, Z. (1951) Methods in Structural Linguistics, University of Chicago Press
8 Elman, J. et al. (1996) Rethinking Innateness, MIT Press
9 Tomasello, M. (1995) Language isn't an instinct (Review of Steven Pinker, The Language Instinct) Cognit. Dev. 10, 131–156
10 Fodor, J.A. (1983) Modularity of Mind, MIT Press
11 Skinner, B.F. (1971) Beyond Freedom and Dignity, Knopf
12 Deacon, T. (1997) The Symbolic Species, W.W. Norton
13 Chomsky, N. (1981) Lectures on Government and Binding, Foris
14 Chomsky, N. (1995) The Minimalist Program, MIT Press
15 Chomsky, N. and Halle, M. (1968) The Sound Pattern of English, Harper & Row
16 Kenstowicz, M. (1994) Phonology in Generative Grammar, Blackwell
17 Ladd, D.R. (1996) Intonational Phonology, Cambridge University Press
18 Katz, J. and Fodor, J. (1963) The structure of a semantic theory Language 63, 170–210
19 Katz, J. and Postal, P. (1964) An Integrated Theory of Linguistic Descriptions, MIT Press
20 Chierchia, G. and McConnell-Ginet, S. (1990) Meaning and Grammar: An Introduction to Semantics, MIT Press
21 Lappin, S., ed. (1996) The Handbook of Contemporary Semantic Theory, Blackwell
22 Langacker, R. (1987) Foundations of Cognitive Grammar (Vol. 1), Stanford University Press
23 Lakoff, G. (1987) Women, Fire, and Dangerous Things, University of Chicago Press
24 Jackendoff, R. (1983) Semantics and Cognition, MIT Press
25 Jackendoff, R. (1992) Languages of the Mind, MIT Press
26 Pustejovsky, J. (1995) The Generative Lexicon, MIT Press
27 Bierwisch, M. and Lang, E. (1989) Dimensional Adjectives: Grammatical Structure and Conceptual Interpretation, Springer-Verlag
28 Pinker, S. (1989) Learnability and Cognition: The Acquisition of Argument Structure, MIT Press
29 Bresnan, J., ed. (1982) The Mental Representation of Grammatical Relations, MIT Press
30 Dalrymple, M. et al., eds (1995) Formal Issues in Lexical–Functional Grammar, CSLI Publications
31 Sadock, J. (1991) Autolexical Syntax, University of Chicago Press
32 Foley, W. and Van Valin, R. (1984) Functional Syntax and Universal Grammar, Cambridge University Press
33 Van Valin, R. and LaPolla, R. (1997) Syntax: Structure, Meaning, and Function, Cambridge University Press
34 Damasio, A.R. and Tranel, D. (1993) Nouns and verbs are retrieved with differently distributed neural systems Proc. Natl. Acad. Sci. U. S. A. 90, 4957–4960
35 Pulvermüller, F. (1999) Words in the brain's language Behav. Brain Sci. 22, 253–279
36 Fodor, J., Bever, T. and Garrett, M. (1974) The Psychology of Language, McGraw-Hill
37 Pollard, C. and Sag, I. (1994) Head-Driven Phrase Structure Grammar, University of Chicago Press
38 Goldberg, A. (1995) Constructions: A Construction Grammar Approach to Argument Structure, University of Chicago Press
39 Brame, M.K. (1978) Base Generated Syntax, Noit Amrofer
40 Bresnan, J. (1978) A realistic transformational grammar, in Linguistic Theory and Psychological Reality (Halle, M., Bresnan, J. and Miller, G., eds), pp. 1–59, MIT Press
41 Shieber, S. (1986) An Introduction to Unification-Based Approaches to Grammar, CSLI Publications