Lexical Conceptual Structure

J S Jun, Hankuk University of Foreign Studies, Seoul, Korea

© 2006 Elsevier Ltd. All rights reserved.
Introduction

The lexical conceptual structure (LCS) or simply the conceptual structure (CS) is an autonomous level of grammar in conceptual semantics (Jackendoff, 1983, 1990, 1997, 2002), in which the semantic interpretation of a linguistic expression is explicitly represented. Jackendoff's (1983) original conception is to posit a level of mental representation in which thought is couched (cf. the language of thought in Fodor, 1975). CS is a relay station between language and peripheral systems such as vision, hearing, smell, taste, kinesthesia, etc. Without this level, we would have difficulty in describing what we see and hear. There are two ways to view CS in formalizing a linguistic theory. One is to view CS as a nonlinguistic system that serves as an interface between meaning and nonlinguistic modalities. Then, we need another level of representation for meaning (cf. Chomsky's [1981, 1995] LF); and CS is related to the linguistic meaning by pragmatics as shown in Figure 1. This is the view of Katz and Fodor (1963), Jackendoff (1972), Katz (1980), and Bierwisch and Schreuder (1992). The alternative conception is to view CS as the semantic structure. The linguistic meaning as well as nonlinguistic information compatible with sensory and motor inputs is directly represented in CS. CS is related to other linguistic levels such as syntax
and phonology by correspondence rules, and therefore CS is part of the lexical information (hence called LCS) as shown in Figure 2. This is the current view of conceptual semantics. One argument that supports the latter view comes from generic judgment sentences. In the standard view of linguistic meaning, judgments of superordination, subordination, synonymy, entailment, etc., are linguistic. We judge that 'bird' and 'chicken' make a superordinate-subordinate pair; that in some dialects 'cellar' and 'basement' are synonymous; and that 'Max is a chicken' entails 'Max is a bird.' Linguistic judgments of this sort are formalized in theories such as meaning postulates (Fodor, 1975) and semantic networks (Collins and Quillian, 1969). Jackendoff (1983) points out one problem in formalizing these judgments from a purely linguistic perspective: judgments of superordination and subordination, for instance, are directly related to judgments of generic categorization sentences such as 'A chicken is a bird.' The judgment about generic categorization is, however, not entirely linguistic or semantic, in that it behaves creatively enough to include ambiguous cases such as (1) below.

(1a) A piano is a percussion instrument.
(1b) An australopithecine was a human.
(1c) Washoe (the chimp)'s sign system is a language.
(1d) An abortion is a murder. (Jackendoff, 1983: 102)
We make generic categorization judgments about (1) not on the basis of meaning postulates or semantic networks but on the basis of our factual, often
Figure 1 CS as a nonlinguistic system (adapted from Jackendoff R (1983). Semantics and cognition. Cambridge, MA: MIT Press, 20, with permission).
Figure 2 CS as part of the linguistic system (adapted from Jackendoff R (1983). Semantics and cognition. Cambridge, MA: MIT Press, 21, with permission).
political, world knowledge. For instance, our judgment about (1d) is influenced by our political position, religion, and knowledge about biology. This is analogous to Labov's (1973) dubious 'cup-bowl' judgment, which obviously resorts to nonlinguistic encyclopedic knowledge as well as the linguistic type system. CS is, by definition, the level that represents encyclopedic knowledge as part of our thought. Hence, we should refer to CS to make generic categorization judgments about (1). Jackendoff's (1983) puzzle is summarized as follows. We make judgments of semantic properties such as superordination and subordination at the level of semantic structure. We make generic categorization judgments at the level of CS as shown by (1). If the semantic structure were separated from CS, we would fail to catch the obvious generalization between the superordinate-subordinate judgment and
the generic categorization judgment. If, by contrast, CS were the semantic structure, we would have no trouble in accounting for the intuitive identity between the two judgments. Therefore, CS is the semantic structure. For more arguments to support the view that CS is the semantic structure, see Jackendoff (1983: Ch. 6).
Overview of Conceptual Semantics

Autonomy of Semantics
A central assumption in conceptual semantics is the autonomy of semantics. In Chomsky’s view of language, syntax makes an autonomous level of grammar, whereas phonology and semantics merely serve as interpretive components (PF and LF). Jackendoff (1997) criticizes this view as syntactocentric, and
provides convincing arguments to support his thesis that phonology and semantics as well as syntax make autonomous levels of grammar. We find numerous pieces of evidence for the autonomy of semantics in the literature of both psycholinguistics and theoretical linguistics. Zurif and Blumstein's (1978) pioneering work shows that Wernicke's area is the center of semantic knowledge in the brain in comparison with Zurif, Caramazza and Myerson's (1972) previous finding that Broca's area is the center of syntactic knowledge. Swinney's (1979) classical work on lexical semantic priming shows that lexical semantics is independent of grammatical contexts like the movement chain in a sentence. Piñango, Zurif, and Jackendoff (1999) report more workload for the online processing of aspectual coercion sentences (e.g., John jumped for two hours) than for the processing of syntactically equivalent noncoerced sentences (e.g., John jumped from the stage). Semantic categories are not in one-to-one correspondence with syntactic categories. For instance, all physical object concepts correspond to nouns, but not all nouns express physical object concepts; e.g., earthquake and concert express event concepts. All verbs express event/state concepts, but not all event/state concepts are expressed by verbs; e.g., earthquake and concert are nouns. Contrary to Chomsky's (1981) theta criterion, we have plenty of data that shows mismatch between syntactic functions and thematic roles. For instance, the semantic interpretation of buy necessarily encodes both the transfer of money from the buyer to the seller and the transfer of the purchased entity from the seller to the buyer. Among the three semantic arguments, i.e., the buyer, the seller, and the purchased object, only the buyer and the purchased entity are syntactic arguments (e.g., John bought the book). The seller is syntactically expressed as an adjunct (e.g., John bought the book from Jill).
Moreover, the buyer plays the source role of money and the target role of the purchased entity simultaneously; the seller plays the source role of the purchased entity and the target role of money simultaneously. In short, the buyer and the seller have multiple theta roles even though each of them corresponds to one and only one syntactic entity. A simple semantic distinction often corresponds to many syntactic devices. For instance, telicity is expressed by such various syntactic devices as choice of verb (2a), choice of preposition (2b), choice of adverbial (2c), choice of determiner in the subject NP (2d) and in the object NP (2e), and choice of prepositional object (2f) (Jackendoff, 1997: 35).
(2a) John destroyed the cart (in/*for an hour). → Telic
     John pushed the cart (for/*in an hour). → Atelic
(2b) John ran to the station (in/*for an hour). → Telic
     John ran toward the station (for/*in an hour). → Atelic
(2c) The light flashed once (in/*for an hour). → Telic
     The light flashed constantly (for/*in an hour). → Atelic
(2d) Four people died (in/*for two days). → Telic
     People died (for/*in two days). → Atelic
(2e) John ate lots of peanuts (in/*for an hour). → Telic
     John ate peanuts (for/*in an hour). → Atelic
(2f) John crashed into three walls (in/*for an hour). → Telic
     John crashed into walls (for/*in an hour). → Atelic
To sum up, the mapping between syntax and semantics is not one-to-one; rather, it is one-to-many, many-to-one, or at best many-to-many. The mapping problem is not easy to explain in the syntactocentric architecture of language. The overall difficulty in treating semantics merely as an interpretive component of grammar, along with a similar difficulty in treating phonology as an interpretive component (cf. Jackendoff, 1997: Ch. 2), leads Jackendoff to propose a tripartite architecture of language, in which phonology, syntax, and semantics are all independent levels of grammar licensed by phonological formation rules, syntactic formation rules, and conceptual/semantic formation rules respectively, and interfaced by correspondence rules between each pair of modules, as shown in Figure 3.

Lexical Conceptual Structure
Conceptual semantics assumes striking similarities between the organization of CS and the structural organization of syntax. As syntax makes use of syntactic categories, namely syntactic parts of speech like nouns, adjectives, prepositions, verbs, etc., semantics makes use of semantic categories or semantic parts of speech such as Thing, Property, Place, Path, Event, State, etc. As syntactic categories are motivated by each category member's behavioral properties in syntax, semantic or ontological categories are motivated by each category member's behavioral properties in meaning. Syntactic categories are combined by syntactic phrase-structure rules into larger syntactic expressions; likewise, semantic categories are combined by semantic phrase-structure rules into larger semantic expressions. The syntactic representation is structurally organized, so we can define dominance or government relations among syntactic constituents; likewise, the semantic representation is structurally organized, so we can define grammatically significant hierarchical relations among semantic constituents. Various syntactic phrase-structure rules can be generalized into a rule schema called X-bar syntax (Jackendoff, 1977); likewise, various semantic
Figure 3 The tripartite parallel architecture (reproduced from Jackendoff R (2002). Foundations of language: brain, meaning, grammar, evolution. Oxford: Oxford University Press).
phrase-structure rules can be generalized into a rule schema called X-bar semantics (Jackendoff, 1987b).

Ontological Categories

Ontological categories are first motivated by our cognitive layouts. To mention some from the vast psychology literature, Piaget's developmental theory of object permanence shows that infants must recognize objects as a whole, and develop a sense of permanent existence of the objects in question when they are not visible to the infants. Researchers in language acquisition have identified many innate constraints on language learning like the reference principle, object bias, the whole object principle, shape bias, and so on (cf. Berko Gleason, 1997). For instance, children rely on the assumption that words refer to objects, actions, and attributes in the environment by the reference principle. Wertheimer's (1912) classical experiment on apparent movement reveals that humans are equipped with an innate tendency to perceive the change of location as movement from one position to the other; the apparent movement experiment renders support for the expansion of the event category into function-argument structures like [Event GO ([Thing ], [Path ])]. Ontological categories also have numerous linguistic motivations. Pragmatic anaphora (exophora) provides one such motivation. In order to understand the sentence in (3), the hearer might have to pick out the referent of that among several entities in the visual field. If the hearer did not have object concepts to organize the visible entities, (s)he could not pick out the proper referent of the pragmatic anaphora that.
The object concept involved in the semantic interpretation of (3) motivates the ontological category Thing.

(3) I bought that last night.
The category Thing proves useful in interpreting many other grammatical structures. It provides the basis of interpreting the Wh-variable in (4a); it supports the notion of identity in the same construction in (4b); and it supports the notion of quantification as shown in (4c).

(4a) What did you buy last night?
(4b) John bought the same thing as Jill.
(4c) John bought something/everything that Jack bought.
Likewise, we find different sorts of pragmatic anaphora that motivate ontological categories like Place (5a), Direction (5b), Action (5c), Event (5d), Manner (5e), and Amount (5f).

(5a) Your book was here/there.
(5b) They went there yesterday.
(5c) Can he do this/that?
(5d) It happened this morning.
(5e) Bill shuffled a deck of cards this way.
(5f) The man I met yesterday was this tall.
These ontological categories provide innate bases for interpreting Wh-variables, the identity construction, and quantification, as shown in (6)–(8).

(6a) Where was my book?
(6b) Where did they go yesterday?
(6c) What can he do?
(6d) What happened this morning?
(6e) How did Bill shuffle a deck of cards?
(6f) How tall was the man you met yesterday?
(7a) John put the book on the same place as Bill.
(7b) John went the same way as Bill.
(7c) John did the same thing as Bill.
(7d) The same thing happened yesterday as happened this morning.
(7e) John shuffled a deck of cards the same way as Bill.
(7f) John is as tall as the man I met yesterday.
(8a) John put the book at some place that Bill put it.
(8b) John went somewhere that Bill went.
(8c) John did something Bill did.
(8d) Something that happened this morning will happen again.
(8e) John will shuffle cards in some way that Bill did.
(8f) (no parallel for amounts)
For more about justifying ontological categories, see Jackendoff (1983: Ch. 3).

Conceptual Formation Rules

Basic ontological categories are expanded into more complex expressions using function-argument structural descriptions. (9) shows such expansions of some ontological categories.

(9a) EVENT → [Event GO (THING, PATH)]
(9b) EVENT → [Event STAY (THING, PLACE)]
(9c) EVENT → [Event CAUSE (THING or EVENT, EVENT)]
(9d) EVENT → [Event INCH (STATE)]
(9e) STATE → [State BE (THING, PLACE)]
(9f) PLACE → [Place PLACE-FUNCTION (THING)]
(9g) PATH → [Path PATH-FUNCTION (THING)]
The function-argument expansion is exactly parallel with rewriting rules in syntax (e.g., S → NP VP; NP → Det (AP)* N; VP → V NP PP), and hence can be regarded as semantic phrase-structure rules. The semantic phrase-structure rules in (9) allow recursion just as syntactic phrase-structure rules do: an Event category can be embedded in another Event category as shown in (9c). We also can define hierarchical relations among conceptual categories in terms of the depth of embedding, as we define syntactic dominance or government in terms of the depth of embedding in syntactic structures. The depth of embedding in CS plays a significant role in explaining such various grammatical phenomena as subject selection, case, binding, control, etc. See Culicover and Jackendoff (2005) for more about these issues. Place functions in (9f) may include IN, ON, TOP-OF, BOTTOM-OF, etc. Path functions in (9g) may include TO, FROM, TOWARD, VIA, etc.
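Because the formation rules in (9) are recursive rewriting rules, their output can be modeled as a small recursive data structure. The sketch below is only an illustration (the class name `CS` and the atomic fillers like BALL and ROOM are my own shorthand, not part of the theory); it builds a caused-motion event by embedding one Event inside another via the CAUSE rule (9c):

```python
from dataclasses import dataclass

@dataclass
class CS:
    """One conceptual constituent: an ontological category, a conceptual
    function (or atomic filler), and zero or more arguments, which may
    themselves be CS nodes -- mirroring the rules in (9)."""
    category: str   # e.g., "Event", "Path", "Thing"
    function: str   # e.g., "GO", "CAUSE", or a filler like "BALL"
    args: tuple = ()

    def __str__(self):
        if not self.args:
            return f"[{self.category} {self.function}]"
        inner = ", ".join(str(a) for a in self.args)
        return f"[{self.category} {self.function} ({inner})]"

# (9a), (9g), (9f): 'The ball rolled into the room'
roll = CS("Event", "GO", (
    CS("Thing", "BALL"),
    CS("Path", "TO", (CS("Place", "IN", (CS("Thing", "ROOM"),)),)),
))

# (9c): CAUSE embeds an Event inside an Event -- the recursion that
# parallels embedding in syntactic phrase structure.
cause = CS("Event", "CAUSE", (CS("Thing", "JOHN"), roll))
print(cause)
```

Printing `cause` yields the bracketed notation used in the text, with the embedded GO-Event nested inside CAUSE.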
Conceptual semantics is a parsimonious theory, in that it makes use of only a handful of functions as conceptual primitives. All functions should be motivated on strict empirical grounds. This is exactly parallel with using only a handful of syntactic categories motivated on strict empirical grounds. Syntactic phrase-structure rules do not refer to an unlimited number of syntactic categories. Syntactic categories such as noun, adjective, preposition, verb, etc. are syntactic primitives, and they are motivated by each category member's behavioral properties in syntax. Likewise, semantic phrase-structure rules refer to a restricted set of semantic or conceptual primitives that are empirically motivated by general properties of meaning. Functions such as GO, BE, and STAY are empirically motivated in various semantic fields. They are the bases for interpreting spatial sentences in (10).

(10a) GO: The train traveled from Boston to Chicago.
(10b) BE: The statue stands on Cambridge common.
(10c) STAY: John remained in China.
These functions also support the interpretation of possession sentences in (11).

(11a) GO: John gave the book to Bill.
(11b) BE: John had no money.
(11c) STAY: The library kept several volumes of the Korean medieval literature.
Interpreting ascription sentences also requires GO, BE, and STAY, as shown in (12).

(12a) GO: The light turned from yellow to red.
(12b) BE: The stew seemed distasteful.
(12c) STAY: The aluminum stayed hard.
One interesting consequence of having GO, BE, and STAY in both spatial and nonspatial semantic fields is that we can explain how we use the same verb for different semantic fields.

(13a) The professor turned into a driveway. (Spatial)
(13b) The professor turned into a pumpkin. (Ascription)
(14a) The bus goes to Paris. (Spatial)
(14b) The inheritance went to Bill. (Possession)
(15a) John is in China. (Spatial)
(15b) John is a doctor. (Ascription)
(16a) John kept the CD in his pocket. (Spatial)
(16b) John kept the CD. (Possession)
(17a) The professor remained in the driveway. (Spatial)
(17b) The professor remained a pumpkin. (Ascription)
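The pattern in (13)–(17) can be mimicked with a toy lexicon in which each verb has a single entry naming only its conceptual function, while the semantic field is read off the complement. Everything below (the dictionaries, the field classifier, the output format) is a hypothetical illustration, not Jackendoff's actual machinery:

```python
# One entry per verb: just a conceptual function, no semantic field.
LEXICON = {"turn": "GO", "go": "GO", "keep": "STAY", "remain": "STAY"}

# Toy stand-in for the contextual cues that fix the semantic field.
FIELDS = {"driveway": "Spatial", "Paris": "Spatial", "pocket": "Spatial",
          "pumpkin": "Ascription", "doctor": "Ascription",
          "Bill": "Possession"}

def lcs(verb: str, theme: str, complement: str) -> str:
    """Build a schematic LCS: same function in every field."""
    func = LEXICON[verb]            # one lexical entry suffices
    field = FIELDS[complement]      # field contributed by the context
    return f"[Event {func}-{field} ({theme.upper()}, {complement.upper()})]"

print(lcs("turn", "professor", "driveway"))  # (13a): GO, Spatial field
print(lcs("turn", "professor", "pumpkin"))   # (13b): same GO, Ascription
```

The point of the sketch is that `turn` appears once in `LEXICON`: both readings come from one GO entry plus the field, which is how conceptual semantics avoids positing two lexical entries.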
In (13), the verb turn is used in both spatial and ascription sentences with the GO meaning. How do we use the same verb for two different semantic fields? Do we have to assume two different lexical entries for turn? Conceptual semantics explains this puzzle at no extra cost. We do not need two different lexical entries for turn to explain the spatial and ascription meanings. We just posit the event function GO for the lexical semantic description or LCS for turn in (13). Both spatial and ascription meanings follow from the LCS for turn, since the function GO is in principle motivated by both spatial and ascription sentences. We can provide similar accounts for all the data in (14)–(17). For more about the general overview of conceptual semantics, see Jackendoff (1983, 1987a, 1990, 2002).

X-bar Semantics
Generative linguists in the 1950s and 1960s succeeded in showing the systematic nature of language with a handful of syntactic phrase-structure rules. But they were not sure how the phrase-structure rules got into language learners' minds within a relatively short period of time; it was a learnability problem. X-bar syntax (Chomsky, 1970; Jackendoff, 1977) opened a doorway to the puzzle. Children do not have to be born with dozens of syntactic categories; children are born with one syntactic category, namely, category X. Children do not have to learn dozens of totally unrelated syntactic phrase-structure rules separately; all seemingly different syntactic phrase-structure rules share a fundamental pattern, namely, X-bar syntax. Jackendoff (1987b, 1990), who was a central figure in developing X-bar syntax in the 1970s, completed his X-bar theory by proposing X-bar semantics. We have so far observed that CS is exactly parallel with the syntactic structure. Conceptual categories are structurally organized into CS by virtue of semantic phrase-structure rules, as syntactic categories are structurally organized into syntactic structure by virtue of syntactic phrase-structure rules. (18) is the basic format of X-bar syntax.

(18a) XP → Spec X'
(18b) X' → X Comp
(18c) X → [±N, ±V]
Now that CS has all the parallel properties of syntactic structure, all semantic phrase-structure rules can be generalized into X-bar semantics along the same lines as X-bar syntax, as shown in (19).

(19) [Entity] → [ Event/Thing/Place/...
                  Token/Type
                  F (<Entity1, <Entity2, <Entity3>>>) ]

(19) not only provides the function-argument structural generalization for all the semantic phrase-structure rules but also shows how major syntactic constituents correspond to major conceptual categories. That is, the linking between syntax and semantics can be formalized as (20) and (21).

(20) XP corresponds to [Entity]
(21) [X0 <YP <ZP>>] corresponds to [Entity F (E1, <E2, <E3>>)],
     where YP corresponds to E2, ZP corresponds to E3, and the subject (if there is one) corresponds to E1.

To sum up, the obvious similarity between (18) and (19) enables us to account for the tedious linking problem without any extra cost.
General Constraints on Semantic Theories

Jackendoff (1983) suggests six general requirements that any semantic theory should fulfill: expressiveness, compositionality, universality, semantic properties, the grammatical constraint, and the cognitive constraint. First, a semantic theory must be observationally adequate; it must be expressive enough to describe most, if not all, semantic distinctions in a natural language. Conceptual semantics has expressive power, in that most semantic distinctions in a natural language can be represented by CS with a handful of conceptual categories plus conceptual formation rules. Better still, the expressive power has improved since the original conception of the theory. For instance, Jackendoff (1990: Ch. 7) introduced the action tier into the theory to represent the actor/patient relation aside from motion and location. In (22a), John is the source of the ball and the actor of the throwing event simultaneously; the ball is a moving object, the theme, and an affected entity, the patient, simultaneously. It is quite common for one syntactic entity to bear double theta roles contra Chomsky's (1981) theta criterion; conceptual semantics captures this by representing the motion/location event in the thematic tier (22b), and the actor/patient relation in the action tier (22c).

(22a) John threw the ball.
      (John: Source/Actor; the ball: Theme/Patient)
(22b) [Event CAUSE ([JOHN], [Event GO ([BALL], [Path TO ([ . . . ])])])]
(22c) [AFF ([JOHN], [BALL])]
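The double theta-role facts in (22) can be made concrete with a small sketch: two tuples stand for the thematic and action tiers, and a lookup collects every role an NP bears across both tiers. The tuple encoding and the role labels are my own shorthand for illustration:

```python
# Two-tier CS for 'John threw the ball' in (22): the thematic tier
# encodes motion/location, the action tier the actor/patient relation.
thematic_tier = ("CAUSE", "JOHN", ("GO", "BALL", ("TO", "...")))
action_tier = ("AFF", "JOHN", "BALL")

def roles(entity, thematic, action):
    """Collect every role an entity bears across both tiers."""
    found = []
    if entity == thematic[1]:        # first argument of CAUSE
        found.append("Source/Causer")
    if entity == thematic[2][1]:     # first argument of the embedded GO
        found.append("Theme")
    if entity == action[1]:          # first argument of AFF
        found.append("Actor")
    if entity == action[2]:          # second argument of AFF
        found.append("Patient")
    return found

print(roles("JOHN", thematic_tier, action_tier))  # two roles, one NP
print(roles("BALL", thematic_tier, action_tier))  # likewise for the ball
```

Each NP maps to exactly one syntactic position yet surfaces with a role on each tier, which is the configuration the theta criterion disallows.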
The action tier not only explains the fine semantic distinction in language but also plays a central role in such grammatical phenomena as linking and case. Besides the action tier, Jackendoff (1991) introduced
an elaborate feature system into CS to account for the semantics of parts and boundaries; Csuri (1996) introduced the referential tier, which describes the definiteness of expressions, into CS; Jackendoff (2002) introduced the lambda extraction and the topic/focus tier into CS. All these and many other innovations make the theory expressive enough to account for a significant portion of natural language semantics. The second constraint on a semantic theory is compositionality: an adequate semantic theory must show how the meanings of parts are composed into the meaning of a larger expression. Conceptual semantics is compositional, in that it shows how combinatorial rules of grammar compose the meanings of ontological categories into the CS of a larger expression. The third requirement is universality: an adequate semantic theory must provide cross-linguistically relevant semantic descriptions. Conceptual semantics is not a theory of meaning for any particular language. It is a universal theory of meaning; numerous cross-linguistic studies have been conducted with the conceptual semantic formalism. See Jun (2003), for instance, for a review of many conceptual semantic studies on argument linking and case in languages such as Korean, Japanese, Hindi, Urdu, English, Old English, French, etc. The fourth requirement is semantic properties: an adequate semantic theory should be able to explain many semantic properties of language like synonymy, anomaly, presupposition, and so on. That is, any semantic theory must explicate the valid inference of expressions. CS provides a direct solution to this problem in many ways. The type/token distinction is directly expressed in CS, and explains most semantic distinctions made by the semantic type system. By decomposing verbs such as kill into [CAUSE ([THING], [NOT-ALIVE ([THING])])], conceptual semantics explains how John killed Bill entails Bill is dead. For more about semantic properties, see Jackendoff (1983, 1990, 2002).
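The entailment from 'John killed Bill' to 'Bill is dead' can be sketched as structure matching over the decomposition. The function names follow the text's [CAUSE ([THING], [NOT-ALIVE ([THING])])]; the tuple encoding and the traversal are an illustrative toy, not the theory's inference procedure:

```python
def decompose_kill(killer: str, victim: str):
    """LCS for 'kill' following the text: CAUSE(x, NOT-ALIVE(y))."""
    return ("CAUSE", killer, ("NOT-ALIVE", victim))

def entails_dead(cs, person: str) -> bool:
    """Walk the nested CS looking for an embedded NOT-ALIVE(person)."""
    if isinstance(cs, tuple):
        if cs[0] == "NOT-ALIVE" and cs[1] == person:
            return True
        return any(entails_dead(part, person) for part in cs)
    return False

event = decompose_kill("JOHN", "BILL")
print(entails_dead(event, "BILL"))   # True: the decomposition contains
print(entails_dead(event, "JOHN"))   # False: nothing about John's state
```

The entailment falls out of the representation itself: no meaning postulate linking *kill* and *dead* is needed, because NOT-ALIVE is already a constituent of the verb's LCS.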
The fifth requirement is the grammatical constraint: if other things were equal, a semantic theory that explains otherwise arbitrary generalizations about the lexicon and the syntax would be highly preferable. Conceptual semantics is a theory of meaning that shows how a handful of conceptual primitives organize the vast domain of lexical semantics. Conceptual semantics also explains how semantic entities are mapped onto syntactic entities in a principled manner. For instance, the linking principle in conceptual semantics states that the least embedded argument in the CS is mapped onto the least embedded syntactic argument, namely the subject. In (22b & c), [JOHN] is the least embedded argument
in both the action and thematic tiers; this explains why [JOHN] instead of [BALL] is mapped onto the subject of (22a). Jun (2003) is a conceptual semantic work on case; Culicover and Jackendoff (2005) offer conceptual semantic treatments of binding, control, and many other syntax-related phenomena. In short, conceptual semantics is an interface theory between syntax and semantics. The theory has a desirable consequence for the learnability problem, too. Language learners cannot acquire language solely by syntax or solely by semantics. As Levin (1993) demonstrates, a number of syntactic regularities are predicted by semantic properties of predicates. Conceptual semantics makes a number of predictions about syntax in terms of CS. Chomsky’s explanatory adequacy is a requirement for the learnability problem; conceptual semantics is thus a theory that aims to achieve the highest goal of a linguistic theory. The final requirement on a semantic theory is the cognitive constraint: a semantic theory should address interface problems between language and other peripheral systems like vision, hearing, smell, taste, kinesthesia, etc. Conceptual semantics fulfills this requirement, as CS is by definition a level of mental representation at which both linguistic and nonlinguistic modalities converge. Jackendoff (1987c) focuses on the interface problem, and shows, for instance, how the visual representation is formally compatible with the linguistic representation based on Marr’s (1982) theory of visual perception.
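The linking principle just described (the least deeply embedded CS argument is mapped onto the subject) can be sketched as a depth search over a nested CS. The tuple encoding and example values are my own illustrative assumptions:

```python
def arguments_with_depth(cs, depth=0):
    """Yield (entity, depth) for every atomic argument in a nested CS,
    where a CS is (FUNCTION, arg1, arg2, ...) and atoms are strings."""
    _function, *args = cs
    for arg in args:
        if isinstance(arg, tuple):
            yield from arguments_with_depth(arg, depth + 1)
        else:
            yield arg, depth

def link_subject(cs):
    """Linking sketch: the least deeply embedded argument wins."""
    return min(arguments_with_depth(cs), key=lambda pair: pair[1])[0]

# (22b): JOHN sits at depth 0 under CAUSE, BALL at depth 1 under GO,
# so JOHN, not BALL, links to subject position in (22a).
throw = ("CAUSE", "JOHN", ("GO", "BALL", ("TO", "AIR")))
print(link_subject(throw))  # JOHN
```

For a simple motion event with no CAUSE layer, the theme is the least embedded argument and becomes the subject, matching unaccusative-style sentences like 'The ball rolled.'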
Comparison with Other Works

Bierwisch and Schreuder's (B&S; 1992) work is another influential theory that makes explicit use of the term conceptual structure. Conceptual semantics shares two important assumptions with B&S, but there are crucial distinctions between the two theories. First, B&S also assume a separate level of conceptual structure. Their conception of CS is similar to Jackendoff's conception of CS in that CS is a representational system of message structure where non-linguistic factual/encyclopedic information is expressed. B&S, however, assume that CS strictly belongs to a nonlinguistic modality, and that the linguistic meaning is represented in another level called semantic form (SF). As a result, SF, but not CS, is the object of lexical semantics, and hence LCS does not make much sense in this theory. In the first section of this article, we discussed two possible views of CS; B&S take the former view of CS, whereas Jackendoff advocates the latter view. Second, SF in B&S's theory is compositional, as CS is in conceptual semantics. B&S's lexical decomposition relies on two sorts of elements: constants such as DO,
MOVE, FIN, LOC, etc., and variables such as x, y, z. Constants and variables are composed into a larger expression in terms of formal logic. (23a) illustrates B&S's SF for enter; (23b) is the CS for the same word in Jackendoff's theory.

(23a) [y DO [MOVE y] : FIN [y LOC IN x]]
(23b) [Event GO ([Thing ], [Path TO ([Place IN ([Thing ])])])]
One reason B&S maintain a purely nonlinguistic CS as well as a separate SF is that factual or encyclopedic knowledge does not seem to make much grammatical contribution to language. To B&S, there is a clear boundary where the semantic and the encyclopedic diverge. Pustejovsky’s (1995) generative lexicon (GL) theory is interesting in this regard. GL also assumes lexical decomposition. Pustejovsky’s lexical decomposition makes use of factual or encyclopedic knowledge in a rigorous formalism called the qualia structure. The qualia structure of book, for instance, expresses such factual knowledge as the origin of book as write(x, y) in the Agentive quale, where x is a writer (i.e., human(x)), and y is a book (i.e., book(y)). The qualia structure also expresses the use of the word in the Telic quale; hence, the lexical semantic structure for book includes such factual knowledge as read(w, y), where w is a reader (i.e., human(w)), and y is a book. The factual or encyclopedic knowledge is not only expressed in formal linguistic representations but also plays a crucial role in explaining a significant portion of linguistic phenomena. We interpret (24) as either Chomsky began writing a book or Chomsky began reading a book. Pustejovsky suggests generative devices like type coercion and co-composition to explain the two readings of (24) in a formal theory; i.e., writing or reading is part of the qualia structure of book, and, hence, the two readings of (24) are predicted by formal principles of lexical semantics. (24) Chomsky began a book.
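Pustejovsky's coercion account of (24) can be caricatured in a few lines: the qualia entry for *book* stores its Agentive and Telic values, and a coercing verb like *begin*, which needs an event argument, recovers that event from them. The dictionary layout and function names here are illustrative assumptions, not GL's actual formalism:

```python
# Toy qualia structures: Agentive = how the object comes into being,
# Telic = what it is for (after Pustejovsky, 1995).
QUALIA = {
    "book": {"Agentive": "writing", "Telic": "reading"},
}

def coerce_begin(subject: str, obj: str) -> list:
    """'subject began obj': coerce the object argument of 'begin'
    into the events stored in its Agentive and Telic qualia."""
    q = QUALIA[obj]
    return [f"{subject} began {q['Agentive']} the {obj}",
            f"{subject} began {q['Telic']} the {obj}"]

print(coerce_begin("Chomsky", "book"))
```

Both readings of (24) fall out of the stored qualia values, which is why the ambiguity is predictable rather than listed: adding an entry for another noun automatically yields its own pair of readings.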
It is far beyond the scope of this article to discuss the GL theory in detail. But the success of the GL theory for a vast range of empirical data shows that the boundary between semantic and encyclopedic or between linguistic and nonlinguistic is not so clear as B&S assume in their distinction between CS and SF.
Suggested Readings For a quick overview of conceptual semantics with one paper, see Jackendoff (1987a). For foundational issues of conceptual semantics, see Jackendoff (1983).
For an overview of language and other cognitive capacities from a broad perspective, see Jackendoff (1987c). Jackendoff (1990) offers a comprehensive picture of conceptual semantics. Jackendoff (1997) is a bit technical, but it is important to set up the parallel architecture of language. For syntactic issues of conceptual semantics, see Jun (2003) and Culicover and Jackendoff (2005). See also: Agrammatism II: Linguistic Approaches; Anaphora, Cataphora, Exophora, Logophoricity; Anatomical Asymmetries versus Variability of Language Areas of the Brain; Constants and Variables; Lexicon, Generative; Formal Models and Language Acquisition; Hyponymy and Hyperonymy; Meaning Postulates; Semantic Primitives; X-Bar Theory.
Bibliography

Berko Gleason J (ed.) (1997). The development of language. Boston: Allyn and Bacon.
Bierwisch M & Schreuder R (1992). 'From concepts to lexical items.' Cognition 42, 23–60.
Chomsky N (1970). 'Remarks on nominalization.' In Jacobs R A & Rosenbaum P S (eds.) Readings in English transformational grammar. Waltham: Ginn and Company. 184–221.
Chomsky N (1981). Lectures on government and binding: the Pisa lectures. Dordrecht: Foris.
Chomsky N (1995). The minimalist program. Cambridge, MA: MIT Press.
Collins A & Quillian M (1969). 'Retrieval time from semantic memory.' Journal of Verbal Learning and Verbal Behavior 9, 240–247.
Csuri P (1996). 'Generalized dependencies: description, reference, and anaphora.' Ph.D. diss., Brandeis University.
Culicover P & Jackendoff R (2005). Simpler syntax. Oxford: Oxford University Press.
Fodor J A (1975). The language of thought. Cambridge, MA: Harvard University Press.
Jackendoff R (1972). Semantic interpretation in generative grammar. Cambridge, MA: MIT Press.
Jackendoff R (1977). X-bar syntax: a study of phrase structure. Cambridge, MA: MIT Press.
Jackendoff R (1983). Semantics and cognition. Cambridge, MA: MIT Press.
Jackendoff R (1987a). 'The status of thematic relations in linguistic theory.' Linguistic Inquiry 18, 369–411.
Jackendoff R (1987b). 'X-bar semantics.' In Pustejovsky J (ed.) Semantics and the lexicon. Dordrecht: Kluwer Academic Publishers. 15–26.
Jackendoff R (1987c). Consciousness and the computational mind. Cambridge, MA: MIT Press.
Jackendoff R (1990). Semantic structures. Cambridge, MA: MIT Press.
Jackendoff R (1991). 'Parts and boundaries.' Cognition 41, 9–45.
Jackendoff R (1997). The architecture of the language faculty. Cambridge, MA: MIT Press.
Jackendoff R (2002). Foundations of language: brain, meaning, grammar, evolution. Oxford: Oxford University Press.
Jun J S (2003). 'Syntactic and semantic bases of case assignment: a study of verbal nouns, light verbs, and dative.' Ph.D. diss., Brandeis University.
Katz J J (1980). 'Chomsky on meaning.' Language 56(1), 1–41.
Katz J J & Fodor J A (1963). 'The structure of a semantic theory.' Language 39(2), 170–210.
Labov W (1973). 'The boundaries of words and their meanings.' In Bailey C-J N & Shuy R W (eds.) New ways of analyzing variation in English, vol. 1. Washington, DC: Georgetown University Press.
Levin B (1993). English verb classes and alternations. Chicago: University of Chicago Press.
Marr D (1982). Vision. San Francisco: W. H. Freeman.
Piñango M M, Zurif E & Jackendoff R (1999). 'Real-time processing implications of aspectual coercion at the syntax-semantics interface.' Journal of Psycholinguistic Research 28(4), 395–414.
Pustejovsky J (1995). The generative lexicon. Cambridge, MA: MIT Press.
Swinney D (1979). 'Lexical access during sentence comprehension: (re)consideration of context effects.' Journal of Verbal Learning and Verbal Behavior 18, 645–659.
Wertheimer M (1912). 'Experimentelle Studien über das Sehen von Bewegung.' Zeitschrift für Psychologie 61, 161–265.
Zurif E & Blumstein S (1978). 'Language and the brain.' In Halle M, Bresnan J & Miller G A (eds.) Linguistic theory and psychological reality. Cambridge, MA: MIT Press. 229–245.
Zurif E, Caramazza A & Myerson R (1972). 'Grammatical judgments of agrammatic aphasics.' Neuropsychologia 10, 405–417.
Lexical Conditions

P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.
In many schools of linguistics it is assumed that each sentence S in a natural language has a so-called semantic analysis (SA): a syntactic structure representing S in such a way that the meaning of S can be read off its SA in a regular way. The SA of a sentence S is distinct from its surface structure (SS), which corresponds directly with the way S is to be pronounced. Each language has a set of rules, its grammar G, defining the relationship between the SAs and the SSs of its sentences.

The SA of a sentence S is often also called its logical form, because the SA exhibits not only the predicate-argument structure of S and of its embedded clauses, if S has any, but also the logically correct position of tense, quantifiers, negation, modalities, and other possible operators, besides all the meaningful lexical items of the corresponding SS. SAs are thus analytical as regards their structure, not as regards their lexical items: the lexical items of SSs are left in place in SAs. In principle, SAs provide an analysis that goes as far as the lexical items and stops there; SAs do not specify lexical meanings. Lexical meanings are normally specified in dictionaries, but dictionaries do so from an SS point of view. Linguistic theories assuming an SA-level of representation for sentences, however, require that lexical meanings be specified at SA-level. The difference is that, at SA-level, lexical items are allowed to occur only in predicate positions. A surface sentence like (1a) is represented at SA-level as the tree in (1b), written as the linear formula (1c) and read intuitively as (1d):

(1a) The farmer was not working on the land.

(1b) [SA tree structure, corresponding to the linear formula in (1c)]
(1c) S[V[not] S[V[past] S[V[on] S[V[be] S[V[work] NP[the y S[V[farmer] NP[y]]]]] NP[the x S[V[land] NP[x]]]]]]

(1d) It is not so that in the past on the land the farmer was working.
The items not, past, on, be–ing, work, farmer, and land are all labeled ‘V’, which makes them predicates in (1b). In (1a), however, farmer is a noun, the past tense is incorporated into the finite verb form was, not is usually considered an adverb, working is a present participle in the paradigm of the verb work, on is a preposition, and land is again a noun.
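The bracketed formula in (1c) is simply a linear encoding of a tree in which every clause pairs a predicate (labeled V) with its argument terms. As an illustration only (the `S` and `NP` constructors below are my own shorthand, not Seuren's notation), the operator spine of (1a) can be built and read off as follows:

```python
# Illustrative sketch: the SA of (1a) as a nested tree. The nesting
# mirrors the operator order in (1c): not > past > on > be > work.
# This representation is a hypothetical shorthand, not Seuren's formalism.

def S(pred, *args):
    """An SA clause: a predicate (labeled V) plus its argument terms."""
    return {"V": pred, "args": list(args)}

def NP(term):
    """An argument term, e.g. a definite description."""
    return {"NP": term}

sa = S("not",
       S("past",
         S("on",
           S("be",
             S("work", NP("the y: farmer(y)"))),
           NP("the x: land(x)"))))

def spine(node):
    """Walk down the clausal embedding, collecting each predicate."""
    preds = []
    while isinstance(node, dict) and "V" in node:
        preds.append(node["V"])
        node = next((a for a in node["args"]
                     if isinstance(a, dict) and "V" in a), None)
    return preds

print(" > ".join(spine(sa)))  # not > past > on > be > work
```

The point of the sketch is that every item of (1a), whatever its surface category, occupies a predicate position in the SA; the grammar G then maps this structure onto the surface form.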