Simply disappointing: A response to Crain and Pietroski


Lingua 116 (2006) 69–77 www.elsevier.com/locate/lingua

Denis Bouchard
Département de linguistique, Université du Québec à Montréal, Succ. Centre-Ville, C.P. 8888, Montréal, Canada H3C 3P8
Received 23 May 2004; received in revised form 24 May 2004; accepted 24 May 2004
Available online 14 July 2005

Abstract

In a criticism of my paper 'Exaption and Linguistic Explanation', Crain and Pietroski argue that Generative Grammar provides deep generalizations concerning the way children acquire competence in languages. I show that their examples of analyses are mere descriptions and that an exaptive approach to language acquisition provides more insightful avenues to explore.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Acquisition; Generative descriptivism; Exaptive explanation

Crain and Pietroski (henceforth C&P) say that the force of the generative program lies with "the interesting questions it has fostered, the phenomena that have been discovered by taking those questions seriously, and the explanations of such phenomena—concerning both adult competence and the process of language acquisition." I agree with them on the first two points. For instance, it is one of Chomsky's major contributions that he put to the forefront the question of the link between linguistic competence and its acquisition: once we propose a model of linguistic competence, we must provide a convincing scenario about how it comes to be mastered by a child. Moreover, the formal precision in which the generative proposals are couched sparked the interest of many bright individuals and has drawn many to the field. This has led to the study of a wealth of phenomena.


My disappointment is that the level of explanation attained falls quite short: the explanations are engineering solutions which are of roughly the order of complexity of the original problem (Chomsky, 2000:93), usually consisting of descriptive lists. Though I presented a few core cases in 'Exaption and Linguistic Explanation' to illustrate this fact and suggested how an alternative exaptive approach is more insightful, C&P dismiss my criticism as focusing "too narrowly on the theoretical vocabulary used in the latest draft of one chapter devoted to certain details concerning the nature of recursion and transformation in natural language." Moreover, if at times statements err on the descriptive side, they minimize this as a normal first step: "this is just one chapter in the bigger book." However, as I already pointed out in the paper that they criticise and in my response to my other critics, this is neither simply a first stage, nor are the taxonomies of construction-specific elements restricted to any particular theoretical vocabulary: I showed that the weakness of explanation—in particular, resorting to ad hoc lists—runs through the various implementations of Generative Grammar. C&P choose not to mention the specific examples I brought up, and simply repeat the dogma that human languages are transformational, and subject to constraints listed in UG. To ignore problems may comfort them in their convictions, but this is escapism, not science.

Let me be very explicit. I challenge anyone to come up with a transformational analysis of adjectival modification that comes anywhere close to the exaptive analysis of Bouchard (2002) in its precision of description and its depth of explanation. I must emphasize once again that my goal is not to criticise any particular linguistic model, but rather to find a more satisfactory level of explanation than ad hoc listed elements.
C&P say that I fail to see that a deeper level of explanation has been underlying the various versions of GG and can be found in proposals about universal and unifying principles of syntax and semantics which are claimed to be part of the innate human endowment for language acquisition. Their argumentation revolves around "the hypothesis that children acquire grammars in accordance with core principles of Universal Grammar, which makes a limited number of options available. Children effectively select from this range of options in response to linguistic experience." I cannot present a full-blown alternative account of acquisition in this brief response, but we will see that the way their model establishes the "range of options" available to children makes it equivalent to an ad hoc list, just as in the cases I previously discussed, and that, here too, a more satisfactory explanation can be found in an exaptive approach.

Nativists inspired by Chomsky typically start off with the observation that known formal languages allow an immense space of logically possible grammatical principles, yet the principles discovered so far are restricted to a very small fraction of this space. Moreover, most of the generalizations expressed by these principles are unexpected, in the sense that in order to learn them, children would have to be exposed to negative data. In the face of this state of affairs, two options are presented. Either children actually do have access to data rich enough for some inferential techniques to determine the grammatical rules of natural languages (e.g., as suggested by Pullum and Scholz, 2002), or "the grammatical principles that describe the space of possible human languages—and thus constrain the child's path to language acquisition—are not learned at all" (Crain and Pietroski, 2002:166). Nativists consider that the first option is implausible, that speakers "know much more about language than they could plausibly have learned on the basis of their experience" (Crain and Pietroski, 2002:163). So they conclude that "children know basic grammatical principles largely by virtue of their innate biological endowment" (Crain and Pietroski, 2002:165).

This is a hasty conclusion, however, since there is a third option which basic scientific methodology asks that we regard as the initial hypothesis. The approach adopted is overly formalist, so it fails to take into account initial conditions which arise from the logically prior properties of the perceptual and conceptual substances. Yet these properties necessarily impose boundaries within which a child charts a highly circumscribed course in language development. To give a simple example, children learning an oral language "know" by virtue of their physiological make-up that they cannot have sentences with two words produced simultaneously. Therefore, considerations of simplicity and theoretical economy require that we first exhaust the potential explanatory plausibility of these necessary, logically prior properties before falling back on a list of contingent constraints, even more so if the latter are roughly of the same degree of complexity as what they are supposed to explain. Yet working on purported contingent constraints of "Universal Grammar" is pervasive in GG. As I indicated in my response to previous critics, this is uncomfortably close to Descartes' leap.

A second problem with the hasty conclusion is that the absence-of-stimulus argument depends on a particular way of presenting the data. For example, I show in chapter 6 of Bouchard (2002) that in the core case of constraints on long distance dependencies, the induction problem for language acquisition is entirely due to the way the data are presented. We are told that Primary Linguistic Data indicate to the child that wh-phrases may be moved in some languages. This opens a vast space of logical possibilities.
However, the actual space of possibilities is restricted: the apparently open domain for movement is closed in certain constructions. Yet children do not make the error of overgeneralizing and never produce constructions which violate the constraint. Since the data don't provide information about these limits, nativists propose that a UG principle (such as the ECP) must account for them. This standard approach crucially assumes, along the lines of Ross (1967), that unbounded dependencies are the unmarked case,1 and that children must discover what closes this potentially very open domain. Contrary to what C&P claim, many "important aspects of Chomsky's program that have remained essentially unchanged since the 1950s" do depend on particular choices in the description of the phenomena. This comes from the types of operations recurrently present in the various theoretical notations used over the years. In the case of long distance dependencies, the key idea is that there is an unbridled movement that must somehow be reined in. However, there is another way to look at these dependencies, suggested as early as Koster (1978): the unmarked case is that all dependencies are strictly local, and a learner extends a local domain only if there is positive evidence to do so. The learning task is therefore to discover the conditions under which a locality domain is unlocked. This only requires positive evidence.2 Since evidence that domains should be extended as in Island violations never comes up in the child's linguistic experience, there is no reason for the learner to make the error of trying to extend a domain in that way. Crucially, no negative evidence is required, nor any "innate" device like the ECP to rule out these cases negatively.

1 In contemporary terms, successive cyclic movement is expected by the child.
2 In Bouchard (2002), I make the more precise claim that the extensions of domain that do occur can be determined on the basis of elementary notions defined on logically prior properties—Juxtaposition and lexical items, with the latter involved in obligatory selection that makes the extension salient and recoverable.

C&P discuss three cases concerning the way children acquire competence in languages. They claim that these illustrate that the "evidence supports the conclusion that human languages are transformational, and subject to constraints that (at least apparently) do not reflect basic principles of logic or communication or learnability." However, as we will see, the argumentation—typical of what is found in the generative program—exhibits the same limitations as the example of long distance dependencies. First, it depends on a particular way of describing the phenomena. This comes from a second limitation: it is overly formalist, so, as indicated above, it misses alternative explanations. The precepts of their framework limit their inquiry to a point that leads them inadvertently to the adoption of a list of contingent constraints.

Consider the first purported piece of evidence showing that human languages are transformational, and subject to specifically linguistic constraints. Speakers of English judge that (1b), but not (1c), constitutes a good interrogative counterpart for (1a).3

(1) a. A unicorn that is eating a flower is walking in the garden.
    b. Is a unicorn that is eating a flower walking in the garden?
    c. *Is a unicorn that eating a flower is walking in the garden?

C&P present the standard account of the ungrammaticality of (1c), which is based on the assumption that, in the case of questions beginning with an auxiliary, this verb is understood as related to another verb in the sentence. But there are constraints on such relations with another position: an auxiliary verb cannot be understood as related to an embedded position. What is going on here, from this point of view, is a "process of moving words around to form questions" (Pinker, 1994:40), and the "explanation" consists essentially in answering the questions in (2).

(2) a. What is moved? Answer: Tense.
    b. What happens if there is more than one Tense? Answer: it is always the Tense of the main clause that moves.
    c. What is the position to which Tense moves? Answer: the COMP of the main clause.

The innovation in the GG treatment of examples like (1) is that it centers on formal properties of the system. This has the important consequence that attention is directed not only to what takes place in languages, but also to what we could expect to find given possible formal systems and yet does not actually take place. Thus, all along, generativists have been concerned with contexts where rules don't apply. This is reflected in studies on acquisition, many of which concern types of errors which do not occur but which we could expect children to make in a model of acquisition by simple analogy. Under the GG view, the reason why children do not make errors as in (1c) is that the rule/constraint which is listed in UG just does not allow that form.

3 I modified their example in order to have the form is twice; the basics of the argument are not affected.


Underlying the questions and answers in (2) is the observation that the process of inversion in questions does not operate on strings of words but on phrases. That is why children confronted with a sentence with two tensed auxiliaries do not move the first occurrence of an auxiliary word, but look at phrases and move the AUX after the subject, i.e., they move the Tense of the main clause. Though this particular argument is original to GG, the idea that an important part of syntax is hierarchical in nature is far from new: it has been part of linguistic models for a long time.4 What brings generativists to think about question formation as potentially linear rather than hierarchical is that the formalism being used allows it. This appears clearly in Pinker's (1994:43) comment: "One could just as effectively move the leftmost auxiliary in the string to the front, or flip the first and last words, or utter the entire sentence in mirror-reversed order [...] The particular ways that languages do form questions are arbitrary, species-wide conventions." This is a setback from already established knowledge, and constraining the transformation to fit the facts is just solving a problem which the approach created.

Though the question of acquisition is a very important innovation brought forth by GG, the conception of what is going on in examples like (1) makes the analysis fall short of explaining the facts. To the question why children don't make the error in (1c), GG answers that that's just how children are made. This is not taking the question very seriously. The model provides some improvement in the precision of the description of phenomena like (1), but it leaves basic questions unanswered. Why is Tense involved in question formation? Why is it only the Tense of the main clause? Why does having Tense in COMP correlate with a question interpretation?

GG does not answer any of these questions: it just provides tools that give a (partial) description of the phenomena. This descriptivism is clearly revealed by the fact that if we imagine a very different state of affairs concerning the way languages form questions, nothing of importance changes in the analysis. For instance, if it were the Det of the direct object that moved to COMP to form questions, all you would need to do is change the list of elements that are targeted, since no reason is given why Tense should be involved. If the moved element could come out of embedded clauses, escape hatches could easily be provided. If it moved to a position other than COMP, all you would need to do is change the list of landing sites. In short, having Tense in COMP happens to be a way languages form questions, and things could easily have been otherwise according to the GG approach, since the "explanation" lies in the rules/constraints/principles which are arbitrarily listed in UG as species-wide conventions.

In contrast, an exaptive approach to language immediately brings these 'why' questions to the forefront. Consider why placing the tense-bearing V of the main clause in a position outside of IP expresses the illocutionary force of direct interrogation. This effect can be explained by extending a proposal of Ladusaw (1995), following ideas of Frege and Davidson (see also Bouchard, 1998, 2002). A predication is a description of a class of events, and this description is the ISSUE about which we must make a judgement.

4 Not only is the factual observation quite old, so is the realization of the importance of syntagmatics for the system. For instance, it is at the heart of Saussure's system. Moreover, syntagmatics has been argued to derive from an element of the perceptual substance, i.e., linearity (cf. Tesnière, 1959 (following Saussure, 1916:170); Kayne, 1994; Bouchard, 2002). This hierarchical aspect of syntax is therefore strongly motivated because it is based on a logically prior property, in accordance with exaptive grammar.


The ISSUE is a matter of the whole sentence. In an affirmative sentence, the speaker expresses a positive judgement by placing the ISSUE under the immediate scope of the deictic Tense, i.e., the Tense of the main clause, which is determined with respect to the moment of speech. If the deictic Tense is expressed outside of the IP (i.e., in COMP as in (1b)), the ISSUE is presented as being separated from Tense, as not being established: this induces a question interpretation, a request to know whether the ISSUE should be considered established or not. Under this view, it is no accident that Tense in COMP has the illocutionary force of a question.

Another important question concerns the fact that there are other ways to form questions. The GG approach has nothing significant to say about this: it just lists the properties. In contrast, the exaptive approach leads to insights about why there are other ways, and what they are. Languages such as Chinese, Japanese, and Korean illustrate a second way to express the illocutionary force of interrogation. These languages typically have a Q-particle appearing in Comp (Cheng and Rooryck, 2000), as illustrated for Korean in (3).

(3) chelswu-ka mues-ul po-ass-ni
    Chelswu-NOM INDEF-ACC see-PAST-Q
    a. 'What did Chelswu see?'
    b. 'Did Chelswu see something?'

The particle ni marks the sentence as interrogative. Without ni, this sentence is interpreted as the declarative 'Chelswu saw something'. The sentence with ni is interpreted either as a yes/no question or as a question bearing on mues-ul. However, the sentence is not ambiguous, because there is an intonational difference between the two variants: the intonation peak is on the subject or the verb under the yes/no question interpretation, whereas an intonation peak on mues-ul results in a questioned-phrase interpretation (Cheng and Rooryck, 2000). In the latter case, the scope of mues is determined by which Comp ni appears on. So instead of the positional strategy used by English and French to provide a signifiant for the illocutionary force of question, Korean uses the morphological marking ni on COMP. This marking by a particle is just another option allowed by our physiology as a signifiant. It is therefore fully motivated. What would be unexpected, given the availability of markings, is to have no language using a marking to express questions: this accidental gap would have to be explained by the theory.

A third pattern to express the illocutionary force of interrogation is to have a particular rising intonation on the sentence, as in the French example (4), without inversion or the question formative est-ce que:

(4) Jean a acheté un livre?
    'Jean has bought a book?'

We saw a possible explanation why placing the tensed V outside of the IP produces the illocutionary force of a question. The fact that a rising intonation can encode this illocutionary force is also motivated. It is quite generally the case across languages that an intonational rise signifies incompleteness, whereas an intonational fall indicates completeness (Vaissière, 1995; Lotfi, 2001). For instance, when a speaker enumerates the items of a list, a rising intonation on an item (represented as < in (5)) indicates that the enumeration is not completed, whereas a falling intonation appears on the last item, which signals completeness.

(5) a. Il y avait Paul<, son frère<, ses sœurs<, et sa mère.
    b. 'There was Paul<, his brother<, his sisters<, and his mother.'

This may explain why an intonational rise is frequently used to signal questions: it indicates that the discourse is incomplete, hence it is a request to complete the information.

This brief discussion indicates that we need not stop at simply listing rules, constraints and principles in UG, which is a rather uninsightful way of "trying to provide the most economical characterization of adult competence, and [...] trying to explain how children acquire adult competence given typical experience." A deeper explanation can be achieved regarding why a particular grammatical operation happens to be a way to form questions. Under this view, what is going on in question formation as in (1) is quite different from moving words or phrases around: Tense is being generated outside of the basic clausal structure; this changes its relation to the ISSUE and results in the illocutionary force of a question. There is no reason why children would make an error as in (1c) by simple analogy, since this is not at all analogous to what they are doing.

Similar remarks hold for C&P's example about constraints on the possible readings of pronouns. Grammatical principles such as Principle C of the Binding Theory tie together diverse phenomena like those in (6), (7), and others.

(6) He is hiding WMD in Rumsfeld's basement.
(7) I know who he is hiding in the basement.

C&P say that this kind of principle, which expresses regularities across many particular examples, is expected under the GG view on acquirability, "otherwise children would be forced to learn the local grammar piecemeal, further delaying the attainment of a grammar that is (moderately) equivalent to that of adults in the same linguistic community." Moreover, the hypothesis that this kind of principle is part of our innate endowment "increases the pressure to simplify theories of that endowment." However, both of these claims are erroneous. Concerning simplicity, Chomsky has often remarked that this is an interesting property of the language system precisely because it is unexpected, given the frequent messiness of biological systems. There is also no reason to expect diverse phenomena to be tied together in GG, because each phenomenon could depend on a separate principle listed in UG: this does not force children to learn the local grammar piecemeal, since this kind of principle is not learned at all, according to the hypothesis.

Exaptive grammar suggests that we go further than simply asking what elements are involved (answer: pronouns) and what configuration is blocking some readings (c-command); these are just initial factual descriptions. To go beyond this descriptivism, we can ask ourselves why pronouns are involved. The answer resides in the nature of pronouns: they are referentially dependent elements, typically on an R-expression. Therefore, the reference of the R-expression must be determined prior to the interpretation of the pronoun. As for the configuration involved in (6) and (7), the pronoun is a subject and a predication applies to it. This plausibly necessitates that the interpretation of the subject be determined prior to applying the meaning of the predication to it. The problem in these examples is that the R-expression on which the pronoun depends is interpreted as part of the predicative phrase which depends on the subject pronoun. The interpretation of the pronoun and the interpretation of the predicative phrase are at cross purposes, so the relevant readings are impossible. This provides a plausible line of argumentation which explains why pronouns are involved, and why a certain configurational relation with the relevant R-expression renders some readings impossible.

C&P also discuss properties surrounding the notion of entailment. For reasons of brevity, and lack of expertise, I will not analyze these here. In any event, the seminal contributions on this topic arose in formalist traditions other than that of GG. If interesting questions have been raised and phenomena have been discovered, in this case as in the others discussed above, it is due to the use of precise formal tools, not to the UG tenets of Generative Grammar. Formalization can lead to a search for explanations. As C&P themselves remark, "given abstract syntax and constraints on transformations, one would like some account of where all that comes from." But the tenets of GG only direct its adherents to the facile option of listing in UG the descriptive statements obtained by formalization. As a result, the search for explanations stops short: external causes deriving from prior conditions of the perceptual and conceptual substances are not sought after, because the absence of evidence is evidence for UG.
Believers in this form of UG may perversely come to hope that explanations based on external properties will not be found, that there is poverty of evidence, because not finding such explanations strengthens their brand of nativism. To leap to a list of descriptive devices in UG and attach the name "explanation" to this falls short of my expectations: however much more precise the descriptions may get, this does not make them more explanatory. I propose instead an approach to language that produces causal relations that are informative, that allow us to understand why things happen as they do. This is possible if we search among logically prior properties for the effects they may have on the language system. In this way, we may advance beyond descriptivism.

References

Bouchard, Denis, 1998. The syntax of sentential negation in French and English. In: Forget, D., Hirschbühler, P., Martineau, F., Rivero, M.-L. (Eds.), Negation and Polarity: Syntax and Semantics. John Benjamins, Amsterdam & Philadelphia, pp. 29–52.
Bouchard, Denis, 2002. Adjectives, Number and Interfaces: Why Languages Vary. Elsevier Science, Oxford.
Cheng, Lisa Lai-Shen, Rooryck, Johan, 2000. Licensing wh-in-situ. Syntax 3, 1–19.
Chomsky, Noam, 2000. Minimalist inquiries: the framework. In: Martin, R., Michaels, D., Uriagereka, J. (Eds.), Step by Step: Essays on Minimalist Syntax in Honor of Howard Lasnik. MIT Press, Cambridge, Mass., pp. 89–155.
Crain, Stephen, Pietroski, Paul, 2002. Why language acquisition is a snap. The Linguistic Review 19, 163–183.
Kayne, Richard, 1994. The Antisymmetry of Syntax. MIT Press, Cambridge.


Koster, Jan, 1978. Locality Principles in Syntax. Foris, Dordrecht.
Ladusaw, William, 1995. Semantic interpretation of negative concord. Presented at the Colloquium Négation: Syntaxe et sémantique, University of Ottawa.
Lotfi, Ahmad R., 2001. Iconicity: A Generative Perspective. Ms., Azad University.
Pinker, Steven, 1994. The Language Instinct: How the Mind Creates Language. William Morrow & Co., New York.
Pullum, Geoffrey, Scholz, Barbara, 2002. Empirical assessment of stimulus poverty arguments. The Linguistic Review 19, 9–50.
Ross, John Robert, 1967. Constraints on Variables in Syntax. Doctoral dissertation, MIT. Published with revisions as Infinite Syntax! (1986). Ablex, New York.
Saussure, Ferdinand de, 1916. Cours de linguistique générale. Edited by Tullio de Mauro. Payot, Paris, 1967.
Tesnière, Lucien, 1959. Éléments de syntaxe structurale. Éditions Klincksieck, Paris.
Vaissière, J., 1995. Phonetic explanations for cross-linguistic prosodic similarities. Phonetica 52, 123–130.