Substance, structural analogy, and universals

John Anderson
PO Box 348, Methoni Messinias 24006, Greece

Article history: Available online 11 April 2013

Keywords: Substance; Linguistic categories; Modularity; Structural analogy; Universals; Autonomy

Abstract

This contribution presents a view of language as pervasively substantively based. That is, linguistic categories and structures represent extra-linguistic mental 'substance' – conceptions and perceptions – in the sense that the latter determine the range of linguistic categories and their structural behaviour. The distribution of basic categories and the relations between them reflect their substance. Moreover, the dimensions of linguistic structure – notably constituency/dependency and linearity – are, like the categories themselves, grammaticalizations of substance: respectively, cognitive connectedness and salience, and the perception of time. These substantively based dimensions are the basis for sub-modularization within syntax and phonology. Perceived similarities in substance underlie analogies between syntactic and phonological structure, and the lack of them discrepancies in structure. The relative generality of categories and structures throughout languages reflects the cognitive/perceptual salience of what they represent. Language universals, in particular, are substance-based rather than being conventionalized, or autonomous.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

It is argued in Anderson (2011) that language is representational. Specifically, languages are internalized cultural artefacts that represent properties of the non-linguistic cognition they structure. This linguistic structuring has not been shown to be reliant on specifically linguistic biological determinants, as opposed to general cognitive capacities. The imposing of linguistic structure I shall refer to as grammaticalization; what linguistic structure grammaticalizes can be called its substance (without denying that this cognitive substance may be structured in other ways). Recognition of this status for languages is essential to an understanding of how language works. Some consequences of this recognition are pursued here by addressing three questions, each following on from the preceding, and initially from the assumption that language is a representational artefact.

2. What kind of artefact is language?

In the first place, and as is assumed in traditional 'notional' grammars, the distribution of the basic categories of syntax is determined by their semic content, the properties of cognition with respect to which they are distinguished. Thus, as is familiar, the category noun is distinguished by denoting, i.e. providing a representation for, a class of concepts perceived as phenomena of a particular type, namely entities. Prototypical entities, which can be represented as a noun, are conceived of as stable, discrete, and concrete; and this determines their distribution. Prototypical nouns would include such English nouns as pebble or dog. But entity status and nounhood are flexible. Thus, concepts that are not stable, such as what is represented by dawn, or discrete, such as what is represented by matter, or concrete, such as what is represented by spirit, can be
treated as entities, and so as nouns, with something of the distribution of the prototypical. They thus show, along with other nouns, a distribution that reflects their role as representations of the entities that participate in mental scenes, types of which are denoted by verbs. Scenes are prototypically dynamic, relational, and concrete, and are represented by English verbs like go or murder. They are the nucleus of the predications that represent scenes and they bind together the representations of the entities that participate in them. But again there are peripheral verbs, such as dwell, representing a non-dynamic scene, or minimally relational rain, whose only 'participant' (such as might be represented by in Glasgow) is not a defining one (even in the case of Glasgow as a location for rain). It is not a true participant in the scene, but rather a circumstantial, a possible but not a necessary participant, or complement. And of course, as with nouns, there are verbs that represent abstract scene-types, such as interpret. Both nouns and verbs prototypically denote the concrete; abstractions introduce figurative representations – though, of course, the grounds for the figure may have become opaque in particular instances. The membership and syntax of syntactic categories and the nature of the secondary categories that may be attributed to them – say, gender in the case of nouns and aspect with verbs – are reflections of their semic content.

But the other aspects of linguistic structure are equally representational of substance, most obviously phonological categories such as consonant and vowel. They grammaticalize properties that belong to human perception of sound. Vowels are perceived as more sonorous than consonants, capable of variations in tone, and as relational: they form the envelope on whose margins consonants are arranged. There is some scope for non-prototypicality here too. Sonorant consonants can be interpreted as structurally syllabic; and, on the other hand, vowels can be voiceless (as in Japanese), that is, minimally sonorous. But phonological representation is only concrete. And this relates to its being a grammaticalization of perceptual substance.

The difference in substance of the basic categories of syntax and phonology defines the two planes of language. The latter are sets of representations constructed on the basis of the appropriate alphabet of categories. And the relationship between these sets of representations is the fundamental re-representation that we can associate with language. Re-representation refers to the addition, to a representation based on a particular substance, of a representation based on a different substance. The syntax-phonology re-representation, seen from the syntactic point of view, has been referred to as exponence.

The nature of this can be most directly observed in the internal structure of the minimal signs stored in the lexicon. The mental lexicon stores signs which associate a phonological representation with individual grammaticalizations, including crucially syntactic categorization, of cognitive concepts. Traditionally, these associated representations have been distinguished as the expression pole and the content pole of the sign. Exponence adds the former pole of the sign to the latter pole, as shown in Fig. 1.

Fig. 1. Planes, poles, and signs.
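
To make the pole/exponence terminology concrete, here is a minimal sketch (in Python; the names and flat encodings are my own illustrative assumptions, not the author's notation) of a sign as a pairing of a content pole with an expression pole, with exponence read off as the mapping from the former to the latter.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Sign:
        """A minimal lexical sign: a content pole paired with an expression pole."""
        content: str                  # syntactic categorization, e.g. '{N}'
        expression: Tuple[str, ...]   # segments of the word form

    def exponence(sign: Sign) -> Tuple[str, ...]:
        """Exponence viewed from the content pole: the expression pole added to it."""
        return sign.expression

    # A toy entry for the noun 'book': categorization {N}, expounded by its segments.
    book = Sign(content='{N}', expression=('b', 'u', 'k'))
    print(book.content, '->', ''.join(exponence(book)))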

The content poles, or (minimally) words, contain the categories whose substance-based requirements largely determine syntactic structure. The expression poles, or word forms, are themselves typically complex, as indicated in the figure by the introduction of the segment unit that largely bears the categorizations mentioned above. Language can thus be said to have a 'double articulation'. However, at this point, I am principally interested in introducing the notion of re-representation, in the form of exponence, and the reverse relationship that adds the content pole to the expression pole.

I talk here of re-representation as involving 'addition' of a distinctively based representation. This applies also to different aspects of structure internal to each plane. We can think of a language as a set of re-representations that, from one perspective, starting from a set of cognitively-based syntactic categorizations, cumulatively adds further representations that introduce properties that enable the final addition, before motoric implementation, of a full phonological representation of our perception of the sound of a particular utterance. Seen from this direction, successive re-representations add to expressivity. Seen from the other direction, re-representation involves the addition of successive interpretations – interpretative re-representations – that approach a representation of cognitive substance. This returns us to substance, for what I am now going to look at is how each re-representation adds a grammaticalization of a distinct mental substance.

Let's look at what re-representation involves in terms of a simple example. We shall firstly encounter the grammaticalization of the kind of cognitive salience we can designate relational primacy; it is grammaticalized as the head-dependent relation (on whose formal properties see e.g. Marcus, 1967, chs. 5 and 6). The dependence/dependency appealed to here is only one variety of 'dependence' in a broad sense: the head is a characteristic, relationally salient, and distribution-determining simple element in a linguistic construct consisting of the head and its dependent(s).

The most concentrated connection with general cognition is provided by the lexicon. On the basis of this connection there can be selected from the lexicon a set of signs whose collective content serves as the basis of a representation of some potential cognitive content. It is the categorization of these signs that decides what possible combinations there might be, and which drives the erection of hierarchical structure that is projected by the set of signs. Say that one sign is a verb with two complements; the complements are distinguished by the semantic relations they bear to the verb. We can provisionally represent the category, P for predicator, and its valency as in (1):

(1)  Relevant syntactic categorization: {P/{agent}{neutral}}
     Abbreviated exponence: read

The braces enclose category labels, and the slash introduces to its right the categories of the valency, enclosed in inner braces. Say we select the name 'Daisy', assumed to be identifiable by the hearer, as one complement and that the other is a particular member of the set denoted by 'book' that is also assumed to be identifiable by the hearer. The former participant is lexically definite, but the second is syntactically complex, consisting of a referential element, that expounded as the, and the sign that denotes the appropriate set that the referent is a member of. The first element is a functional category that I designate as a determiner, so referential, that takes the category of the second as a complement, as represented in (2):

(2)  Relevant syntactic categorization: {D/{N}}
     Abbreviated exponence: the

For simplicity, neither of the categorizations in (1) and (2) includes the subcategorization and other information that distinguishes them from other signs that have the same valency. The noun in (3) is, as is prototypical, not complemented:

(3)  Relevant syntactic categorization: {N}
     Abbreviated exponence: book

Comparison between (1) and (3) illustrates rather nicely the difference in relationality between prototypical verbs and nouns. This is in accord with their respective substances. What about Daisy? I take this, again provisionally, for the sake of illustration, to expound the same relevant category as the, but, unlike the latter and like the noun, it is not complemented:

(4)  Relevant categorization: {D}
     Abbreviated expression: daisy
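
A hedged sketch of how the four signs in (1)–(4) might be encoded, reading the valency as the categories (or semantic relations) a sign requires as complements; the class and function names are mine, not part of the theory, and the check simply asks whether one sign can satisfy a slot in another's valency.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Sign:
        exponent: str                  # abbreviated exponence
        category: str                  # primary category: 'P', 'D' or 'N'
        valency: Tuple[str, ...] = ()  # categories or relations required as complements

    # The four signs of (1)-(4) in this toy encoding.
    read  = Sign('read',  'P', ('agent', 'neutral'))   # {P/{agent}{neutral}}
    the   = Sign('the',   'D', ('N',))                 # {D/{N}}
    book  = Sign('book',  'N')                         # {N}
    daisy = Sign('daisy', 'D')                         # {D}

    def can_complement(head: Sign, dependent: Sign) -> bool:
        """True if the dependent's category satisfies a slot in the head's valency."""
        return dependent.category in head.valency

    print(can_complement(the, book))    # True: the book forms a unit, as in (5)
    print(can_complement(the, daisy))   # False: a determiner does not complement a determiner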

We now have four signs, with their categorizations. It is easy to see, given these representations, that the and book can form a unit, which is projected syntactically as represented graphically in (5):


(5)

The discontinuous lines associate the syntactic categorization with the rest of the sign, and the solid line specifies a head-modifier relation: the noun expounded by book is dependent on the determiner that requires a noun complement. We have here the beginning of a hierarchical structure that grammaticalizes cognitive salience: determiners are prototypically complemented, relational – one contribution to salience; and reference, which relates to the extra-linguistic, is more salient than denotation, which is merely an aid to identification. The determiner enables the noun to play a (non-predicative) role in the sentence and determines the distribution of the phrase. We return to categorial salience below. If we take (5) together with (4) we now have two potential verb complements. But in order to realize their potential as satisfiers of verb valency, the determiner phrases must depend on an appropriate semantic relation, for that’s what identifies the complements of verbs. This is illustrated by the representation for the latter part of The book lay on the table in (6):

(6)

The semantic relation locative, belonging to another functional category, is shown as taking a determiner complement; it is the semantic relation that, as head, determines the distribution of the phrase. However, the missing first phrase in (6) apparently lacks a relation in The book lay on the table, despite the valency of the verb – as do both the determiner phrases in (4) and (5). Yet Daisy read the book, in particular, is an acceptable (if rather unexciting) sentence of English, though apparently lacking semantic relations that satisfy the valency of the verb. This is resolved if there is a lexical provision for converting determiners to semantic relations. That is, lexically the determiners in (4) and (5) are expanded by conversion as in (7) and (8):

(7)

(8)

Here agent and neutral are two of the semantic relations that determiners can be converted to, thereby identifying their manner of participation in the represented event. Neutral is the unmarked relation, and its precise semantic character depends on the kind of verb it complements. In The book lay on the table it introduces the located entity, and in Daisy read the book it represents the goal of the action. The conversion introduces a dependency relation between the derived category and the base: as in (6), the semantic relation is the head and so determines the distribution of the item. Both lexical and syntactic heads determine the syntax of the configuration they head. However, the lexical dependencies in (7) and (8) are intrinsically non-linear; the configuration is realized as a single word. In a similar fashion, nouns can be converted into determiners. In this way they acquire referentiality lexically, rather than syntactically as in (5) and (6); by itself a noun merely denotes. Thus in Daisy reads books we can represent books as in (9):


(9)

And another such conversion is associated with the verbs that we have been looking at. This emerges if we compare a sentence like Daisy may read the book with Daisy read the book. The may expounds another functional category, the finite element, and is the head of the sentence:

(10)

Compare this representation with (11):

(11)

Here the verb has been converted in the lexicon to a finite, extending the kind of representation in (6). The saliency of F derives from its status as the guarantee of the potential independence of the predication. Categorial cognitive salience mainly depends on two kinds of relationality: (a) the capacity to impose relations on other categories, and (b) the ability to relate an utterance to the context of utterance. All the functional categories, including determiner and finiteness, are relatively salient in terms of (a); members of the category that bears semantic relations, functors, are particularly strong in view of the number of relation-types imposed and the role of these in relating arguments to predicators. {F}, the head of any sentence, is central in relating that sentence to the context of utterance, in specifying the speech act and deictic parameters. Determiners, referential and possibly deictic, are also (b)-salient. Nouns are minimally relational; verbs are very type-(a) relational for a non-functional category.


Contemplation of (10) and (11) now raises the question of the status of the configuration headed by the other semantic relation, that in (7). Before we address this, I should observe that the syntactic dependency relations in these various representations are, despite what might be suggested by the graphic presentation, not yet linearized: they are not intrinsically non-linear, but they are pre-linear. Linearity, a grammaticalization of, most basically, the perception of time, involves a further re-representation. Thus far we have been looking at the configurational consequences of the categorizations derived from the lexicon, in particular their valency requirements. It is a combination of the categorizations and the dependency relations that determines the unmarked linearities of the words in sentences. Thus I shall in the first place add into the representation in (11) the other configuration satisfying the valency of the verb, in the form of (12), on the understanding that in this case too the apparent linearity is arbitrary:

(12)

Linearization proceeds on the basis of the information in (12). However, there is a further element in the determination of the appropriate linearization. There is a learnable principle of English syntax whereby one complement of the verb is selected to emerge as initial in such a sentence, as its subject. This has been called the subject-selection hierarchy, and it ranks the semantic relations in terms of their eligibility for assuming subject-status. So far we can see that, apparently, agent outranks neutral (as in Daisy read the book) and the latter outranks locative (as in The book lay on the table). This hierarchy is a further grammaticalization of the usual selection of topic. We have a motivation for choice of subject; what we must now consider is the mechanism that assigns subject status in English. Some light is thrown on this by the full version of (10), given as (13):

(13)

This makes it explicit that it is not P that Daisy comes to precede, but the finite element, whether it is a separate word or a consequence of conversion in the lexicon. Daisy is the subject of F, though a participant of P. The formulation of subject formation I propose depends on the recognition of argument-sharing as a property of just this kind of structure; that is, in very specific circumstances, two semantic relations can govern the same dependent. And the same mechanism can also allow for traditional 'raising', and indeed 'control' structures (see Section 3). Let us look at the basis for this mechanism.
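
The subject-selection hierarchy, as far as it has been established here (agent outranks neutral, which outranks locative), can be given a toy rendering along the following lines; the list and the function are illustrative assumptions rather than a full statement of the hierarchy.

    # Subject-selection hierarchy as established so far: agent > neutral > locative.
    SUBJECT_HIERARCHY = ['agent', 'neutral', 'locative']

    def select_subject(participant_relations):
        """Pick the highest-ranked semantic relation as the designated subject."""
        for relation in SUBJECT_HIERARCHY:
            if relation in participant_relations:
                return relation
        return None  # e.g. weather verbs, where a default neutral (expletive) steps in

    print(select_subject(['agent', 'neutral']))     # 'agent'   (Daisy read the book)
    print(select_subject(['neutral', 'locative']))  # 'neutral' (The book lay on the table)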


Unlike P in (13), the F is not complemented by a semantic relation. The assumption here is that at least the neutral relation must be present with the relationally central F and P: it is a default, in the absence of neutral in the valency of the F or P. Thus in It is raining/It rained the verb lacks a participant valency, and it is one of these default neutrals that is realized as the expletive. More precisely, the latter expounds an uncomplemented D that satisfies the valency of the neutral of P, since that relation, like any other semantic relation, normally requires a determiner. However, the F component of rain, likewise lacking a valency, must also lack a semantically-motivated D; hence the two default neutrals share the expletive. Similarly, in (12) and (13) a default neutral depends on the finite element, but in this case it derives its complementing determiner from the valency of the verb, in the form of the designated subject. We should then extend (12) as in (14):

(14)

The default neutral associates itself with the designated subject in sharing its complement. Recall that (14) is still unlinearized. The linearization is now determined exhaustively by the categories and syntactic dependencies in the representation in (14). Default linearization in English is head-before-dependent – and as we can see applies to most of the dependencies in the structures we have looked at. But the phrase associated with a default neutral of P shows dependent-before-head, in common with a minority of other constructions. This circumstance overrides the default, and together they give the linearizations in (15):

(15)

The position of the subject is determined by argument-sharing with the default neutral of {F}, whose positioning is marked. What we have arrived at is the next stage in a hierarchy of re-representation: from the categorizations of a bunch of signs to these plus dependency relations between them, we now have that plus linearization of the signs. In the above account I have oriented the hierarchy from the point of view of expression. From the point of view of interpretation, the pre-finite complement of a verb, for instance, is interpreted as shared with a neutral dependent on the finite element; and, further, this neutral is not part of the valency of F.

We can now add a further structural component. A sentence is associated with an intonation contour, which can express a secondary feature of finiteness, but whose alignment is, in part, determined by the linear placement of the elements in the sentence. Thus in Suzie's flying to London? a particular intonation contour alone expresses a kind of propositional interrogativeness. Compare the interrogative of Is Suzie flying to London?, whose expression involves linear positioning. Interrogative mood is a property of finiteness, but in Suzie's flying to London? the unmarked placement of the nucleus of its intonational expression is normally localized on the terminal lexical component of the sentence – as crudely indicated by the underlining:


(16)

On {interrogative} as a secondary feature of F, see again Section 3. Intonation has now been added to the cumulative hierarchy of re-representation. However, what is of even more significance here than the hierarchy is that each stage involves the addition of a linguistic representation of a different aspect of substance. Not just the categorization of signs but also the structures they project are substantively based. Intonation is a grammaticalization of a dimension of aural perception. Linearity is a grammaticalization of our perception of time. Dependency is a grammaticalization of what we perceive as cognitively connected by relative saliency in relational power.

Fig. 2. Substance, modules, and re-representation.
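
One way of spelling out the two columns of Fig. 2 as data, on my reading of the surrounding text (syntax assigns dependency before linearization, with intonation added last before exponence; phonology linearizes before assigning dependency). This ordering is inferred from the discussion, not copied from the figure.

    # Substance-based sub-modules, ordered in the direction of expression
    # (an inference from the text's description of Fig. 2, not a copy of it).
    SYNTACTIC_HIERARCHY = [
        'notional categorization (lexicon)',
        'dependency assignment',
        'linearization',
        'intonation',
    ]
    PHONOLOGICAL_HIERARCHY = [
        'phonological categorization',
        'linearization',
        'dependency assignment',
    ]

    for plane, modules in (('syntax', SYNTACTIC_HIERARCHY),
                           ('phonology', PHONOLOGICAL_HIERARCHY)):
        print(plane + ':', ' > '.join(modules))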


The stages in the hierarchy are substantively based. Each stage involves addition of a substantively-based sub-module of syntax. And the plane of syntax itself is distinguished as a module from phonology by its substantive basis, its particular basic alphabet, the notional categories. Modularity as a whole is determined substantively. And particular (sub-)modules are motivated by the substantive content whose expression they introduce.

We can diagram much of the above as in the left-hand column of Fig. 2 (adapted from the prologue to Anderson, 2011, vol. I, p. 7), which is oriented in the direction of expression. The right-hand column lays out a similar hierarchy for phonology, with the two hierarchies being linked at the top and the bottom. And, again from the point of view of expression, the arrows at the top and bottom introduce, by exponence, lexical phonology and utterance phonology. However, what I now want to focus on are the similarities and the differences between the two hierarchies – which serve to introduce the next question and the next section.

3. How similar are phonology and syntax?

The phonological hierarchy of sub-modules has the same components as the syntactic – except that the basic alphabet in this case is phonological: the sound-based alphabet does not define merely a sub-module but the module as a whole. Moreover, the hierarchization is also distinct from that in syntax, in terms of the placement therein of the dependency and linearization sub-modules. We have still to look at motivations for the phonological hierarchy of sub-modules; but Fig. 2, if its contents can be supported, already illustrates both the existence of analogies in structure between syntax and phonology and limitations on the extent of these. Durand (MS) sums up the notion of structural analogy as follows (and see too Durand, 1990, pp. 281, 286, 1995, Section 6):

[T]he construction and characterization of linguistic categories and constituents rest on fundamental structural analogies which apply from phonology through to syntax-semantics. Analogy, however, does not mean identity: the primitives of phonology have neither the same cognitive grounding as the primitives of syntax-semantics nor the same role, and the complexity of syntactic constituency, which is directly put to the service of the expression of meaning, is much greater than the complexity of phonological structures.

Both aspects of analogy, positive and negative, have a substantive basis: the presence of analogies in structure between the planes and their absence are also both substance-driven (cf. Anderson, 2006a – and on other approaches to analogy Anderson, 1987; also Scheer, this volume). The basic analogy is that both planes represent substance. Fig. 2 suggests that they also show substance-based sub-modularization. Phonology has a basic alphabet that is a grammaticalization of aspects of what we perceive as sound that serve the expressiveness of language. Phonology has sub-modules of linearization and dependency. The latter establishes dependency relations among the segments on the basis of their categorization, as in syntax. The linearization module deals with those linearizations that are determined by the categorization of segments, and, unlike in the syntax, it does not reflect the dependency relations between them. The expression pole of a sign is not normally unitary. However, much of the sequencing of segments constituting the pole follows from their categorization.
On the other hand, the sign must specify the sequence of the bundles of segments that belong together as parts of the same syllable. This assumes an idea of the expression pole such as is represented schematically in Fig. 3. The syllabic bundles, arbitrarily limited to three for illustration, are sequenced individually for each sign, but within each bundle, the component segments, again arbitrarily limited to three, conform to redundancies. Let’s look at what generalizations seem to be involved in erecting the basic syllabic level of phonological structure and how this relates to the hierarchy of sub-modules. Attention to hypersyllabic phonology is precluded here for reasons of space (see e.g. Anderson, 1986, 2011, vol. III, Section 6.2.1). In the unmarked case, the vowel, the most sonorous segment, is head of the syllable, and consonants are positioned on either side, and if there is more than one consonant on either side, their relative positions are in general in accord with relative sonority. In addition, there are partially language-specific, or at least not universally-implemented sequencing stipulations that supplement sonority, as with syllable-final [-kt] in many languages, or which even reverse the sonority requirement, as with initial [s] + stop clusters in English and other languages. Even apparent exceptions to such determinacy of intrasyllabic sequence arguably contrast with the non-exceptional in their categorizations (as with the affricate that terminates German Schatz). At this point, before proceeding, we must investigate the categorization of phonological segments, including the representation of sonority.

Fig. 3. The sign relation.


We can associate segment categories with having valencies. But largely these are not of the character I’ve so far attributed to syntactic categories. The latter are shown with complements in (15). But syntax must also allow for adjuncts, dependents that seek a head rather than being required by a head, as with the last item in Daisy read the book yesterday, whose status, and its syntactic consequences, are shown in (17):

(17)

The locative realized as Yesterday not only requires a D as a complement, but also (optionally, in its case) seeks a verb to modify. The latter is indicated by the backward slash before {P} in its valency. The effect of this in the syntax is to introduce the locative as a dependent of a higher P that replicates the modified P that is itself introduced from the lexicon, thus creating a more inclusive verb phrase, as also shown in (17).

Complementation and adjuncthood are also found in phonological structure, but the latter is much more common. Thus the structurally optional consonant in pay is adjoined to the vowel, giving the configuration in (18), where {C} represents a plosive and {V} a vowel:

(18)

I come below to what motivates the placement of the consonant before the vowel, that is, what determines onset vs. rhyme allegiance for a consonant. We can introduce a pre-vocalic sonorant consonant, specifically a liquid, also dependent on the vowel, which, being more sonorous than [p], comes closer to it, as in pray. Basic serialization is sonority-driven. But the liquid, {V;C}, less consonantal than [p], will also depend on the [p], as shown in (19):

(19)

The consonant that contrasts most with the vowel, that is perceptually most distinctive, is the head of the cluster. The [r] depends both on a consonant lower than it in sonority (C<), and on a vowel. (19) has also introduced componentiality – as we shall see, an analogy with the syntax. In the representation of the primary category of [p], C appears alone, but it combines with V in representing [r]. The V and the C are again regarded as privative features whose combinations define categories, rather than being atomic category labels. Moreover, the semi-colon indicates that in this representation the property to the left is stronger than that on the right: this is another manifestation of the dependency relation (see e.g. Anderson and Durand, 1986, Section 3.3.1), which concept can be expressed graphically in various ways. C preponderates over/governs V in fricatives: {C;V}. These categorizations serve to rank [r] between [p] and [e] in terms of its sonority. And it is this categorization that leads to the serialization in (19).
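
A small sketch of how such categorizations might drive intrasyllabic serialization: sonority is read off the mix of C and V in a representation, and pre-vocalic consonants are ordered with sonority increasing towards the vowel, so that the liquid of pray ends up adjacent to it. The numeric scale is an invented convenience; only the ordering matters.

    # Sonority read off privative C/V componentiality (toy scale; only the order matters).
    SONORITY = {
        '{C}':   0,   # plosive
        '{C;V}': 1,   # fricative: C preponderant over V
        '{V;C}': 2,   # sonorant consonant: V preponderant over C
        '{V}':   3,   # vowel
    }

    def serialize_onset(consonants):
        """Order pre-vocalic consonants with increasing sonority towards the vowel."""
        return sorted(consonants, key=lambda seg: SONORITY[seg[1]])

    # 'pray': [p] is {C}, [r] is {V;C}; the liquid comes closest to the vowel.
    onset = serialize_onset([('r', '{V;C}'), ('p', '{C}')])
    print([segment for segment, _ in onset])   # ['p', 'r']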


The cluster in spray introduces a further consonant. This is a fricative, intermediate in sonority between [p] and [r], normally representable as {C;V}. But it quite generally doesn’t come between them in such clusters. The [s] has a special status here: it is the only consonant that can precede an initial voiceless stop. Therefore, it can be represented contrastively in this circumstance as simply the unspecified consonant that is adjoined to only voiceless stops; and this unspecified consonant stands outside the sonority requirement on sequencing. Contrastively, it lacks any measure of sonority. The structural consequence is shown in (20):

(20)

[s] here belongs to a different system from [s] in, say, say, where it contrasts with a range of other segments, not merely with its absence. The special status of this [s] correlates with exceptional behaviour in a number of languages, such as the fact that each initial [s] + stop constitutes a distinct alliterative unit in Old English verse. Similarly, the apparent indeterminacy of sequencing exhibited by, for instance, ask vs. axe, both involving post-vocalic [s] and [k], is resolved by the observation that the two occurrences of [s] belong to two different systems and so have two different categorizations. It is the final [s] in axe that is special in this instance. Alphabetic writing (and hard-core phonemics) obscures this polysystemicity. Syntax and phonology share adjunction, then. Phonology can also show complementation such as we encountered in syntax. This involves us in looking firstly at the status of pre- vs. post-vocalic placement of consonants. A range of phenomena, such as rhyming in verse, suggest that post-vocalic consonants are more closely related to the vowel than the pre-vocalic, such that they come to constitute a sub-constituent with the vowel. That is, as a first approximation, we might represent rent as in (21):

(21)

The second-highest {V} is the head of the lower constituent, the rhyme, which contains a less-inclusive rhyme that we shall return to, and shall then represent rather differently. And linearizations within the rhyme are in conformity with relative sonority. The hierarchization of constituents in (21) suggests that the syllable bundles of Fig. 3 contain a sub-bundle. In the case of (21) we have {r{t,n,e}}. The inner bundle is linearized first, with decreasing sonority, then the outer with increasing sonority, and at each stage the head is the most sonorous segment, the vowel.

Thus far in these lexical–phonological representations, intrasyllabic linearization and dependency seem to reflect, independently, simply the categorizations of the segments. However, some vowels take a complement. Selection of the complement of a vowel presupposes that sequencing has priority over dependency assignment in that case, as is supposed in Fig. 2. As is familiar, we can on various grounds divide the English vowels of the major (accent-bearing) system into two groups, sometimes called 'checked' vs. 'free'. [e] is a checked vowel, and so normally (ignoring morphological and other 'appendices') heads a rhyme with at least one other segment, and possibly two, as in (21). The free vowel of (18)–(20) normally takes only one coda consonant, as in (sp)rain, and may lack one. A single consonant after a checked vowel, being necessary, is thus a complement, and we can say that each rhyme, whether checked or free, has only one adjunct. The need for a complement may involve sharing the initial consonant of a following syllable, as in English.
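
A rough sketch of the checked/free contrast just outlined, under the simplifying assumptions stated in the comments: a checked vowel demands one rhyme consonant as complement and allows one further adjunct, while a free vowel takes no complement and at most one adjunct (ignoring 'appendices').

    def rhyme_ok(vowel_type: str, coda_consonants: int) -> bool:
        """Very rough rhyme well-formedness check, ignoring morphological 'appendices'.

        Checked vowels need a coda complement and allow one further adjunct;
        free vowels take no complement and at most one adjunct.
        """
        if vowel_type == 'checked':
            return 1 <= coda_consonants <= 2
        if vowel_type == 'free':
            return coda_consonants <= 1
        raise ValueError(vowel_type)

    print(rhyme_ok('checked', 2))  # True:  e.g. rent
    print(rhyme_ok('checked', 0))  # False: a checked vowel needs its complement
    print(rhyme_ok('free', 0))     # True:  e.g. pay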


The complement to the vowel head is identified as the adjacent coda consonant: identification of the complement, and thus the dependency relation, pre-supposes prior linearization. It’s not just that the complement is the more sonorous rhyme consonant, because this doesn’t cover all cases, as illustrated by act. The sub-modules of phonology are apparently hierarchized as in Fig. 2. Even if we associate complement status in act ultimately with the difference in categorization between the consonants, we can at least say that linearization in the phonology does not presuppose dependency. As concerns the relation between sub-modules, we have both analogy and dis-analogy between the planes. Such considerations mean that we should substitute (22) for (21):

(22)

The vowel has the valency /C, i.e. it takes any consonant (not just {C}, plosive), and it takes as complement the rhyme consonant assigned adjacency to it by sonority, thereby preserving projectivity. As with Yesterday in (17) and verbs, being an adjunct of a vowel is, for a consonant, optional, but it is the default option. Nevertheless a consonant may alternatively be a complement, as with, in the syntax, Yesterday in I enjoyed Yesterday. If the preceding is just, complement and adjunct are structural properties that are shared between phonology and syntax. Syntax and phonology can be perceived as similar in this respect. Anderson (2006a, 2011, vol. III, Section 1.7) suggests that we can also associate specifiers with phonology, in the shape of the [s] of (20), which displays behaviour reminiscent of syntactic specifiers such as very. However, complements in particular are marginal in phonology, and may lack any motivation in many languages, whereas transitivity is fundamental to syntax. This reflects the different roles of the two planes, in particular the need for syntax to represent scenes with multiple (true) participants.

And we can also now observe other discrepancies within a general picture of the preference for analogy. The preference itself arises on account of the common mental capacities that are applied to the structuring of two domains with perceived similarities therein. But there are also differences in the substances and in the expressive functions of the planes that frustrate analogy. Thus, such trees as (20) and (22) maximize dependency, short of violating projectivity, or 'tangling'. Anderson (2011, vol. III, Section 2.5) suggests that this has to do with maintaining timing relations. Syntax does not maximize dependency in general, but does allow non-projectivity, or 'tangling', as in (16) – or, more strikingly, (23) (where, for present purposes, I am not concerned with the nature of to-read, apart from its – non-finite – verb status):

(23)

Here there are two neutrals that are not part of a valency, and they both share with the subject of to-read. The lower one, dependent on the upper P, combines, in this 'control' structure, with the experiencer that is part of the valency of the verb. Here the argument sharing permits compact expression of a complex scene involving an entity participating in two different ways. This kind of complexity is unnecessary in phonology – which can therefore opt for the more restricted option of eschewing 'tangling', even if it is tightly controlled. The ambisyllabicity of [pɪ[t]ɪ] is also consistent with 'no-tangling'.
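
The projectivity ('no-tangling') condition invoked here can be sketched as a simple check on whether any two head–dependent arcs cross in the linear order; the index-based encoding is mine. Phonological structures, on this account, respect the condition, while syntactic structures such as (16) and (23) need not.

    from itertools import combinations

    def is_projective(arcs) -> bool:
        """True if no two head-dependent arcs cross ('tangle') in the linear order.

        Each arc is a pair of string positions (head_index, dependent_index).
        """
        def crosses(a, b):
            (a1, a2), (b1, b2) = sorted(a), sorted(b)
            return a1 < b1 < a2 < b2 or b1 < a1 < b2 < a2

        return not any(crosses(a, b) for a, b in combinations(arcs, 2))

    # Nested (projective) arcs vs. crossing ('tangled') arcs over positions 0-3:
    print(is_projective([(0, 3), (1, 2)]))   # True
    print(is_projective([(0, 2), (1, 3)]))   # False: the arcs cross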


This adds another discrepancy between the planes, besides the difference in the hierarchization of the planes shown in Fig. 2. In both cases, the dis-analogy has a substantive basis. Such structures as that in (23) are not only unnecessary in the representation of the substance of phonology, but inimical to their direct manifestation in time. This property of phonological representations also underlies the ranking of linearization above dependency assignment that is embodied in the right-hand column of Fig. 2: linearization directly reflects categorization.

However, the representation in (23) also introduces another analogy, in the form of the presence of secondary categories, one of which, that of mood, is instantiated by the features {interrogative} and {declarative}. The representations of both phonology and syntax require a distinction between primary and secondary categorization. Primary categories determine the basic distribution of the segment/word; secondary features may fine-tune this, but they also participate in other types of regularity. For example, the secondary phonological feature of voicing, which is one manifestation of the secondary equivalent of V, represented {v}, may be required to be either present or absent in adjacent obstruents – as in English sift and swapped vs. sieved and swabbed. The feature {v}, which shares an attenuated 'sonority' property of {V}, thus has a primary congener, and {interrogative} perhaps not; but Anderson (2011, vol. III, part III) argues that both types – with and without primary equivalent – are found in both phonology and syntax. And, in terms of substance, dedicated secondary features nevertheless cohere with that of the primary category that they typically select. Thus, the secondary category of diathesis reflects the relationality of prototypical verbs, and the secondary category of aspect reflects their dynamism.

Anderson (2011, vol. III) describes a range of other analogies and dis-analogies based in both cases on substantive considerations (and see already Anderson, 1992). For instance, in order to characterize sonority, componentiality was introduced above into the representation of phonological categories. We can distinguish the categories as in (24):

(24)  {C}    stop
      {C;V}  fricative
      {V;C}  sonorant consonant
      {V}    vowel
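
Anticipating the discussion below, the componential representations in (24) can be read mechanically: natural classes fall out of which features a representation contains and which of them governs, and markedness can be equated with the complexity of the representation. The encoding as (governing feature, governed feature) pairs is an illustrative assumption.

    # The categories of (24) as (governing feature, governed feature); None = absent.
    SEGMENTS = {
        'stop':               ('C', None),   # {C}
        'fricative':          ('C', 'V'),    # {C;V}
        'sonorant consonant': ('V', 'C'),    # {V;C}
        'vowel':              ('V', None),   # {V}
    }

    def contains(seg, feature):     # class by containment: continuants contain V, consonants C
        return feature in SEGMENTS[seg]

    def preponderant(seg):          # class by preponderance: the governing feature
        return SEGMENTS[seg][0]

    def complexity(seg):            # markedness correlates with representational complexity
        return sum(f is not None for f in SEGMENTS[seg])

    print([s for s in SEGMENTS if contains(s, 'V')])         # continuants
    print([s for s in SEGMENTS if preponderant(s) == 'C'])   # obstruents
    print({s: complexity(s) for s in SEGMENTS})              # {C} and {V} are least marked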

As we progress from left to right, relative sonority – embodied in the proportion of V present – increases. Analogous here is the representation of 'nouniness' in syntax (i.e. the extent to which the syntactic behaviour of a category approximates to that of a noun) in terms of the proportion of primary features present in a representation (Anderson, 2011, vol. III, Section 3.1 – and see Section 4 below). But there are still other motivations for appealing to componentiality – both in phonology and in syntax – though this is disguised by the notation for syntactic categories appealed to so far. Componentiality is another partial analogy in structure. And pursuit of its consequences begins to introduce considerations to do with universals.

4. What is the status of language universals?

The componential representations in (24) allow us to distinguish natural classes. Thus, the class of continuants is the class of segments that contain V, just as the class of consonants contains C. Obstruents have a preponderance of C; sonorants have preponderant V. Likewise, the complexity of the representations correlates with the relative markedness of the class or segment-type. The least marked are {V} and {C}. This status is reflected in their distribution in the languages of the world and in acquisitional priority – though, of course, other factors are also operative in the latter case, in particular. But it is hypothesized that increasing markedness, associated with greater complexity of representation, correlates with decreasing generality.

Likewise, syntactic categories are defined componentially – but with a difference. First of all, the set of basic categories is potentially larger, and so their categorization introduces further complications. In particular, depending on the language, we may have to distinguish adjectives as well as nouns and verbs. More fundamentally, it is necessary to make a distinction, as anticipated, between functional and lexical, or contentive, categories. And it is the presence of the former that introduces a number of dis-analogies with phonology. Recall, for instance, the role of the neutral semantic relation in allowing for 'tangling', a feature of syntax but not phonology. Both these differences from phonological categorization – increased number/complexity and the functional vs. contentive distinction – are embodied in Table 1 (adapted from Anderson, 2007), which contains representations of syntactic primary categories.

Table 1. Primary syntactic categories.

Functional:            Finiteness {P/}   Comparator {P.N/}   Determiner {N/}   Functor { /}
Non-functional:
  1: Contentive        Verb {P;N}        Adjective {P:N<>}   Noun {N;P<<>>}
  2: Non-contentive                                                            Name { }

The functional categories are all necessarily complement-taking, even if this valency is satisfied lexically, as a result of conversion; the contentive categories are varyingly likely to have complements, as indicated by the hierarchy of angle brackets in Table 1. This correlates with their categorization, constructed of the scene feature P and the entity feature N: the former is relational and dynamic, the latter discrete and stable. Verb, the most relational contentive, with a preponderance of P, is most often complemented, and noun, where N is preponderant over P, is only marginally complement-taking (unless converted from a verb). Intermediate is the adjective. It is introduced in Table 1 as combining the representations of verbs and nouns – {P;N} and {N;P} – abbreviated as {P:N}. Its corresponding functional category involves an equal combination of P and N ({P.N}). Each functional category except the lowest in the Table has a corresponding contentive category, which is its typical complement. The final functional category is a pure relation, primary-category-free; the semantic relations are its secondary features (Anderson, 2006b). To it corresponds the non-contentive { }: names are non-denotational, but, when activated by a name-giving, identify individuals (Anderson, 2007). Recall here the representation in (23), which we can now translate in (25) into the componential notation of Table 1:

(25)

Names in English acquire the capacity to refer only by conversion to {N} (Anderson, 2007, ch. 8), as in the case of Daisy in (25). Despite being in some sense the non-relational equivalent of functors, names belong with neither contentive nor functional categories: unlike contentive categories, they do not denote a class, though they may identify an individual as belonging to a (gender) class, and they are never predicative; but they are non-relational, and belong to a class that crosslinguistically is varyingly open-ended, as we find, more strikingly, with the classes denoted by adjectives.

Once more, as in phonology, natural classes emerge: the categorization of contentive categories includes preponderance/dependency; any category containing P can be predicative; categories with a preponderance of P – {P} and {P;N} – are verbal, those with a preponderance of {N} are nominal. We can also define, in terms of proportion of N, a dimension analogous to that of sonority, what has sometimes been called 'nouniness' (e.g. Ross, 1973; Anderson, 1997, Section 2.6.3). And relative complexity within the two sets of functional and contentive categories defines relative markedness. Non-contentive categories are less marked than contentive. Early acquisition is dominated by the differentiation of { }, i.e. absence of distinction in categorization, into commands {P}, referentials (typically deictic) {N}, and vocatives { }. These are the basis for the development of more complex categorizations, as both the lexicon and combinatory possibilities are expanded, the latter via the recognition of syntactic valency (/), the former via recognition of denotation, as distinct from reference. Syntactic markedness is, of course, also reflected in relative generality throughout languages. All these properties of a componentiality based on privative features are analogously displayed in syntax and phonology.

Markedness raises questions to do with the status of universals and its relation to substantiveness, as we have already observed in relation to phonology. Adjectives, as belonging to the most complex of the contentive categories, are not as widespread in language; there are languages that lack adjectives or in which they constitute a closed class (Dixon, 1982), and others (such as Cherokee – Lindsey and Scancarelli, 1985) in which they are all overtly derived from verbs or nouns. This correlates with their markedness. And their markedness reflects their lack of cognitive distinctiveness. Their categorial representation combines almost contradictory specifications – {P;N} and {N;P} – and the presence of these reflects their cognitive overlap with the other syntactic categories. Adjectives are traditionally said to denote (notably gradient) 'attributes' or 'qualities'. Now, 'attributes' include 'states', which can also be represented by non-prototypical verbs (whose denotata are prototypically dynamic). Compare He lives and He is alive. Historically, indeed, alive is a prepositional (functor) phrase, another means of expression for 'qualities'. 'Attributes' also include 'properties', and 'properties' may be denoted by
non-prototypical nouns (a category whose denotata are prototypically discrete, and non-gradient). Consider He is (a) dependent, and compare She is a giant and She is huge. Schachter (1985, pp. 14–15) discusses what he regards as a typical closed class of adjectives, that of Igbo, presented in Table 2, where three of the sub-classes are gradient.

Table 2. Igbo adjectives.

ukwu 'large'         nta 'small'           (dimension)
ojii 'black, dark'   oca 'white, light'    (colour)
ohuru 'new'          ocye 'old'            (age)
oma 'good'           ojoo 'bad'            (value)

Adjectives even for these categories are apparently dispensable in particular languages. They are certainly frequently derived. Colour terms, for instance, may be based, metonymically, on words for entities whose possession of the colour property is typical. Consider the following Greek colour words: prasino 'green' (cf. praso 'leek'), kitrinos 'yellow' (cf. kitro 'citron'), portokali 'orange', galazio 'light blue' (cf. Ancient Greek kalais 'type of stone with greenish-blue colour'), kokinos 'red' (cf. koko 'bean, berry, particularly the kermesberry, used to dye scarlet'), mavros 'black' (cf. earlier amavros 'lightless, sightless'). Ble 'blue', mov 'purple', ros 'pink', gri 'grey', kafe 'brown', and aspros 'white' are loanwords. The source of some Greek colour adjectives with a long history is more uncertain, but kianos ('turquoise'), for instance, seems to be associated with 'a dark blue substance, used in the Heroic Age to adorn works in metal' (Liddell and Scott, 1889, p. 454) – though in the feminine it is also used of, among other things, 'the blue corn-flower'. Lefkos 'white' is associated with 'bright', 'light'. For more detail see Androulaki et al. (2006).

It has been claimed that some languages even lack a lexical distinction between noun and verb (see Mithun, 1999, Section 2.3 for some illustration and discussion). Even if this is the case, the robust distinction between argument and predicator (reflecting entity vs. scene) is maintained syntactically with the help of the functional categories {N/} and {P/} (and of course { /}), and is signaled by position and morphology in particular. Such a categorial distinction is plausibly a language universal, even if not necessarily lexicalized. Differentiation of entity and scene has the same basic status as voiceless stop vs. vowel. Universality is based on cognitive distinctiveness, and recurrence of the distinctive cognitive property. Less general distinctions in language are associated with cognitive properties that are less inherently salient and culturally more restricted.

Non-generality of linguistic properties may also indicate increased grammaticalization, or conventionalization – increased distance from the original represented substance. Some of these highly grammaticalized phenomena may be quite common, however, if they serve economy and other functions, particularly where cognitive distinctions may still be recovered. For instance, we can differentiate between two subsets of the semantic relations as core and non-core, where the latter are all sub-types of locative. On grounds of economy the core functor contrasts in Table 3 may be neutralized in expression in various ways, as represented there, where 'agentive' includes what is often distinguished as 'experiencer' and intransitive agentives are analysed as a combination of agentive and neutral.
In active languages there is neutralization of the expression of transitive and intransitive agentives; in so-called 'ergative languages', what is neutralized is all the combinations involving neutral, crucially the distinction between agentive and non-agentive neutrals (realized as an 'absolutive'); in subject-forming languages, only the transitive neutral is distinguished. However, the valency of the verb concerned, together with word-order distinctions, serves to differentiate among interpretations of the neutralization. These neutralizations instantiate different measures of economy, based on the variable grammaticalization of topicalizability. None of these neutralizations is universal, however functionally important and common it might be.

The categories and the modularization of language, the existence of and restrictions on structural analogies between syntax and phonology, and the relative generality of linguistic properties are all based on substance. The only universals are substantively-based. Failure to recognize this underlies the arbitrariness of the categories (such as 'S, V, O') invoked in the traditional pursuit of 'generic universals' (defined in Carr, this volume), as pursued by Greenberg (1963) and many others.

Table 3. Core functor neutralizations.
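
The body of Table 3 has not survived here, so the following sketch simply restates the three neutralization patterns described in the prose (a paraphrase of the text, not a reconstruction of the table); the labels and the encoding of argument types are mine.

    # Core argument types, with intransitive agentives treated as agentive+neutral,
    # as in the text: (semantic relations borne, whether the predication is transitive).
    ARGUMENTS = {
        'transitive agentive':    ({'agentive'},            True),
        'intransitive agentive':  ({'agentive', 'neutral'}, False),
        'transitive neutral':     ({'neutral'},             True),
        'intransitive neutral':   ({'neutral'},             False),
    }

    def marker(argument: str, alignment: str) -> str:
        relations, transitive = ARGUMENTS[argument]
        if alignment == 'active':
            # transitive and intransitive agentives are expressed alike
            return 'agentive marking' if 'agentive' in relations else 'neutral marking'
        if alignment == 'ergative':
            # every combination involving neutral collapses into an absolutive
            return 'absolutive' if 'neutral' in relations else 'ergative'
        if alignment == 'subject-forming':
            # only the transitive neutral is singled out
            return 'accusative' if transitive and relations == {'neutral'} else 'nominative'
        raise ValueError(alignment)

    for alignment in ('active', 'ergative', 'subject-forming'):
        print(alignment, {a: marker(a, alignment) for a in ARGUMENTS})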


Substantiveness also means that it is unnecessary to postulate an innate 'faculty of language' and/or 'language acquisition device' that contains 'naturalistic universals' (see again Carr, this volume). This is particularly obvious as concerns the distinctive features of phonology, but it is also true of their syntactic equivalents. The phonological contrasts manifested in individual languages are grammaticalizations of the perception of distinctions in sound made available by our articulatory apparatus. Syntactic categories reflect salient properties in our cognition. And relative frequency in language of the phonological and syntactic distinctions that are grammaticalized reflects their respective cognitive salience.

Further, as suggested in the previous sections, configurational and linear properties and others based on these – to do with adjacency, for instance – are substantively based, as is recursion (even if it is not manifested universally in language – cf. Everett (2010) – or, within language, outside syntax). As concerns recursion, it is not the case (despite Neeleman and van der Koot, 2006) that because dependency and constituency tree representations make possible the presence of recursion they predict that presence in every case of such representation. There are, for instance, good substantive reasons that can disfavour some types of recursion or recursion in general (e.g. Anderson, 2011, vol. III, part III). The presence of recursion, on the other hand, is not the only reason for attributing dependency or constituency to linguistic artefacts. Aside from this, the possible presence of recursion is rather far from being the most significant aspect of the human capacities displayed in the structure of language. The less mechanistic capacity for figurativeness, for instance, is much more characteristic of these – not to mention the compensating not-uniquely-human capacity for inertia.

All of the preceding accords with the view that languages are cultural artefacts, consistent with such recent work as is reported in Dunn et al. (2011), and that they represent mental substances. As was familiar until the latter half of the 20th century, an autonomous natural 'universal grammar' (or any equivalent there might be in the 'biolinguistic program' – see further e.g. Laks, this volume) is simply unnecessary: there is nothing for it, and only it, to explain.

References

Anderson, J.M., 1986. Suprasegmental dependencies. In: Durand, J. (Ed.), Dependency and Non-linear Phonology. Croom Helm, London, pp. 55–133.
Anderson, J.M., 1987. The tradition of structural analogy. In: Steele, R., Threadgold, T. (Eds.), Language Topics: Essays in Honour of Michael Halliday. John Benjamins, Amsterdam, pp. 33–43.
Anderson, J.M., 1992. Linguistic Representation: Structural Analogy and Stratification. Mouton de Gruyter, Berlin.
Anderson, J.M., 1997. A Notional Theory of Syntactic Categories. Cambridge University Press, Cambridge.
Anderson, J.M., 2006a. Structural analogy and universal grammar. Lingua 116, 601–633.
Anderson, J.M., 2006b. Modern Grammars of Case: A Retrospective. Oxford University Press, Oxford.
Anderson, J.M., 2007. The Grammar of Names. Oxford University Press, Oxford.
Anderson, J.M., 2011. The Substance of Language, vol. I: The Domain of Syntax, vol. II: Morphology, Paradigms, and Periphrases, vol. III: Phonology-Syntax Analogies. Oxford University Press, Oxford.
Anderson, J.M., Durand, J., 1986. Dependency phonology. In: Durand, J. (Ed.), Dependency and Non-linear Phonology. Croom Helm, London, pp. 1–54.
Androulaki, A., Gômez-Pestaña, N., Mitsakis, C., Lillo Jover, J., Coventry, K., Davies, I., 2006. Basic colour terms in modern Greek: twelve terms including two blues. Journal of Greek Linguistics 7, 3–47.
Dixon, R.M.W., 1982. Where Have All the Adjectives Gone? de Gruyter, Berlin.
Dunn, M., Greenhill, S.J., Levinson, S.C., Gray, R.D., 2011. Evolved structure of language shows lineage-specific trends in word-order universals. Nature 473, 79–82.
Durand, J., 1990. Generative and Non-linear Phonology. Longman, London.
Durand, J., 1995. Universalism in phonology: atoms, structures and derivations. In: Durand, J., Katamba, F. (Eds.), Frontiers of Phonology. Longman, London, pp. 267–288.
Durand, J., MS. Quelques remarques sur les prépositions de l'anglais, l'hypothèse localiste et le principe d'analogie structurale.
Everett, D., 2010. The Shrinking Chomskyan Corner: A Reply to Nevins, Pesetsky, Rodrigues.
Greenberg, J.H., 1963. Some universals of grammar with particular reference to the order of meaningful elements. In: Greenberg, J.H. (Ed.), Universals of Human Language. MIT Press, Cambridge, Mass., pp. 73–113.
Liddell, H.G., Scott, R., 1889. An Intermediate Greek-English Lexicon. Oxford University Press, Oxford.
Lindsey, G., Scancarelli, J., 1985. Where Have All the Adjectives Come From? The Case of Cherokee. Proceedings of the Annual Meeting of the Berkeley Linguistics Society, vol. 11, pp. 207–215.
Marcus, S., 1967. Algebraic Linguistics: Analytical Models. Academic Press, New York.
Mithun, M., 1999. The Languages of Native North America. Cambridge University Press, Cambridge.
Neeleman, A., van der Koot, H., 2006. On syntactic and phonological representations. Lingua 116, 1524–1552.
Ross, J.R., 1973. Nouniness. In: Fujimura, O. (Ed.), Three Dimensions of Linguistic Theory. TEC, Tokyo, pp. 259–376.
Schachter, P., 1985. Parts-of-speech systems. In: Shopen, T. (Ed.), Language Typology and Syntactic Description, Clause Structure, vol. I. Cambridge University Press, Cambridge, pp. 3–61.