Journal of Pragmatics 6 (1982) 281-292
North-Holland Publishing Company
ON BEING A MODEL
Jacob L. MEY *

* Author's address: Jacob L. Mey, The Rasmus Rask Institute of Linguistics, Odense University, Denmark.
This paper discusses the notion of "model", as it is currently used in much of the literature on AI, and confronts it with the commonly accepted humanistic concept of the same name. Three conditions on AI modeling are postulated: (1) it should represent language in use, (2) be psychologically plausible, and (3) strive towards a holistic approach. A word of warning is uttered concerning the use of metaphors, especially when dealing with models in a computer surrounding.
1. Chauncey's R...

Anyone who has seen Peter Sellers in his last movie (Being There; Ashby 1978) will remember the pathetic figure of Chance, the gardener who, by a series of unexpected events (and, indeed, chance) gets catapulted into mid-civilization from a sheltered existence among flowers and shrubs, at the astonishing age of fifty-plus. Chauncey Gardener, as he is called in his new identity, never has acquired any of the basic skills that we are told are indispensable for a life in our culture. When he says: "I don't read newspapers", his statement is strictly and literally true: Chauncey has never had the chance to learn how to read. His audience, of course (at a press conference), attributes his utterance to an unprecedented degree of sophistication and independence of the spirit. Chauncey cannot write either, for that matter; neither does he possess any of the other qualifications we regard as essential for a life in modern, late 20th century society. Actually, the only world knowledge he possesses stems from television: when his employer, the Old Man, who had found him as an orphan in the streets, but who has never cared about the boy's education, discovers that his work abilities improve when stimulated by TV, he has sets installed all around the house and garden, so Chance never gets to be lonely, or deprived of his hot line to the outside world.

Thus, Chance the gardener represents a true and authentic product of our
media-dominated, "total institution" of a society. As such, he is a prototype, a model, of certain important traits and trends in modern communicative behavior. Furthermore, Chauncey Gardener is a model that may well come alive one day, and not just in the personification of a brilliant movie actor.

Seeing this film, I was deeply moved and strangely fascinated, both by Peter Sellers' marvelous acting and by the problems that were implicit in this picture. How come, I asked myself, that such a perfectly preposterous, not to say ludicrous set-up: a modern "enfant sauvage" impressing Presidents and high finance by witless, and unwittingly produced, trivialities, not only becomes perfectly acceptable on the screen (after all, that may have something to do with Peter Sellers' gift of making even the most bizarre and contradictory characters become credible), but also, and much more importantly, Chance the gardener and Chauncey Gardener, Presidential Adviser, seem like persons you have known all your life: perfectly normal, perfectly plausible, even against the critical benefit of the camera's double eye.

Listening to Chauncey telling the President of the United States: "After winter, we'll have summer", we notice the puzzled look in the First Executive's eyes, and for a moment we feel that now the inevitable moment of truth must have arrived. But no: thanks to a little help from Chauncey's friend, the business tycoon whose wife had "found" Chauncey in the streets of Washington, D.C., just as the Old Man had found the boy Chance 50 years earlier, the President suddenly "understands" the depth of Chauncey's remark: from a banal observation on the revolving seasons, it becomes a profound and probing comment, fraught with hidden, but all the more powerful predictive capabilities with regard to the future of the nation's economy.
2. On being a model

Chauncey's fate can be used to illustrate a multitude of aspects of modern, industrialized society and its communicative strategies and devices. My own thoughts, however, after having seen this picture, for several weeks kept revolving around one particular feature of the movie's main character. The question that I kept asking myself was: in what sense does Chauncey's success represent reality? What does this gardener-turned-Presidential-Adviser have to tell us about the processes of thinking, speaking, and communicating, in short, about the things that are going on in our heads as well as in the heads of the people around us? In other words, is Chauncey Gardener in a way (and if yes, in what way) a model of human cognitive abilities and communicative capacities?

The above question can also be formulated in another manner. But before I go on to do that, I have to mention something else. A few months after having watched the Peter Sellers movie, I discovered that I had not been the only one
to be taken in by his tour de force in interpreting this "minimum pragmaticum" of human cognitive processing. In May of 1980, while the German AI society held a meeting at Bielefeld, West Germany, it so happened that some of the city's playhouses were showing exactly this picture. Two of the participants, Katharina Morik and Gisela Zifonun, went off one night to have what they called some "non-discipline related entertainment", and of course, happened right onto Peter Sellers' Chance. Which, naturally, gave rise to a whole chain of very discipline-related reflections on the nature of cognitive processes and their modeling in AI, as compared to their representation (not to say simulation) in the movie. The two participants published their afterthoughts on Chance and AI in a very entertaining piece, which they called "Welcome Mr. Chance in the AI Community!" (Morik and Zifonun 1980).

Here, too, they raise the certainly non-trivial problem whether or not Mr. Chauncey Gardener would be able to pass the Turing test. The question is not trivial, because it represents a dilemma: if Chance passes the test (by chance or otherwise?), as of course he should (after all, he's human!), then we must ask ourselves what it takes to pass such a test. The answer is: very little; a small amount of knowledge about plants and trees and the cycle of nature will take you a long way. Thus, the Turing test reveals itself as a rather vacuous procedure (as many wise people have said all along, of course). But suppose now Mr. Chance flunks the test. Then he is worse than many a computer program, and the test is worthless again, since it doesn't do what it was meant to do: give us a way of distinguishing between a human and a non-human intelligence (or between intelligence and non-intelligence). We would have to put a human in the class of not-too-intelligent machines, and everybody can see what that would do to our image. (I'll come back to this problem below.)

Be this as it may, the truly interesting question remains open: if we wish to attribute model qualities to either an intelligent program or a dumb human, in what sense can we say that the model is a valid representation of anything except itself? If intelligent human behavior can be modeled by a dumb machine, or, vice versa, if machine behavior (non-intelligent in the human sense) can be modeled by a dumb human, then what is the scientific value of such modeling? If anything can be a model of anything, what do we need models for?

Under this angle, Mr. Chance's choices and chances of either passing (or failing) a Turing test are truly Hobsonian: do it, and you're damned (since it shouldn't have taken that little), don't, and you're damned, too (since, being a human, you're supposed to pass). Now, maybe the whole idea of using the test this way is circular; as Goodwin and Hein remark in their contribution to this issue (above, p. 253), the really interesting distinction to be made here is not
that between human and non-human, but between intelligent and non-intelligent behavior.

And, to make the discussion a little more concrete, let's focus on the modeling of a specific human cognitive activity, in particular that of language processing in terms of AI. Here, we will not consider language as a product of "choice and chance" (cf. Herdan 1958), but as an activity that speakers and hearers engage in as partners in a communicative situation.
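To underline how little the Turing test may demand of a Chance-like candidate, here is a minimal sketch (in Python; the program, its name, and its stock phrases are my own illustrative inventions, not anyone's actual test setup). Its entire world knowledge is a handful of garden platitudes, returned no matter what is asked:

```python
import random

# A hypothetical "Chance machine": its entire world knowledge is the short
# list of garden platitudes below, yet a charitable interrogator may read
# profundity into every answer. All phrasings are illustrative only.
PLATITUDES = [
    "After winter, we'll have summer.",
    "In a garden, everything has its season.",
    "First the seeds are planted, then comes the harvest.",
    "As long as the roots are not severed, all will be well.",
]

def chance_reply(question: str) -> str:
    """Answer any question whatsoever with a seasonal platitude."""
    return random.choice(PLATITUDES)

for q in ("How do you judge the nation's economy?",
          "Will productivity recover next year?"):
    print("Q:", q)
    print("A:", chance_reply(q))
```

Whether such a program passes says more about the interrogator's willingness to interpret than about the machine: which is exactly the dilemma sketched above.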
3. Troubles with linguists: I. A tale of two models

Since I'm a linguist, perhaps at this point it is appropriate to say something about the notion of "model", since linguists have used it (or misused it, whatever the feeling may be) a lot over the years. The appropriateness is in the circumstance that we're dealing here with AI, a branch of science that is committed to modeling (some will say: simulating, but cf. Olsen's strictures in his contribution below, p. 311ff.) human cognitive activities, and in particular, humans' use of language. The question needs simply to be raised, what specific kind of activity (or activities) this modeling or simulation is supposed to aim at.

In order to make the question a little more tangible, let me present you with two of linguistic history's famous models: the Eliza Cousins. The first Eliza, also called Eliza D., is George Bernard Shaw's well-known character from Pygmalion. Her cousin (however remote) I'll call Eliza W., to do justice to her creator Joe Weizenbaum (1966). The two Elizas differ not only in name, but also in space and time (one's very British, the other American, and there's a 70-year span between the two).

More interesting, though, is the fact that Eliza D., as we all know, never made it beyond the stage (though she really was successful there, and still is, presumably, wherever My Fair Lady is played); but she never was a credible being, except perhaps for Professor Higgins. What you see on stage (or on the movie screen) is what you get: a Shaw show, witty, entertaining, and thought-provoking.

By contrast, probably the only person who never believed in the other Eliza is her own spiritual father. Otherwise, she's a true live show for everybody. People from all walks of life have been observed pounding the terminal, cursing the keyboard as if it were a Ouija one, blaming the fake doctor, themselves, or their bad luck, but despite all: getting a true piece of (inter)action. Certainly Eliza W. has its shortcomings, but it is a working program, representing some real, albeit rather limited, aspects of language use and cognitive activity. True, such cognition as occurs has perchance more to do with the re-cognition of pre-established patterns, and the producing of standardized, pat answers. Yet, people believe in the doctor and take his (!) advice; and besides, who are we to say that such activities, as deployed and depicted by Eliza W., are not part and parcel of human language production, and especially (as the Chauncey Gardener case shows) of human
behavior in the interpretation of language (commonly called "understanding")?

Let me expand some more on this and ask: what do the two Elizas actually produce, and how do their partners in interaction react to the Elizean utterances? How do we interpret the linguistic behavior of these machines (cf. also the case of Coppelia-like dolls and other homunculi; see Goodwin and Hein's article, this issue p. 246)? The answer to this question will become clearer if we reframe it in the following way (and in doing so, we obtain the additional advantage of bypassing the tricky question of these models' supposed or real "intelligence" altogether): What is Eliza D.'s in-, respectively output, as compared to that of Eliza W.?

Eliza D. produces sentences, i.e. utterances of a particular, well-defined (not to say: well-formed) kind that are programmed to sound right, but are without much connection with real life. Mainly, Eliza D. doesn't offer any kind of openness towards the real world: she is not a potential partner in real interaction. Eliza D. may produce her famous "The rain in Spain falls mostly in the plain", but - like the poor tourist who has learnt to parrot some of the phrases from her or his pocket travel guide - she's at a complete loss as to where to go from there. Indeed, when the going gets too strong (as in the famous Ascot scene), she has no other choice than to make a quick exit.

Not so Eliza W.: she's a child of the real world. The program (I just called it "she", but I know I shouldn't) produces more or less apposite responses to particular verbal stimuli. Eliza W. never runs out on clients (nor out of responses either, for that matter). Like the Rogerian therapist it is supposed to simulate, it always has some further questions; if necessary, the program just bounces back to a previously used, earlier entry on its list.

While Eliza D.'s most distinctive feature is its production (at all costs) of correct sentences (down to, and even with special attention to, the level of phonetics and pronunciation), Eliza W. attempts, however clumsily, to simulate human behavior. Eliza D. is limited, by virtue of her set-up and by decree of her creator, to modeling a certain, limited, abstract human competence in language. Here, Shaw operated on an assumption that some half a century later came to be propagated as the true gospel of linguistic theory according to Noam the Prophet, viz. that human linguistic activity essentially consists in the production of a limited subset of the set containing all, and only the correct sentences of a language. By contrast, Eliza W. produces recognizable utterances in an interpretive context. Not only is this Eliza seven decades apart from her cousin in time, but she also represents an entirely different understanding of human linguistic (and cognitive) behavior. From the point of view of linguistic interaction, Eliza W. assumes that its utterances are understood, and it is prepared to deal with any response. Contrariwise, Eliza D. (not unlike the grammarians of yesteryear)
only wants to know whether or not one can say this or that, and how it should be said to sound genuine, or be well-formed. Basically, Eliza D.'s worries are about belonging: does she belong to the right set of people, do her utterances belong to the set of correct sentences of English? Thus, she is a true-to-type characterization of the linguistic tyranny exercised by the Native Speaker par excellence and his ideal language: the "King's English" (see Mey 1981). For Eliza D., a person is what he or she speaks: merely "Being There" is not enough; you have to belong. By contrast, Chauncey Gardener, like Eliza W., seems to operate on the principle that you speak whatever you are, or people think you are: the important thing is not to upset the linguistic rocker. It isn't too important that you say only correct things at all times; what matters is that your partners get your drift, that you're understood for what you are: someone who wants to be understood, one who's not about to perpetrate linguistic atrocities, such as rocking the common boat, or not wanting "to keep up the others' belief systems" (cf. Morik and Zifonun 1980: 57). Thus, while Eliza D. might have a slight chance of being accepted by maximally the phonological component of a Chomsky-type generative grammar, Eliza W. belongs to the family of AI-oriented models, be it as a rather distant, and maybe spinster, relative.
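To make Eliza W.'s mechanics a little more concrete, here is a minimal sketch, in Python, of the kind of keyword-and-template matching that drives programs of her family. The particular rules are invented for illustration; Weizenbaum's (1966) original worked with a ranked keyword list and decomposition/reassembly patterns, which this sketch only gestures at:

```python
import random
import re

# Keyword rules: a pattern to re-cognize, and pat answer templates.
# These particular rules are illustrative inventions.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you tell me you are {0}?"]),
    (re.compile(r"\b(?:mother|father)\b", re.I),
     ["Tell me more about your family."]),
]
# Stock continuations: the program "bounces back" when no keyword matches.
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def eliza_reply(utterance: str) -> str:
    """Re-cognize a pre-established pattern and produce a standardized answer."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(eliza_reply("I am lonely"))   # e.g. "How long have you been lonely?"
```

The point of the sketch is the asymmetry discussed above: nothing here produces sentences for their own sake; everything is geared towards keeping the interaction going.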
4. Troubles with linguists: II. Competence and AI
The controversies about competence in linguistics have been with us for quite a few years, and the notion seems to possess an enviable degree of sturdy longevity. It is interesting to note, though, that the concept of competence from the very beginning of its career has been a non-empirical one. It was, so to speak, called in from the cold, outer regions of thin air in order to explain a number of otherwise strange-looking phenomena, such as the problem of how people, given a limited exposure to the facts of life (in this case: of language), attain a near-perfect mastery, in the course of only a few years, without any textbooks, teachers, or semblances of formal education. In a way, this line of reasoning reminds us of the old belief, according to which young people (boys in particular, for some mysterious reason) pick up their knowledge of sex (I'm talking here of sexual competence: performance is a later affair) from what is commonly and euphemistically referred to as "the gutter". To which I would reply, with the unfortunate François Delamater, that I have spent days and weeks "making a comprehensive survey" of the gutters of Greenwich Village without ever finding the slightest trace of sex [1]. Obviously, as Goodwin and Hein remark (this issue, p. 266), it "is impossible to observe competence directly".

[1] The reference is to one of Thurber and White's anti-heroes (1960: 87).
Hence, the question is legitimately asked: to what extent is competence an empirically justified concept? My answer to that question is: it's not. Furthermore, I feel that the complementary concept of "performance", as developed in generative grammar, suffers from the same birth defect as does its Siamese twin, competence. This is, of course, not to say that people do not perform linguistic or other acts; nor does it mean that we shouldn't try to describe and explain those acts, that concrete performance. But introducing "some performance model" à la Chomsky only because we need it as a supplement to another abstract model, that of competence, makes little sense. A performance model wants to be taken seriously; it should not be introduced as a stop-gap, a post factum appendix to competence, to take care of the loose ends, the things that competence can't deal with [2].

Having gotten this theoretical stumbling-block out of the way, I want to ask the following question: What kind of linguistic model do we want to use in AI-related research? Here, I would like to state three minimal conditions plus an important corollary:

(1) The model should aim at representing language in use.
(2) The model should be psychologically plausible.
(3) It should be holistic rather than particularistic ("big" vs. "little" model).

Finally, we should never, in our minds or in reality, confuse the model with the "real thing"; therefore, we should exercise great restraint in dealing with the metaphors that are built into the model (cf. Fortescue 1979, and Olsen's contribution in this issue, p. 315).

I'll deal with the different conditions separately below.

(a) Language in use. By this I mean that we should concentrate our efforts on reproducing actual communicative behavior, including the presence of other persons than the speaker. Specifically, we should take into account: (1) the effect that the speaker's words have on the hearer; (2) the preconditions that determine the hearer's understanding of the spoken utterances, not only the speaker's conditions of production of those utterances.
Both these aspects can be illustrated by inviting our friend Mr. Chance in once more. The way in which his drivel is understood by his powerful (and

[2] Famous generative grammarians, including Chomsky himself, have been known to discuss quite seriously the phenomenon of "presence of food in the mouth" as having to be dealt with by a theory of performance. This makes about as much sense as trying to describe normal hearing on the basis of patients with hearing trouble because of progressive exostoses in the aural duct.
certainly not stupid) listeners is determined (as we saw above) by their wish to protect their belief systems and to keep them intact. This is important, since it is those very systems that prop up their own protagonists' social power; hence, positive reaction on the part of these hearers can be expected, once the lead is given.

Alternatively, to take a well-known example from the AI literature: The reason that a particular utterance in a restaurant situation is correctly understood is that it enters into a "stereotyped sequence of events", the "backbone" of what is called the restaurant script (cf. Schank and Riesbeck 1981: 101). We have determined beforehand what kind of effects what sort of utterances (may/can/should) have, given the restaurant setting. There is little probability of the waiter handing you a musical score when you're sitting down at your table (except maybe in some utterly sophisticated cafe in midtown Manhattan). When you're placing your order, any utterances about the menu are automatically understood as part of the restaurant script, i.e. here: "ordering food (and/or drink)". Suppose you point to a particular item on the list; the waiter will interpret this as an order, not a comment on that item. I could even say, pointing to the entry "Jack Daniels'", something like: "What about that?", and the waiter will automatically return with my drink, rather than engaging in a debate about the incorrect placement of the apostrophe in "Daniels'", or getting his ballpoint out [3].
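The script idea lends itself to a toy rendering in code. The sketch below (in Python; the scene names and the crude classification rule are my own illustrative inventions, not Schank and Riesbeck's actual representation) shows how a stereotyped sequence of events lets even a bare "What about that?" be read as an order:

```python
# A toy restaurant script: a stereotyped sequence of scenes against which
# utterances are interpreted. Scene names and rules are illustrative only.
RESTAURANT_SCRIPT = ["entering", "seating", "ordering", "eating", "paying", "leaving"]

def interpret(utterance: str, scene: str) -> str:
    """Read an utterance as the act the current scene leads us to expect."""
    if scene == "ordering":
        # Anything said while the menu is in play counts as an order,
        # even a bare "What about that?" accompanied by pointing.
        return f"ORDER: bring the item referred to by {utterance!r}"
    if scene == "paying":
        return f"PAYMENT: produce the check in response to {utterance!r}"
    return f"OTHER: acknowledge {utterance!r} politely"

print(interpret("What about that?", "ordering"))
# -> ORDER: bring the item referred to by 'What about that?'
```

Note that the interpretation is fixed before any linguistic analysis of the utterance itself: the script, not the sentence, carries the burden of understanding.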
(b) By psychologically acceptable I mean that any AI model should purport to simulate the human mind and its workings [4]. This requirement is certainly a far cry from Chomsky's original (1957) claim that the model be a characterization in "as neutral terms as possible" of the language used by an "ideal speaker" having a perfect command of his (!) language, to the exclusion even of possible mistakes (cf. also Chomsky 1965: 10). Thus, the question is not only to find out what people do with words (Austin 1962), but also, and more importantly, what they think (or say they think) they do with words, and what the "other end" thinks of all that (cf. Verschueren 1980).

(c) The holistic aspect of AI research. In linguistic modeling, and of course long before AI got onto the scene, one could observe a particular philosophy of scientific work at play, which I think is beautifully illustrated by an early 'sixties cartoon depicting some NASA scientists (this was in the preparatory stages of the moon project) building a tall, very sophisticated kind of structure,

[3] For similar observations, cf. Ehlich and Rehbein's (1972) "restaurant article".
[4] And not necessarily the human brain's. This is especially important in view of the current tendency to draw analogies between the computer and the brain, or neural processes in general (cf. the discussion on parallel vs. serial processing of information, in neurophysiological terms, referred to by Goodwin and Hein in their contribution (above, p. 250)).
resembling most of all a giant Jacob's ladder. The caption read: "Just a few more of these and we got it licked". I will call this kind of thinking "the little model fallacy". It consists of building a model based on some limited, particular aspect of the activity you want to describe and explain, and then blowing up your model to life size, declaring it to be the "real thing", the system, or what have you. In computational linguistics, e.g., modeling the processes of morphological derivation would be a first step on the way to automatic syntactic analysis of sentences, which itself could be "expanded" into semantic analysis and understanding, and, God knows, full-blown machine translation some day. The underlying belief was that once one got a little corner on the model market, one would be able to expand one's holdings and, using the same methods all along, one would maybe end up some day as the Hefner of the linguistic model world, the Godfather of linguistic science.

Such assumptions, however, never seem to have worked out. Especially the history of machine translation is strewn with examples showing that "little models" never work in "big" surroundings: there is no guarantee whatsoever that methods of detail per se will work for systems as a whole. Linguistics, as any other behavioral science, seems always to suffer from the "n + 1" syndrome, to modify Karpatschof's expression (see his contribution to this issue, above, p. 298): there's always another debt to meet, another mafia to appease.

Now, what does it mean that the AI approach is holistic, and even maybe "hopelessly" so, as Goodwin and Hein put it (this issue, p. 263)? It does not mean that AI is against the principle of decomposing a larger problem into smaller ones that we know we're capable of dealing with. It does mean, though, that the "smaller" units in AI in all probability are different from those in linguistics, and that they are handled differently. There is thus neither a point in making a small model by itself, nor can one transfer modules from one science to the other. Just to mention one case: the classical, structural linguistic unit of problem stating and problem solving is the sentence [5]. In AI, as in non-traditional linguistics today, one considers utterances in their interactive context, where the units are functional rather than grammatical. But also many developments in linguistics proper point the same way: text linguistics and the interest in speech acts are but a few examples. Especially in the latter case, one can observe the deplorable effect of the "little model" fallacy: Austin started his approach by focusing, unreflectingly and tradition-bound, on the sentence as the natural unit of processing discourse, and looked inside the sentence for speech act conditions. Only much later people got the idea that a speech act needn't be circumscribed by sentence boundaries, in fact, needn't have anything to do with a sentence as we traditionally know it from linguistic works.

[5] This was the case long before the advent of structural linguistics, cf. the classical and scholastic traditions in which the sentence was considered the natural and logical expression of a fundamental statement, the proposition or the "judgment".
Fortunately, it seems that AI is not about to commit the same mistakes (for which, naturally, it often has been attacked by the linguistic Establishment).

(d) For parting shots, let me deal a couple of quick rounds to the subject of the model as compared to its image, reality. That one should not confuse the model with the real thing seems to be an obvious requirement on all modeling, be it in economics, literature, linguistics, philosophy, psychology, or, for that matter, AI. However, the trouble with AI models (and their advantage, of course) is that they actually (are at) work: they are dynamic, processual models, not static, descriptive ones. This means that they actually produce testable, quantitatively and qualitatively measurable results. It makes sense to ask whether a program works, and whether this program works better than the one over there: the criteria for evaluation (which always are the biggest hindrance on the road to agreement about quality) are not in dispute, at least not at the level at which we currently evaluate and compare things in AI (the fact that there are other, superordinate criteria for evaluation, about which not everybody sees eye to eye, does not affect this). Because of the above, we are able to avoid much of the endless, sterile disputes like we know them from linguistics (especially generative grammar), e.g. about the creation of a metric for (generative) grammars (cf. Chomsky 1957, 1965).

However, precisely because analogy is a great force in our thinking, and because the analogon, the model, has come alive, so to speak, in AI, we have to be doubly careful. We have to remind ourselves constantly what is the ultimate goal of AI: not to build ever bigger and better robots, but to acquire a deeper understanding of human cognitive processes. Naturally, there's nothing wrong with building intelligent machines (if we know what they're going to be used for), and of course such machines can be an enormous help in assisting humans and relieving them from chores and drudgery, from poverty and even misery. However, the aims of AI remain to be clearly stated and restated: we're not interested in subordinating people to machines, nor in replacing humans by machines, in the best science-fiction nightmare style. Neither should we forget that for a number of people such nightmares are anything but fictional (or fictive), and that in times of crisis the human values, especially the value of human labor, are much harder to back up than in more prosperous periods [6]. The aim of AI is not to make human machines, but to make our use of machines more human, as Norbert Wiener said in an early (1950) book on cybernetics, which he called "The human use of human beings".

[6] Cf. the following remark by Johnson-Laird (1980: 110), quoted with some disapproval (for other reasons, cf. my critique of the "small model fallacy" above) by Goodwin and Hein elsewhere in this issue (p. 269): "... the aim should be neither to simulate human behavior, ... nor to exercise artificial intelligence, but to force the theorist to think again..." (emphasis added, JM).
The intelligent machine is, more than anything else, a way of knowing ourselves, of understanding our world. But inasmuch as all of our understanding and knowledge is a matter of symbolic processing, we ourselves are, in a very definite way, "devices" for the processing of symbols, in thought as well as in reality. To put it in another way: the machine may be a metaphor, but so are we. The difference is that we made the machine to match our own cognitive processing, and therefore, we shouldn't scream if we're beaten by our own devices.

In principle, there really isn't (or shouldn't be) any antagonism between the human and the machine. The making of an intelligent machine, in particular the creation of an artificial intelligence, consists basically in matching two metaphorical systems: putting symbols in relation to other symbols, and systems of symbols in relation to other systems of symbols. Which is what we've been doing all along, of course, in science as in the humanities, but without having been as aware of it as we are now. What's new today, thanks to the machines, is that we physically control one of the systems in ways that were not thought possible before. However, precisely because of this very tangible physical presence, the danger is always there that we start taking what we can see as what's really "Being There", according to the positivistic maxim: "What you see is what you get". Under the metaphorical conception of computer processing, not only are we dealing with symbolic systems and representations at different levels inside the machine, but far more pertinently, by our dealings we're constantly stressing the essentially representational character of all metaphorical symbol handling. In this sense, the computer and its programs are not "real", despite their realistic appearance and their effects in time and space: their ultimate reality is the parent symbol, the human.
References

Ashby, H. 1978. Being There. Columbia Pictures (movie; European marketing title: Goodbye Mr. Chance).
Austin, J.L. 1962. How to do things with words. Cambridge, UK: Cambridge University Press.
Chomsky, N. 1957. Syntactic structures. The Hague: Mouton.
Chomsky, N. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Coulmas, F., ed. 1981. A Festschrift for native speaker. The Hague: Mouton.
Ehlich, K. and J. Rehbein. 1972. 'Zur Konstitution pragmatischer Einheiten: das deutsche Speiserestaurant'. In: Wunderlich, ed. pp. 209-254.
Fortescue, M. 1979. Why the 'language of thought' is not a language. Some inconsistencies of the computational analogy of thought. Journal of Pragmatics 3: 67-80. (Review article of: J.A. Fodor. 1975. The language of thought. New York: Crowell.)
Herdan, G. 1958. Language as choice and chance. The Hague: Mouton.
Johnson-Laird, Ph. 1980. Mental models in cognitive science. Cognitive Science 4: 71-115.
Mey, J. 1981. 'Right or wrong, my native speaker'. In: F. Coulmas, ed. pp. 69-84.
Morik, K. and G. Zifonun. 1980. Willkommen Mr. Chance in der AI Community. Rundbrief der Fachgruppe Künstliche Intelligenz in der Gesellschaft für Informatik 22: 56-58.
Schank, R. and C. Riesbeck. 1981. Inside computer understanding. Hillsdale, NJ: Erlbaum.
Thurber, J. and E.B. White. 1960. Is sex necessary? Harmondsworth: Penguin. [1929.]
Verschueren, J. 1980. What people say they do with words. Berkeley, CA: University of California, Ph.D. dissertation.
Weizenbaum, J. 1966. ELIZA. Communications of the Association for Computing Machinery 9: 36-45.
Wiener, N. 1950. The human use of human beings. Boston, MA: Houghton Mifflin.
Wunderlich, D., ed. 1972. Linguistische Pragmatik. Frankfurt am Main: Athenäum.

Jacob L. Mey (b. 1926) is Professor of Linguistics at the Rasmus Rask Institute of Linguistics, Odense University, Denmark. His main interests comprise: pragmatic linguistics, sociolinguistics, the languages of the sexes, artificial intelligence. Among his recent publications are:
1979. Pragmalinguistics: theory and practice (Janua Linguarum 86; the Rasmus Rask Series in Pragmatic Linguistics 1). The Hague: Mouton.
1980. 'Right or wrong, my native speaker'. In: F. Coulmas, ed., Festschrift for native speaker. The Hague: Mouton. pp. 70-84.
1981. (with H. Haberland) Wording and warding: the pragmatics of doctor-patient conversation. Journal of Pragmatics 5: 103-113.
1982. Whose language? A study in the politics of language use. To appear.