Lingua 79 (1989) 327-346. North-Holland
REVIEWS

Sergei Nirenburg (ed.), Machine translation: Theoretical and methodological issues. Cambridge: Cambridge University Press, 1987. xv + 350 pp. £30 (hardback); £12.50 (paperback).

Reviewed by: Geoffrey Sampson, Centre for Computer Analysis of Language and Speech, University of Leeds, Leeds LS2 9JT, UK.

Twelve of the seventeen chapters in this book represent the proceedings of a conference held at Colgate University in 1985. To these have been added an introduction by the editor, a chapter by Allen Tucker surveying the MT domain (Tucker distinguishes the alternative theoretical approaches that have been used to create MT systems, and describes eight of the leading extant systems), a brief chapter by Alan Melby on machine-aided translation, a chapter by John White describing the University of Texas/Siemens METAL project (one of the most successful MT systems to have been developed in an academic environment), and an interesting chapter by David McDonald discussing the relevance of pragmatics-based language generation systems for MT. The additional contributions, particularly Tucker's, were perhaps incorporated in the hope of converting a proceedings volume into a more general standard reference work on MT. If that was the intention, it was frustrated by the publication at about the same time of Hutchins (1986), which as a purpose-written standard reference could hardly be bettered. Nevertheless, the Nirenburg book does contain some interesting material, though like other conference proceedings its quality is uneven.

A thumbnail sketch of the history of MT might run as follows. Translation was one of the earliest applications envisaged for computers (for instance, it was advocated by Alan Turing in a report drafted within a month or two of the first successful operation of a stored-program electronic computer, in 1948), and in the 1950s and early 1960s considerable resources were devoted to developing MT systems, though with only modest success. Most of this work fell into abeyance for a decade or so after 1966, as a consequence of the publication in that year of the report of a US government enquiry (the 'ALPAC Report'), which found that automation of the translating process, even if technically feasible, was not economically justifiable in the conditions of the time. By the late 1970s the economic equations had changed (computing was cheaper, and there was a greatly increased demand for translation, particularly from multilingual polities such as Canada and the EC), and efforts resumed. But from the perspective of 1970s academic thinking about the nature of language, the pre-ALPAC work looked naively simplistic. The field fissioned into two somewhat separate
communities: academic researchers tended to work on systems that incorporated 'deep' linguistic and artificial-intelligence concepts, but these were often 'toy' systems rather than ones which did useful work, while researchers in more commercial environments sacrificed theoretical respectability to the need to produce systems which succeeded in yielding output in response to real-life input texts, even if the quality of the output was commonly such as to require human post-editing. (It emerged that a computer generating rough-draft translations, combined with a human post-editor, was often an economically attractive alternative to all-human translation.)

This fission was far from complete. Apart from the METAL system mentioned above, TAUM-METEO, the first MT system to enter routine operational use without post-editing (it has been translating Canadian public weather forecasts into French since 1977), was produced at the University of Montreal. In their chapter in this book, Jaime Carbonell and Masaru Tomita of Carnegie-Mellon University argue that the time has now come for the two tendencies to reintegrate.

Taken as a whole, though, the book leaves me sceptical about whether that is a realistic prospect. The contents are heavily biased towards the 'academic' side of the divide: well-known commercial systems such as SYSTRAN, LOGOS, Weidner, or ALPS are mentioned only fleetingly. And much (though certainly not all) of the academic research represented is purely programmatic, evincing little real concern with the problems of processing full-scale natural languages, with their tens of thousands of idiosyncratic lexical items and their unpredictably flexible grammatical and orthographic rules. One symptom of this is that (unlike Hutchins's book referred to earlier) the book under review includes scarcely any sizeable examples of the output of MT systems. Apart from a two-paragraph example quoted by Melby, produced by an unidentified system, and an English-to-Japanese translation of a formal change-of-address memo quoted by Carbonell and Tomita, I noticed only brief hypothetical examples such as John White's English translation of Ein alter, verheirateter Mann hat dieses Programm niemals geschrieben ('an old, married man has never written this program'; as White says, 'perhaps an odd sentence semantically').

One particularly valuable chapter, to which the epithet 'programmatic' certainly does not apply, is by Donald Walker of Bell Communications Research ('Bellcore'), on the bank of machine-readable knowledge resources for natural language processing assembled at that institution, which include vast quantities of real-life English text, electronic versions of several published dictionaries, and a number of other large works of reference. Walker describes two examples of software systems that exploit these resources: FORCE4, which identifies the main topics of news stories, and THOTH, which allows a reader to summon up on a screen clarifications of unfamiliar concepts or names. I suspect that some of the more theoretically-minded writers represented in the book might see the goals of these systems as rather trivial; but a healthy lesson emerging from Walker's discussion is that creating a software system which actually achieves even simple tasks related to unrestricted, real-life natural language, using resources which are actually available today, is surprisingly challenging. (Walker's systems are not directly related to MT, however.)
Another significant chapter is by Doug Arnold and Louis des Tombe on the EC's EUROTRA project for translating between each pair of the EC's official languages (seven when EUROTRA was initiated, now nine). EUROTRA is probably the largest MT research project currently under way anywhere, and relatively little has been published about it (thanks, I believe, to an exaggerated emphasis by the EC authorities on commercial secrecy); this chapter may be the fullest public statement available.

It would be tedious to summarize individually each of the seventeen chapters. I shall conclude by picking out two or three specific points that struck me as worth comment.

More than one contributor suggests that a good way forward is to develop systems that handle particular 'sublanguages' of a natural language, and the chapter by Richard Kittredge of the University of Montreal argues at length that particular topic areas, e.g. stock market reports, have their own, relatively well-defined grammars. I have not seen Kittredge's data; but the evidence I have examined at first hand seems to imply, on the contrary, that English genres as different as fiction and technical writing are more homogeneous than one might expect at the grammatical level, though no doubt they diverge considerably in vocabulary (cf. Ellegård (1978), Sampson and Haigh (1988)). Kittredge's discussion of the sublanguage concept, which in part derives from writings by Zellig Harris, seems to confuse topic-specificity with a quite different property relating to formal logic. I am not clear that the logical concepts of consistency and completeness have any application to a natural language, and a sublanguage in the topic-specific sense surely cannot be assumed to be closed under the paraphrase relationship: for instance (using Kittredge's examples), it is true that This theorem provides the solution to the boundary value problem has a paraphrase What this theorem does is provide the solution to the boundary value problem, but it does not follow that each sentence is equally apt to appear in texts on mathematical analysis; this seems unlikely in the latter case.

Ralph Weischedel and Lance Ramshaw offer a useful discussion of the need to deal with ill-formed language, but they overlook relevant work, notably by Yannakoudakis and Fawthrop (1983) at the University of Bradford on patterns of spelling mistakes.

And finally, I should like to note with regret that Cambridge University Press has printed this book by photographing the output of a laser printer, so that it lacks the crispness of traditional print. If a manuscript is available in electronic form, it seems lamentably penny-pinching not to feed it into a proper typesetter.
References

Ellegård, A., 1978. The syntactic structure of English texts. Gothenburg Studies in English, 43. Stockholm: Almqvist and Wiksell.
Hutchins, W.J., 1986. Machine translation: Past, present, future. Chichester: Ellis Horwood.
Sampson, G.R. and R. Haigh, 1988. Why are long sentences longer than short ones? In: Merja Kytö et al. (eds.), Corpus linguistics, hard and soft. Amsterdam: Rodopi.
Yannakoudakis, E.J. and D. Fawthrop, 1983. The rules of spelling errors. Information Processing and Management 19, 87-99.