Interfaces in Computing, 2 (1984) 269 - 277
TOWARDS A PRAGMATIC PHILOSOPHY OF ARTIFICIAL INTELLIGENCE
PETER A. LEADBETTER
Department of Mathematics and Computing, The Polytechnic, Wolverhampton (Gt. Britain)
(Received April 4, 1984)
Summary

Recent media interest in artificial intelligence (AI), arising mainly through the commercial exploitation of expert systems, and in the programming languages used within the field can only be sustained and built upon if those interested possess an overall awareness of the background against which most AI research is undertaken. AI has a small, although growing, fundamental base of core theory; it attacks difficult problems by expedient means and pioneering effort, and it aims to generate sufficient or adequate solutions which can themselves be refined and then perhaps incorporated into the fundamentals of AI theory.
1. Introduction

Much of the core philosophy of current artificial intelligence (AI) work is misunderstood by many people outside the AI community. That philosophy is brought into focus by the following: the nature of the discipline itself, the recent availability of much-needed computing power (in hardware terms), the availability of adequate interfaces and programming environments, the nature of the problems being tackled, the direction and initiative of the fifth-generation Japanese effort and, finally, the realization that conventional software techniques and equipment architectures do not provide the requisite facilities. Even though the need to appreciate a milestone such as sufficiency is important in its own right, there is a suspicion that it represents only a halfway stage in the young field of AI, whose fundamental research problems closely resemble those of two decades ago, with the exception that the problems themselves have now been much refined.
2. Conventional software and machine architecture
Even though changes in chip technology have brought about smaller, faster and more reliable hardware, we are in the midst of a software crisis
because computing principles (architectural design and software techniques) have remained fairly static over the last 30 years. The acceptance of formal techniques of software development, such as structured programming, has not been effective in combating this. The first and most obvious cause is that hardware costs are decreasing significantly while labour-intensive software costs are increasing. Two generalities which illustrate this phenomenon are, firstly, that the complexity of a chip doubles every year and, secondly, that the cost per memory element has decreased by roughly one order of magnitude every 6 years. What is needed is the maximization of personnel performance and the production of cheap, reliable and easily maintainable user software. This is becoming more difficult to achieve.

The following factors have all combined to increase the burden on the maintenance function, without which most computing systems would soon cease to operate. There has been an upsurge in the number of applications for computers, combined with the increasing age and obsolescence of existing applications. Also, there is a more rapidly changing user and business environment, together with a growing number of legislative changes which have to be incorporated into the detail of applications. Two further aspects affect the maintainability of a system: an increase in system complexity through the interlinking of applications and/or data, and the introduction of on-line and conversational working. It is therefore more difficult to produce robust software. As a consequence of the more widespread use of computers, there is also an increase in the number of end users and a decrease in end-user sophistication relative to the system complexity, particularly through on-line working. These aspects impose stringent requirements on software, especially with regard to reliability and recovery.

Turner [1] is worried about the high cost of producing software for comparatively simple applications and states that the main obstacle to the wider use of computers is the inability to produce cheap, manageable software. The more dramatic the advances that take place in hardware, the more embarrassing this becomes. Turner argues that the basic problem lies in the nature of existing programming languages. He cites FORTRAN and COBOL, which emerged between 1955 and 1960 and set the pattern for later languages; these later languages evolved through complication and not from underlying principles. If PASCAL and FORTRAN are compared, the similarities are more striking than the differences or, more precisely, the differences are superficial whereas the similarities are fundamental. Turner continues by saying that most common languages in use today are of the sequential imperative type, with assignment as their basic action, and, as such, are unnecessarily lengthy in their approach. Programmers generate a roughly fixed number of debugged lines of code in a given time, irrespective of the language that they are working in, and this means that the cost of producing software is dependent on the level of the language being used. For example, FORTRAN is five to ten times more productive than the equivalent assembly language.
What is needed, therefore, is a new level of language which will increase the expressive power of those writing code in it. This implies a move to some kind of non-procedural language: a move from an imperative to a descriptive language should lead to the production of shorter programs.

A further factor militating against the continued use of procedural languages is that developments are now taking place on the hardware side. The von Neumann architecture of the 1940s, which specified a single active processor coupled to a large passive store with a narrow connection between the two (only one word in store can be accessed at a time), has remained largely unaltered ever since. The processor and store are not now made with different technologies but are built from very-large-scale integration chips, and there is no obvious reason why monoprocessor architectures should continue to be built. A second factor is that the speed of a von Neumann computer is limited by the bandwidth of the connection between the processor and the memory (Backus [2] has called this the "von Neumann bottleneck"). There are technical limits on how far the speed of this connection can be improved. In a multiprocessor architecture, by contrast, arbitrary increases in speed can be gained simply by adding more processors to the network.

The move to non-procedural languages, i.e. to those languages which are descriptive rather than imperative, appears to be reinforced by the fact that workers on data flow and other highly parallel machine architectures are all moving towards this same idea. At the moment three possible approaches are being followed by workers in the field. The first of these is logic programming, which is based on the predicate calculus; the first such system, PROLOG, was implemented in 1972 by Colmerauer and his colleagues in Marseilles. Another school of thought follows the work of Backus [2], who promulgates a functional style of programming, and finally there is the use of applicative languages based on the application of the λ calculus to recursion equations. Whichever of these approaches becomes dominant, it seems to be only a matter of time before algebraic procedural languages start to become outdated.
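To make the contrast concrete, the following minimal sketch shows the same computation written first in the sequential imperative style, with assignment as its basic action, and then as a declarative recursion equation of the kind Turner advocates. Python is used here purely for illustration (the paper itself contains no code), and the function names are invented for the example.

# The same computation, the sum of the squares of the first n integers,
# written in two styles.

# Imperative style: a sequence of assignments mutating a running total.
def sum_of_squares_imperative(n):
    total = 0
    for i in range(1, n + 1):
        total = total + i * i
    return total

# Declarative style: a recursion equation that states what the answer is,
# with no assignment and no explicit control flow.
def sum_of_squares_declarative(n):
    return 0 if n == 0 else n * n + sum_of_squares_declarative(n - 1)

assert sum_of_squares_imperative(10) == sum_of_squares_declarative(10) == 385

In a genuinely descriptive language only the second form would be expressible, which is one reason such languages tend to yield shorter programs.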
3. Fifth-generation Japanese initiative

The Japanese are already 3 years into a 10 year research and development project which will combine radical hardware advances with knowledge-based systems at the software end. According to D'Agapeyeff [3], their initiative involves a forced-pace development with both social and economic goals, culminating in leadership of the information technology sector. This development is being supported by the agreement among hundreds of designers and by the cooperation between government, industry and research organizations. Approximately £1200 million in funds is being made available over the project's lifespan of 10 years. If such a bold concept were to succeed, then any delay in reacting to this initiative would only
damage the competitive position of Western countries. Another significant consequence would be a decrease in the exporting power of companies that depend on information processing, which in turn would reduce the independence of nations such as Britain. One of the interesting aspects of the Japanese proposal is their commitment to use logic-based programming techniques and their intention to discard all existing software techniques in favour of a completely new and radical approach. The reason for this is obvious: the Japanese do not have to expunge the 30 years of software development techniques that the Western world has grown up with.
4. Artificial intelligence environment

The most widely used AI programming language to date is LISP, invented by John McCarthy in 1958. He implemented it as a practical list processing language for AI work on the IBM 704 computer. LISP is the second-oldest high level language after FORTRAN and was primarily designed to facilitate the easy manipulation of symbols and symbolic processing. Initially LISP was a local geographical phenomenon whose implementation was poorly documented. Despite the fact that LISP was heavily machine-resource bound and very poor at mathematical manipulation, it soon caught on and quickly became the standard. Since that time, virtually every AI laboratory in the U.S.A. that has implemented a practical system in the diverse field of AI has used LISP to construct it. Barr and Feigenbaum [4] reported that "... the major LISP dialects, MACLISP and INTERLISP, are certainly amongst the most highly developed programming environments ever created". Typical programming-support features which might be included in such an environment are an interactive language, a good editor, interactive debugging facilities (breaks, back traces and facilities for examining and changing program variables) and input-output routines, so that the program is not burdened with such details.

As a higher than high level language, LISP is now implemented on current state-of-the-art hardware and there are several personal computers designed specifically for LISP programming. These "LISP machines" offer complete and powerful computational facilities, very good graphics interfaces, interfaces to networks for sharing resources between personal work stations and all the features of the advanced LISP programming environments.

The following features serve as a comparison between current AI programming and conventional programming. AI programming has a declarative focus, is predominantly interactive and allows rapid prototyping and incremental programming. It has a rich tool set and tool-building environment, is multiwindow screen oriented (two dimensional) and has a high machine-power-to-programmer ratio. In contrast, conventional programming is procedurally focused. It is becoming more interactive and relies on structured programming techniques. At the same time, it is slowly
becoming more tool conscious, is generally line oriented (one dimensional), although it is becoming more screen oriented, and has a low to moderate machine-power-to-programmer ratio.

There are supporters of both the LISP and the logic programming cause (the latter predominantly the Japanese), the main difference so far being that the logic programming systems do not have a very rich and highly supportive programming environment within which to work. The Americans openly champion LISP, whereas the logic programming protagonists claim that theirs is a higher form of programming than LISP and yet do not comment on its programming environment. There is little published work emanating from Japan.
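As an illustration of the symbol manipulation for which LISP was designed, the following minimal sketch performs symbolic differentiation over expressions held as nested tuples, with Python standing in for LISP and its s-expressions; the paper itself gives no code, and the representation and function name are invented for the example.

# Symbolic expressions as nested tuples, in the spirit of s-expressions:
# ('+', a, b), ('*', a, b), a variable name (string) or a number.

def diff(expr, var):
    """Return the derivative of expr with respect to var as another expression."""
    if isinstance(expr, (int, float)):     # derivative of a constant is 0
        return 0
    if isinstance(expr, str):              # derivative of a variable is 1 or 0
        return 1 if expr == var else 0
    op, a, b = expr
    if op == '+':                          # sum rule
        return ('+', diff(a, var), diff(b, var))
    if op == '*':                          # product rule
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError("unknown operator: " + str(op))

# d/dx (x*x + 3): prints ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 0)
print(diff(('+', ('*', 'x', 'x'), 3), 'x'))

The point is not the arithmetic but the ease with which a program can take apart and build up symbolic structures, which is precisely what list processing languages make cheap.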
5. Early philosophy

In the late 1950s, researchers in the field of AI were overly optimistic about the results that they could produce in this new and young discipline. Their claims for capturing intelligence, producing general problem-solving systems and automating the process of computer programming were much too ambitious and, as a result, during the 1960s the area of AI and the workers in it were subjected to a certain amount of ridicule. However, its practitioners persevered and during the 1970s learnt many crucial lessons. This experience, together with new knowledge-representation techniques and languages, improved machine resources and recent commercial interest, has meant that the field has flourished.

Unfortunately, in Britain the Lighthill [5] report of 1973 did not favour the development of AI, and the influence of this report is still evident. After the report appeared and until very recently, the Science Research Council did not fund any significant projects on AI. Many effective workers in the field could obtain no finance for their research and hence left Gt. Britain for the promise of ample funds elsewhere. The media are now dealing with AI in the same way as its exponents did in the early days, i.e. they are making misleading, bold and dangerous statements couched in simplistic terms. As a result, the layman is receiving fallacious and spurious information. It has recently been announced that Britain now has more home computers per head of population than any other country in the world, including the U.S.A. This may be true, but most of these machines are simply toys for playing games; more attention should be paid to the worthwhile aspects of more powerful systems.

In 1976, Newell and Simon [6] emphasized two basic concepts: symbols and search. They were concerned with physical symbol systems that manipulate collections of symbolic structures and perform problem-solving tasks using heuristic search, their central hypothesis being that physical symbol systems have the necessary and sufficient means for intelligent action. Although traditional computer science is mainly concerned with numeric and well-defined problems to be solved algorithmically by
constructing efficient computing systems, AI is interested in symbolic, ill-defined problems which are hard to formalize but which can be solved by designing adequate representations of the available knowledge, non-deterministic search strategies and reasoning systems. The fundamental question to be put about such AI systems (or the fundamental philosophy of such systems) is not whether they are consistent or complete, or even efficient, but whether they represent an "adequate" or "sufficient" solution to the original problem. As Charniak et al. [7] stated, "AI problems are usually ill defined, and the theories proposed are often too complex and complicated to be verified by intuitive or formal arguments. Sometimes the only way to understand and evaluate a theory is to see what comes next. To find this out and to check for obvious inconsistencies and contradictions, we write programs that are intended to reflect our theories. If these programs work, our theories are not proved, of course, but at least we gain some understanding of how they behave. When the programs do not work (or we find ourselves unable to program the theories at all), then we learn what we have yet to define or redefine."

Such a shift from the "efficiency" to the "adequacy" of the solution as the main goal has an immediate implication: there is no longer a requirement to design special purpose software and hardware, starting from the concrete need and ending with the system. This shift from efficiency to adequacy in the history of AI appears to have run in parallel with the shift of concern from the computer to the man in interactive systems. It is from such thoughts, and from the trend towards programs which are clearer and more easily understood but which make much less than optimal use of the hardware, that programming work in AI comes to be regarded more as an art than as a science. This ad hoc approach to the solution of potentially intractable problems has left AI more renowned for those products that have been generated as a spin-off or by-product of some original investigation. One classic example is the advent of interactive multi-access time-sharing systems, which evolved as a spin-off from the AI laboratories in the U.S.A. during the early 1960s.
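The heuristic search that Newell and Simon place at the centre of intelligent action can be illustrated with a minimal sketch: a greedy best-first search in which an evaluation function orders the states still to be explored. Python is used purely for illustration, and the toy problem, function names and parameters (neighbours, heuristic and so on) are invented for the example; the point is that such a method aims at an adequate answer rather than a provably optimal or efficient one.

import heapq

def best_first_search(start, is_goal, neighbours, heuristic):
    """Expand the state that the heuristic currently likes most.

    The heuristic prunes the search drastically but guarantees neither
    optimality nor success: an 'adequate' rather than 'efficient' method.
    """
    frontier = [(heuristic(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None          # the heuristic offered no route to the goal

# Toy illustration: reach 21 from 1 using the operations +1 and *3,
# guided only by the distance from the goal.
print(best_first_search(
    start=1,
    is_goal=lambda s: s == 21,
    neighbours=lambda s: [s + 1, s * 3],
    heuristic=lambda s: abs(21 - s),
))                       # prints 21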
6. Knowledge engineering

Expert systems (within the field of knowledge engineering) typify some current AI work and, because commercial backing has increased interest in them and enlarged their audience, they will be surveyed as a pattern for some present issues. There has been a paradigm shift from the problem-solving approach to the knowledge-driven approach. The problem-solving approach embodied the use of a general purpose problem-solving methodology (weak methods for non-domain-specific situations) and derived all the needed information from basics. This resulted in a combinatorial increase in the number of possible processing routes and was impractical for
all except toy-like systems. (Game-playing programs quickly generated great complexity from relatively simple initial structures.) In contrast, a knowledge-driven approach uses pre-stored facts about the problem domain and there is no need to derive all the required information from the beginning; intermediate results are pre-stored as large batches of information (chunks). This approach is based on knowledge-based and frame systems, with which efficient performance can be achieved and which therefore allow for the feasible operation of practical systems.

What is knowledge? Lenat and coworkers [8] answer this by saying that knowledge is a combination of facts, beliefs and heuristics, where a heuristic is defined according to Feigenbaum and Feldman [9] as a "rule of thumb, strategy, trick, simplification or any kind of device which drastically limits search for solutions in large problem spaces. Heuristics do not guarantee optimal solutions; in fact, they do not guarantee any solution at all; all that can be said for a useful heuristic is that it offers solutions which are good enough most of the time". Hence, using multiple knowledge sources gives us the leverage to minimize blind search and redundant computation and so directs us towards the answer.

An expert system is a program that solves difficult problems using application domain knowledge and problem-solving skills. It is able to perform in the absence of an effective algorithm and can deal with uncertain and contradictory information. Expert systems generally provide for incremental performance improvement, can explain system results and behaviour and support multiple uses of system knowledge. They also display the potential for acting as knowledge repositories before expertise is lost through death, senility or retirement.

A partial list of successful expert systems to date would include the following. PROSPECTOR has discovered a molybdenum deposit whose ultimate value will probably exceed U.S. $100 million. R1 configures computer requests for VAX computer systems at Digital Equipment Corporation, despite the fact that even the resident experts did not think that it could be done [10]. DENDRAL supports hundreds of international users daily in the elucidation of chemical structures [11]. CADUCEUS embodies more knowledge of internal medicine than any human and can correctly diagnose complex test cases that baffle human experts [12, 13]. PUFF now provides expert analyses of pulmonary function disease at California Medical Center [14].

When it is borne in mind that expert systems have undergone 20 years of development in one form or another, there are very few successful systems around if the criterion of success is that the amount of money saved is greater than the cost of development. Because of the complexity of the tasks, AI research generally identifies as many problems as it solves. A number of criticisms may currently be levelled at expert systems: the early systems are relatively simple and inflexible, and they are restricted to very narrow domains. Their incremental improvement is limited to the point at which the system's reliability and intelligibility become unstable. This occurs because
the increasing number of added-on rules becomes too difficult to control. The explanation facilities of expert systems are little more than a replay of the backward-chaining inference rules that were followed, and these facilities cannot monitor or adjust the structure of their own problem solving as they have no high level representation of it. Expert systems cannot transfer concepts and patterns of inference from one domain to another, they cannot integrate different knowledge domains, and current systems cannot explain their conclusions differently to different users. Also, the front-end processor which provides the natural language interface between the user and the system can only accept and parse a restricted subset of English and, in turn, generates an unnatural kind of "canned" output.
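The backward chaining mentioned above can be shown in a minimal sketch: a goal is established either because it is a known fact or because some rule concludes it and all of that rule's premises can themselves be established. Python is used purely for illustration, and the facts and rules are invented for the example rather than taken from any of the systems named in this section.

# Minimal backward chaining over propositional rules.
# Each rule reads: if all the premises hold, the conclusion holds.

FACTS = {"fever", "cough"}
RULES = [
    (["fever", "cough"], "flu_suspected"),
    (["flu_suspected", "short_of_breath"], "refer_to_specialist"),
]

def prove(goal, facts, rules, depth=0):
    """Try to establish goal by chaining backwards through the rules."""
    indent = "  " * depth
    if goal in facts:
        print(indent + goal + ": known fact")
        return True
    for premises, conclusion in rules:
        if conclusion == goal:
            print(indent + goal + ": via rule " + str(premises))
            if all(prove(p, facts, rules, depth + 1) for p in premises):
                return True
    print(indent + goal + ": cannot be established")
    return False

print(prove("flu_suspected", FACTS, RULES))        # True
print(prove("refer_to_specialist", FACTS, RULES))  # False: short_of_breath unknown

The printed trace is exactly the shallow kind of "explanation" criticized above: a replay of the rules that fired, with no higher level representation of the system's own problem solving.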
7. The artificial intelligence discipline

Much the same kind of research will take place in AI as has been going on for the past 20 years, mainly because the knowledge-representation formalisms are not yet strong enough to cope with the complexities of handling large knowledge bases and their inherent organization. I do not know whether this is because of the fuzziness of the activity, or because the field itself is not formal enough to be a candidate for academic interest (the research endeavour of formalizing knowledge or reasoning tends to be a unique experience each time that it is undertaken and it is therefore difficult to augment and build a coherent current theory), or because the small number of people currently involved in the small number of centres (it is estimated that only between 100 and 400 people world wide have the level of knowledge required to be capable of building an expert system) are so enveloped in their small communities that very little progress seems to emanate from anywhere other than the "accepted centres of excellence".

What is certain is that AI is essentially an experimental discipline which builds a theoretical framework around itself as a result of applying its own experimental techniques to itself. It is continually refining the questions which need to be answered rather than attacking a problem directly until a solution is found. It borders on the absolute bounds of research, where problem definition is the most difficult part and where what results is a redefined problem. Any occasional success is followed by a rethink of the strategy, which usually involves a reorientation or reappraisal of the original problem situation and its ramifications.
References

1 D. A. Turner, Recursion equations as a programming language, in J. Darlington, P. Henderson and D. A. Turner (eds.), Functional Programming and its Applications, Cambridge University Press, Cambridge, 1982.
2 J. Backus, Can programming be liberated from the von Neumann style? A functional style and its algebra of programs, Commun. ACM, 21 (8) (1978) 613 - 641.
3 A. D'Agapeyeff, Expert Systems, Fifth Generation and U.K. Suppliers, National Computing Centre Publications, Manchester, 1983.
4 A. Barr and E. A. Feigenbaum, The Handbook of Artificial Intelligence, Vol. II, Pitman, London, 1982, p. 67.
5 J. Lighthill, Artificial intelligence, report to the Science Research Council, 1973.
6 A. Newell and H. A. Simon, Computer science as empirical inquiry: symbols and search, the 1976 ACM Turing Lecture, Commun. ACM, 19 (1976) 113 - 126.
7 E. Charniak, C. K. Riesbeck and D. V. McDermott, Artificial Intelligence Programming, Erlbaum, Hillsdale, NJ, 1980.
8 F. Hayes-Roth, D. A. Waterman and D. B. Lenat, Building Expert Systems, Addison-Wesley, Reading, MA, 1983.
9 E. A. Feigenbaum and J. Feldman (eds.), Computers and Thought, McGraw-Hill, New York, 1963.
10 D. V. McDermott, R1: the formative years, Artif. Intell. Mag., 2 (1981) 21 - 29.
11 R. K. Lindsay, B. G. Buchanan, E. A. Feigenbaum and J. Lederberg, Applications of Artificial Intelligence for Organic Chemistry: The DENDRAL Project, McGraw-Hill, New York, 1980.
12 H. E. Pople, J. D. Myers and R. A. Miller, DIALOG: a model of diagnostic logic for internal medicine, Int. Joint Conf. on Artificial Intelligence, 4 (1975) 848 - 855.
13 R. A. Miller, H. E. Pople and J. D. Myers, Internist-1 -- an experimental computer-based diagnosis consultant for general internal medicine, N. Engl. J. Med., (August 19, 1982) 468 - 476.
14 E. A. Feigenbaum, The art of artificial intelligence: themes and case studies of knowledge engineering, Int. Joint Conf. on Artificial Intelligence, 5 (1977) 1014 - 1029.