Expert systems and design

Conall Ó Catháin, Queen's University of Belfast, N. Ireland
The use of expert systems in design involves certain difficulties. Not least of these is eliciting information from the expert. Computational issues concern logic and the machine's understanding of concepts. There are also problems in accepting the information that the machine delivers.

Keywords: expert systems, design, man-machine interface
An expert system is a computer program which:

• handles real-world, complex problems requiring an expert's interpretation;
• solves these problems using a computer model of expert human reasoning, reaching the same conclusions as a human expert would when solving the problem [1].

Human experts often work in areas where scientific knowledge lags behind the practical or empirical knowledge of how to make and do things. An expert system can therefore be seen as an empirical tool for experimenting with the representation and use of knowledge, producing novel or quicker results in a manner similar to human experts. Typical applications include operations where:

• knowledge is expressed with certainty,
• a great deal of knowledge must be consulted,
• all possibilities must be explored,
• databases of facts must be referred to,
• preliminary consultations would help prepare for a meeting with a human expert [2].
Using the rules embedded within it, an expert system can make inferences about the information contained in its knowledge base, and answer questions both about that information and about how it made the inferences.
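As a minimal sketch of this arrangement (the rules and facts below are invented for illustration, not drawn from any particular system), a forward-chaining engine needs only a handful of lines: it fires any rule whose premises are already established, and records each derivation so that a 'how did you conclude that?' question can be answered by replaying the chain.

```python
# A minimal forward-chaining inference engine. Rules fire when
# all their premises are in the knowledge base; each derived
# fact records its premises, so the system can answer "how did
# you conclude that?". All names here are invented examples.

facts = {"damp_patch", "north_wall"}
rules = [
    ({"damp_patch", "north_wall"}, "suspect_condensation"),
    ({"suspect_condensation"}, "recommend_ventilation"),
]
derivation = {}  # derived fact -> the premises that produced it

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derivation[conclusion] = premises
            changed = True

def explain(fact):
    # Replay the recorded derivation chain for a "how?" question.
    if fact not in derivation:
        return fact + ": given"
    return fact + ": because " + ", ".join(sorted(derivation[fact]))

print(explain("recommend_ventilation"))
print(explain("suspect_condensation"))
```

Everything else in a practical shell (conflict resolution, uncertainty handling, a friendlier dialogue) builds on this bookkeeping.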
The use of heuristic methods means that quite complex problems can be tackled without a theoretical formulation, and this avoids the combinatorial explosion which can occur when an exhaustive theory is applied. Since humans are bad at remembering detail, expert systems are useful for problems which involve selection and querying operations. Expert systems also score in another area: computers are more systematic at reasoning than humans. Human psychology is such that it is easier for a human to perceive and understand a positive proposition in logic than its logically equivalent inverse [3]. People take longer to solve problems containing negatives, and are more prone to fallacy. However, it must be admitted that the machine can sometimes be too keen. Attempts are being made to produce systems which generate new rules from examples, but it has been pointed out that this can lead to arbitrary rules such as 'Don't pour concrete on a Thursday' [2]. Common sense cannot be encoded in a few rules.
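How easily such arbitrary rules arise can be shown with a toy inducer (the cases below are invented): it reports any attribute value that perfectly separates the failures from the successes, and with a handful of examples the day of the week separates them exactly as well as the genuinely relevant temperature.

```python
# A naive rule inducer over invented cases. It hunts for any
# attribute value that perfectly separates the failures; with
# so few examples, "day = Thu" qualifies just as well as
# "temp = cold", and the program reports whichever it meets first.

cases = [
    {"day": "Mon", "temp": "warm", "failed": False},
    {"day": "Tue", "temp": "warm", "failed": False},
    {"day": "Thu", "temp": "cold", "failed": True},
    {"day": "Thu", "temp": "cold", "failed": True},
]

def induce_rule(cases):
    for attr in [k for k in cases[0] if k != "failed"]:
        for value in sorted({c[attr] for c in cases}):
            # A value "explains" failure if it occurs in every
            # failing case and in no successful one.
            if all((c[attr] == value) == c["failed"] for c in cases):
                return "Don't pour concrete when %s = %s" % (attr, value)
    return None

print(induce_rule(cases))  # Don't pour concrete when day = Thu
```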
DÉJÀ VU

Design routinely involves a great deal of just these kinds of activity. What are the possibilities for applying expert systems in designing? Much has already been written on work in progress, but some reservations are in
order, and if this seems a negative note, perhaps it is because most of the literature, and most devotees of expert systems, have yet to discover the difficulties identified more than a decade ago by design researchers dealing with flesh-and-blood experts. See for example Cross [4,5], where the topic is discussed by a number of people. These problems include:

• communication and the multidisciplinary nature of design,
• experts' failure to understand human needs,
• the tendency for experts to solve the wrong problem,
• deskilling and the acquisition of expertise.

The difference is that previously these questions were being studied by individuals, essentially without computation, at a time when there was no such thing as 'information technology', whereas now techniques are being automated 'warts and all' with little awareness of these matters. The problem of getting the knowledge out of the expert and into the computer has been recognized, perforce, and the 'knowledge engineer' has been invented to solve it. Up to now, our experience shows that this does not represent a realistic approach. Most problems requiring expert advice have sufficient nuances and complexities that it is impossible to expect an inexperienced person to take a rough description from the expert and code it directly [1]. Engineering skills would not seem to be the most appropriate for this activity; 'knowledge midwife' might be a more appropriate job description. Any agent that comes between the expert and the computer is at best a filter and at worst an interpreter. There is a high risk of perinatal damage and subsequent impairment of performance!

Expert systems, at least in their present manifestations, simply do not address the problem of dealing with experts. Good advisory interactions and problem formulation (design involves problem formulation more often than not) and plan generation (especially with regard to obstacles, side effects, interactions, and trade-offs) help determine the right questions to ask, and help find or evaluate possible answers. A good adviser must be able to do more than provide a solution and some description or justification of the solution process: it must be able to participate in the problem-solving process, to answer questions like: what would happen if x? are there side effects to x? how do x and y interact? what produces x? how can x be prevented? what are the preconditions (requirements) and post-conditions (consequences) of x? [6] And, as Woods puts it, the assumption of user incompetence is almost always unwarranted.
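It is worth noticing what answering even two of these query types demands of the machine. In the sketch below (the cause-and-effect relations are invented for illustration), both questions require an explicit model of how factors produce one another, not merely a set of surface rules.

```python
# A sketch of two of the adviser queries above -- "what would
# happen if x?" and "what produces x?" -- over a toy
# cause-and-effect network. The relations are invented;
# a real adviser would need a far richer model.

CAUSES = {
    "insulation_added": ["ventilation_reduced"],
    "ventilation_reduced": ["condensation"],
    "condensation": ["mould_growth"],
}

def consequences(x, seen=None):
    # "What would happen if x?" -- follow the arrows forwards.
    if seen is None:
        seen = set()
    for effect in CAUSES.get(x, []):
        if effect not in seen:
            seen.add(effect)
            consequences(effect, seen)
    return seen

def producers(x):
    # "What produces x?" -- follow the arrows backwards.
    return {cause for cause, effects in CAUSES.items() if x in effects}

print(sorted(consequences("insulation_added")))
# ['condensation', 'mould_growth', 'ventilation_reduced']
print(producers("condensation"))
# {'ventilation_reduced'}
```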
If we accept Polanyi's [7] dichotomy that theoretical knowledge typically resides in books while skill typically resides in people, the development of science and technology can be seen as the gradual shifting of information out of people and into books. However, books are not good containers of practical skills. Growth of theory seems to be accompanied by loss of skill.

The emergence of the knowledge-based professions in the last century involved the loss, or at least the shedding, of some skills, which were then relegated to vocations of lesser economic and social status, or discarded. This process produced a change in the way people were educated or trained. A great deal of formal professional education now consists of concept acquisition and manipulation rather than of learning practical skills, an effect which has been exacerbated by the emphasis on universities in professional education. With the passage of time the activity of a profession may come to bear little relation to its implicit skill base. This can lead to a crisis of confidence, or to disasters, or both. The present tribulations of the architectural profession are due to a misfit between its skill base and the technological and financial environment in which the profession now operates.

It is becoming more and more difficult for individuals to control complexity and the sheer quantity of information in everything from building sites to nuclear power station emergencies. Expert systems, as presently constituted, are not a panacea for this. What level of expertise is needed to recognise erroneous machine output, or a situation that is beyond the capabilities of the machine? (Machine experts are at best only usually correct.) Can people use syntactic cues ('this output looks funny for this type of problem') or experience-driven associations ('in this situation the machine usually screws up') to filter erroneous system output? A related issue is the question of loss of skill. Some degree of expertise would seem to be required to filter machine output: what factors determine whether the user of a machine can develop or maintain that expertise? Learning by doing applies to cognitive as well as to perceptual-motor skills [6]. How do you learn if you do not do?

A considerable part of the designer's task consists of relating different areas of expertise, and here expert systems cannot help. It is conceivable that a database of design failures could be built up which might support an expert system. Sometimes the same design failure can crop up in very different places: plasticiser migration is a problem whether it is migrating from a cling-film wrapper into cheese or out of a damp-proof course into the mortar.
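A sketch of how such a database might be organized (apart from the two examples just given, the records below are invented): indexing by failure mechanism rather than by industry lets the same mechanism surface across very different products.

```python
# A sketch of a design-failure database, indexed by failure
# mechanism rather than by industry, so that the same mechanism
# surfaces across very different products. Apart from the two
# examples in the text, the records are invented.

failures = [
    {"mechanism": "plasticiser migration",
     "context": "cling-film wrapper in contact with cheese"},
    {"mechanism": "plasticiser migration",
     "context": "damp-proof course in contact with mortar"},
    {"mechanism": "galvanic corrosion",
     "context": "dissimilar metals in a roof fixing"},
]

def precedents(mechanism):
    # Retrieve every recorded context in which this failure
    # mechanism has already appeared.
    return [f["context"] for f in failures
            if f["mechanism"] == mechanism]

print(precedents("plasticiser migration"))
```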
Non-theoretical professional knowledge is passed on informally as rules, principles, rules of thumb, 'tricks of the trade', tips or simply pieces of advice. These are often referred to as 'heuristics', a usage which is to be deprecated. However, in an area of expertise lacking a practical component, less relevant lore would tend to be lost over time:

It has to be recognised that in giving up the interplay between knowledge and its regular practical exercise, we are departing from the only conditions we know for the successful development of art and science [8].
COMPUTATIONAL ISSUES

There is another set of issues which can be loosely classed as computational. At the heart of every expert system is what is called the inference engine, presumably an oblique reference to Turing machines. This is what takes the decisions and draws conclusions. Texts on expert systems tend to go to some lengths to describe and categorize this part, and may discuss the suitability of various logics and languages for particular sorts of problem. What they are less forthcoming about is the level of our understanding of conditional probability and fuzzy logic, as well as the difficulties arising from the use of subjective probability estimates.

It is important to realise that expert systems are not neutral and passive receptacles of knowledge. They force one to look at knowledge in a certain way. Wason and Johnson-Laird [3] have provided a good example of how psychology affects the use of logic and reason. The inferences drawn can depend on how the information is represented rather than on logic alone. With computer programs the effect is more marked. For example, in Prolog, a language commonly used for writing expert systems, the following difficulties have been identified in most implementations:

• the states of FALSE and UNKNOWN are not distinguished;
• the qualification of 'answered'/'unanswered' is not provided;
• numeric work is limited to integers;
• the search method is blind depth-first, which may not be the most appropriate;
• uncertainty is not catered for [2].
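The first of these limitations is easily made concrete. In the sketch below (written in Python rather than Prolog, and with invented facts), a query that merely cannot be proved is reported as false; the system has no third value for 'unknown'.

```python
# The closed-world behaviour behind the first limitation,
# mimicked in a few lines. A query that merely cannot be
# proved is reported as False: there is no third value
# for UNKNOWN. The facts are invented for illustration.

known_facts = {("fire_rating", "door_A", "30min")}

def query(fact):
    # True means "provable"; False conflates "provably false"
    # with "simply never recorded".
    return fact in known_facts

print(query(("fire_rating", "door_A", "30min")))  # True
print(query(("fire_rating", "door_B", "30min")))  # False, although
# the rating of door_B is in reality merely unknown to the system.
```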
It is a truism that unstructured information is of little value: information becomes knowledge when structured. When structures are recognisable to humans we call them concepts [9]. That some are beginning to grasp this is becoming evident: 'No system really understands the meaning of the words in its network' [2]. Michie and Johnson have stated that 'in logic only concepts can be known by the machine' [9]. A concept, in this sense, is perhaps closer to a definition than a meaning. Meaning, like beauty, is in the eye of the beholder. But it would be rash to be dogmatic about this; probably, as machine performance improves, our own concepts will be redefined.

The fact that the abstraction of knowledge carries a penalty was elegantly expressed by Bertrand Russell: pure mathematics is that subject in which we do not know what we are talking about, or whether what we are saying is true [10]. The cost of formalizing mathematics is the loss of all meaning. This is equally true of logic systems. Logic is concerned with deriving theorems from axioms, that is to say drawing conclusions from assumptions. The truth of the assumptions is not an issue and in any case is not provable. This fact shows up a weakness of the axiomatic method: the problem of consistency. If assumptions cannot be shown to be true, then their consistency may be open to doubt. In turn, conclusions drawn from them may be open to doubt [10].

The term 'bug-free program' has had some currency lately. A bug-free program is an impossibility [11]. Apart from those due to mistakes, at a higher level there are bugs arising out of meaning and definition. These are by far the most difficult, because they do not arise from 'mistakes' as such, but from a mismatch between the intentions of the original analyst-programmer and those of the final user. Of course the intentions of different users can vary widely. A school of thought has grown up around the idea of system design from provably correct constructs, but its difficulties have been acknowledged by Martin [11]. Lakatos [12] has shown that a proof is always open to refutation by some novel setup which is intuitively close to, but not identical to, the original theorem. If we regard the application as a theorem, then it is easy to see how a program could be subverted by only slightly different problems. Computer operating systems have always suffered from this. In the 1970s it used to be said of one system which the author used, 'If it's documented it's a facility; if it's not it's a bug!' The folklore said that it was essential to stay with either odd- or even-numbered releases of the operating system, because the manufacturer had two teams working on updating: a change to the 'identical' product of one team would produce a host of bugs, some of which had already been fixed by the other team. Piecemeal fixes can be a minefield and can have unforeseen effects [9].
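The Lakatos point is easily illustrated with a toy example of my own devising, not taken from the texts cited: a routine that is derivable, and arguably provable, for genuine quadratics is subverted the moment it meets the intuitively close linear case.

```python
# A routine derived -- and "provable" -- for genuine quadratics
# with real roots. The hidden 'axioms' are a != 0 and a
# non-negative discriminant; a problem intuitively close to,
# but not identical to, the original theorem subverts it.

import math

def roots(a, b, c):
    # Solve a*x**2 + b*x + c = 0 by the standard formula.
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

print(roots(1, -3, 2))  # (2.0, 1.0), exactly as 'proved'

try:
    roots(0, 2, -4)  # a linear equation: 'almost' a quadratic
except ZeroDivisionError:
    print("subverted by a slightly different problem")
```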
There used to be a perception that compatibility and standardization were among the most difficult problems in the processing of information. But above this level of implementation difficulties on different machines, there is comparatively little awareness of the issue of standards in the information technology community. The author attended a workshop on standards for the human-computer interface sponsored by the Alvey programme, and was dismayed at the level of understanding and the uncritical attitudes apparent. Human experts are sometimes wrong, but they are protected if they have adhered to accepted standards. Creativity always involves risk, and standards are really about the containment of risk, not something to do with file transfers. Standards must reflect what Woods has called the 'cognitive window' in the man-machine interface. A hundred years or so ago the International Electrotechnical Commission was founded largely because of problems of communication in a rapidly expanding field. Standards will be crucial for the man-machine interface, and must not be left to be decided by default by hardware manufacturers, or indeed by ergonomists. Some things are too important to be left to experts.

Woods has coined the term 'cognitive technologies' to describe the shift in emphasis from perceptual and motor skills to monitoring and fault management [6]. These technologies would be concerned with identifying the decision and problem-solving requirements of a particular field and then improving on them, as well as with the development of human-computer systems. These would aim to provide effective decision support and advice. They would also address the question of making the best use of humans and machines. Present systems tend to use the human as an input/output channel with the ability to filter the information. But this leaves too much decision-making with the machine, and degrades overall system performance, especially after the machine makes a mistake.
CONCLUSIONS

Almost all builders of expert systems have alluded to the problem of putting the information in, but at the sharp end the problems are likely to be worse. How does the consumer assess the machine's solution? How does the machine decide? Human experts carry responsibility for their decisions, but who is responsible if a computer makes a wrong decision? It is necessary to consider the question of giving the user discretion to reject the machine's decision. Too much emphasis on method leads to solving the wrong problems:

Tool builders have focused, not improperly, on tool building: how to build better performing machines. But tool use involves more. The key to effective application of computational technology is to conceive, model, design and evaluate the joint human-machine cognitive system ... Like Gestalt principles in perception, a decision system is not merely the sum of its parts, human and machine. The configuration or organisation of the human and machine components is a critical determinant of the performance of the system as a whole ... This means using computational technology to aid the user in the process of reaching a decision, not to make or recommend solutions [6].

Expert systems will improve decision-making in some areas, but will damage it in others. There needs to be more emphasis on the problems and on the users. This may come in a later generation of systems. In the meantime it is prudent to remember that expert systems do not have common sense, and it may be safer not to believe any output from a computer unless it can be verified.
ACKNOWLEDGEMENT

The author is grateful for the assistance he received from Marc Foote of Logica, Cambridge.
REFERENCES

1 Weiss, S and Kulikowski, C A Practical Guide to Designing Expert Systems Chapman and Hall, London (1983)
2 Allwood, R, Stewart, D, Hinde, C and Negus, B Report on Expert System Shells Evaluation for Construction Industry Applications Loughborough University of Technology (1985)
3 Wason, P and Johnson-Laird, P Psychology of Reasoning: Structure and Content Batsford, London (1972)
4 Cross, N (Ed.) Design Participation: Proceedings of the Design Research Society's Conference, Manchester, 1971 Academy Editions, London (1972)
5 Cross, N (Ed.) Developments in Design Methodology John Wiley, Chichester (1984)
6 Woods, D 'Cognitive technologies: the design of joint human-machine cognitive systems' AI Magazine (Winter 1985/86)
7 Polanyi, M Personal Knowledge: Towards a Post-Critical Philosophy Routledge, London (1973)
8 Council for Science and Society New Technology: Society, Employment & Skill Blackrose Press, London (1981). Quoted in Woods [6]
9 Michie, D and Johnson, R The Creative Computer: Machine Intelligence and Human Knowledge Penguin Books, Harmondsworth (1985)
10 Nagel, E and Newman, J Gödel's Proof Routledge & Kegan Paul, London (1959)
11 Martin, J System Design from Provably Correct Constructs Prentice-Hall, Englewood Cliffs, NJ (1985)
12 Lakatos, I Proofs and Refutations Cambridge University Press, Cambridge (1976)