Robotics & Computer-Integrated Manufacturing, Vol. 3, No. 2, pp. 263-268, 1987
0730-5845/87 $3.00 + 0.00 Pergamon Journals Ltd.
Printed in Great Britain.
• Paper
AI TOOLS FOR INTELLIGENT MANUFACTURING
A. MÁRKUS
Computer and Automation Institute, Hungarian Academy of Sciences, H-1502 Budapest, P.O.B. 63, Hungary

Conventional software tools are inappropriate for the representation of several important features of engineering knowledge. This paper outlines these features and discusses the applicability of several Artificial Intelligence techniques.
1. INTRODUCTION

Looking back at the past of CAD/CAM, one can find enormous, successful projects bravely carried out with outdated software tools and development methods. This is surely not due to a lack of information about the software vogue of those days, but to the developers' fear of losing their vested interests. The integration of new pieces of program into old frameworks has become a standard means of software development for CAD/CAM. With the aim of helping to renew the set of software tools routinely applied for IMSs, this paper exposes some features of engineering knowledge which call for unconventional representation, and outlines some of the AI-based programming concepts and tools which seem promising in coping with these features.
2. FEATURES OF ENGINEERING KNOWLEDGE THAT DO NOT FIT INTO THE CONVENTIONAL FRAMEWORKS

To start with the simplest case, running out of the legitimate application area of an old system may be caused by the enlarged size of a computational task (e.g. after changing to more flexible routing in enhanced manufacturing environments). This kind of change calls for shifting the focus from applying "pure" operational research methods guaranteeing optimal solutions to techniques incorporating heuristic elements into the search for suboptimal results. Even under the normal range of operating conditions, the enhanced modularity of the manufacturing resources may invalidate the modelling assumptions the methods are based on; consequently, one has to use simulation techniques, at least for finding those parameters under which the analytic methods will show a behaviour with enough (and proven) resemblance to the actual situation.

The above cases can be considered as straightforward improvements in the use of conventional methods. Furthermore, we want to show that some important features of the expertise (and, now and then, the lack of expertise) concerning the new manufacturing environments cannot be efficiently represented by updating conventional programming schemes. (The term "conventional" stands for program packages with a built-in hierarchy of subroutines, driven by ad hoc user interfaces. Usually, these packages have sophisticated monitors for coping with unfriendly operating systems, but they do not offer an overall framework for their usage, and their general-purpose features do not contain more than graphic and data base interfaces.) Our claim is that this conventional usage compels the user to solve too many and too hard problems again and again, and that it might be cheaper to adapt at least some pieces of the AI offering.

With the exception of the particular tasks where a straightforward application of the underlying physical laws gives a (unique) solution, manufacturing knowledge consists of relatively independent pieces of knowledge, i.e. it is more horizontal than vertical. It is hard to find either a hierarchical structure in it or an overall driving principle in the course of its application (or, more technically: there is no fixed goal-structure of these problems). Although
Acknowledgements--The author thanks Joe Hatvany, Zsófia Ruttkay and József Váncza for the discussions held with them.
there is valuable advice over a considerable range, finding its formal, computer-processable representation is a recent, hot field of research. In the short run, it looks unavoidable to represent manufacturing knowledge in the form of unstructured pieces.

An efficient representation of engineering knowledge needs variable depth of storage and recall: the detail can be enlarged in the course of using the pieces of knowledge, and the depth of processing has to be varied according to the specific goal, to the risk of the decision, to the time constraints, etc. The unknown or omitted details, or in several cases the very nature of the problem (e.g. the stochastic effects), result in incompleteness and in uncertainty of the representation. Another consequence of improper (ill-structured, insufficient and/or conflicting) knowledge is that the revision of the results cannot be avoided (e.g. after using wrong defaults of data or after faulty assumptions about the range of validity of a specific piece of knowledge). To have a representation of manageable size, one is frequently compelled to use simplified, deterministic assumptions about the future (no occurrence of rare but highly disturbing effects, durations equalled by the mean values, etc.). In short, the representation of engineering knowledge cannot prove viable without a robust tool for revising the results. On a more technical level, this need for the capability of revision calls for a tool for tracing the dependencies in the representation.

An advanced manufacturing environment requires monitoring and diagnostic features capable not only of avoiding catastrophic breakdowns and damage but also of localizing the effects of minor malfunctions. This cannot be done without a finely tuned system for recognizing situations and the means for finding the proper actions.
Last but not least, the problems related to the interfaces must not be underestimated: for a complex system working in a domain with the features outlined here, the adequacy of the interfaces is a matter of decisive importance; poor interfaces may upset the exploitation of a system's most inherent features.

Although most of the features outlined here call for unconventional representation techniques, one has to find some way of including old modules in new systems. Since this transplantation is an expensive operation, an early decision in favour of using a newer, more appropriate set of tools can save a considerable amount of resources in the long run,
even if at the time of their implementation they might be less efficient, less friendly and less familiar than the old ones.

3. AN OVERVIEW OF AI TOOLS AND TECHNIQUES POTENTIALLY USEFUL FOR IMSs
Without explicit references to particular program products and their specific application areas (for an application-oriented survey see Refs 5, 12, 15), we shall attempt in this section to outline a personal perspective of the available AI tools and techniques. The focus of attention is set on the exposition of open questions and, in the case of Prolog, on a claim for re-evaluation.
3.1. Effects of the use of an AI language: drawbacks and advantages of Prolog

From a theoretical point of view, changing in favour of another programming language may be a matter of secondary importance. Nevertheless, in software engineering practice new trends are usually labelled by the use of new tools, and each new tool gives birth to lots of enthusiastic papers about it. Our opinion is that, concerning Prolog, this excitement has been a bit oversized, and that the image of Prolog still needs some minor touch-ups. So here we shall again try to summarize our experiences with its practical use, to outline what Prolog is and what it is not. In the hope that the dedicated architectures (Ref. 17) will make the practical limitations obsolete in the near future (the low speed, the limits on the size of the programs, the lack of interfaces, the poor support of program development, at least when compared with LISP's), only those features are mentioned here which seem to be inherent in the very core of the logic programming approach, at least for the next few years, until the theory and the software development in the particular areas catch up with the new possibilities.
3.1.1. Data format and pattern matching. The built-in pattern matching mechanism deals solely with terms. Obviously, using terms is an ideal tool for handling data if and only if they can be easily transformed into trees, especially lists. In situations where, in a traditional programming language, we used to pass an array of data to a subroutine (in order to offer random access to the data from inside the subroutine), or where we used to build up data structures with pointers to avoid copying considerable amounts of data, there are no ready-made solutions in Prolog.
On the other hand, in Prolog there is no need and no built-in means to use sophisticated representations of the data (e.g. with type checks): if the bracketing is correct, the data are correctly represented. The integration of the commercial Prolog systems with a knowledge representation scheme (e.g. with a real-size frame handling system) is a task to be solved in the future.
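The term-matching behaviour discussed above can be made concrete with a minimal sketch. The following Python fragment (purely illustrative; no Prolog system is assumed, and all names are invented) unifies nested tuples the way Prolog unifies terms: matching succeeds exactly when the tree structures, i.e. the bracketings, line up.

```python
def unify(t1, t2, bindings=None):
    """Unify two terms; variables are strings starting with '?'.
    Returns a binding dictionary on success, None on failure."""
    if bindings is None:
        bindings = {}
    # Dereference variables one level (enough for this sketch).
    t1 = bindings.get(t1, t1) if isinstance(t1, str) else t1
    t2 = bindings.get(t2, t2) if isinstance(t2, str) else t2
    if t1 == t2:
        return bindings
    if isinstance(t1, str) and t1.startswith('?'):
        return {**bindings, t1: t2}
    if isinstance(t2, str) and t2.startswith('?'):
        return {**bindings, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            bindings = unify(a, b, bindings)
            if bindings is None:
                return None
        return bindings
    return None  # the bracketings disagree: no match

unify(('point', '?x', 2), ('point', 1, 2))   # -> {'?x': 1}
unify(('f', 'a'), ('g', 'a'))                # -> None
```

Note how the success of the match depends only on structural agreement; there is no place in this scheme for type declarations or random access, which is precisely the limitation noted above.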
3.1.2. The declarative style of programming. A frequently praised feature of Prolog is that it offers a pure, control-free, declarative style of programming, simply because the rules/facts can be stored in any order without changing the meaning of the program, and because the sequence of the conditions in the rules is said to be interchangeable. This is true from the logical point of view, but one can scarcely find a meaningful program where this supposition holds. As a matter of fact, in most applications it is far better to look at Prolog programs in the so-called procedural way: here the sequencing of the rules and of the conditions has an importance similar to that of the statements of a subroutine. Compared with the traditional languages, Prolog's execution mechanism has one more point where one has to be cautious: the effects of backtracking and the unbound variables cause much pain, and not only for inexperienced users.
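The order-sensitivity of the procedural reading can be illustrated without Prolog. In the following Python sketch (the rule format is invented for illustration), clauses are scanned top-down and the first match wins, just as Prolog tries clauses in textual order; the two rule lists are logically equivalent but give different first answers.

```python
def first_match(rules, fact):
    """Scan the clause list top-down; return the first matching result."""
    for cond, result in rules:
        if cond(fact):
            return result
    return None

# Two logically equivalent rule sets, differing only in clause order.
rules_a = [(lambda x: x >= 0, 'non-negative'),
           (lambda x: x >= 10, 'large')]
rules_b = list(reversed(rules_a))

first_match(rules_a, 42)   # -> 'non-negative'
first_match(rules_b, 42)   # -> 'large'
```

Declaratively both rule sets say the same thing about 42; procedurally they behave differently, which is the point made above about Prolog programs.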
3.1.3. Prolog's search mechanism. Prolog's chronological backtracking (that is, altering the last changeable decision first until a solution is found) is a transparent, easy-to-use method of implementing simple searches; consequently, it results in programs which are relatively easy to debug. Problems first arise when one wants to overcome this mechanism in an economical way, e.g. when one wants to remember results found on an abandoned branch (what is more, to transplant a branch onto another node of the tree, or to modify the execution order inherent in the program code). For such cases, that is, when one wants to amalgamate the features of Prolog's search mechanism with some other method in order to get a modifiable goal tree not inherent in the code, our recommendation is to avoid making a patchwork of local amendments scattered throughout the system: it is better to implement a new level, to make your own inference engine or to adopt/adapt one from the folklore.

The conclusion is that Prolog's power is in its ability to be used as a very flexible high-level assembler, efficiently performing a lot of awkward work instead of the user. Its execution mechanism is transparent and powerful, and it facilitates writing interpreters for specific formal languages, but, in all cases, the core of the problem solving remains the user's job, just as before. Similarly, it may be the most legible language, but its form does not suggest many more ideas than the form of the old favourites.
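Chronological backtracking itself is easy to sketch: a depth-first enumeration in which the deepest (most recent) choice point is altered first. The following Python fragment (an illustration, not the paper's own code) makes the enumeration order explicit; remembering results from abandoned branches, as discussed above, would require extra machinery on top of it.

```python
def backtrack(choices, partial=()):
    """Enumerate complete assignments depth-first.
    The deepest (most recently made) choice varies fastest, mirroring
    Prolog's chronological backtracking."""
    if len(partial) == len(choices):
        yield partial
        return
    for option in choices[len(partial)]:
        yield from backtrack(choices, partial + (option,))

# Two choice points, two options each; note the enumeration order:
list(backtrack([('a', 'b'), (1, 2)]))
# -> [('a', 1), ('a', 2), ('b', 1), ('b', 2)]
```

Each abandoned branch is simply discarded: nothing computed on ('a', 1) survives into ('a', 2), which is exactly the economy problem raised above.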
3.2. Basic characteristics and application areas of rule-based systems

Besides logic programming, rule-based systems are the other widely used programming environments with an emphasis on the separation of control information and domain-specific knowledge. Compared with the other well-established declarative knowledge representation technique, that is, with the frame-based approach, the characteristic difference is that a dedicated inference engine belongs to each rule-based system. On the other hand, compared with the goal-driven style of the usual direct applications of Prolog, rule-based systems work in a data-driven fashion.

The first point we want to emphasize here is that these distinctions are far from absolute: in solving complex problems, rule-based systems do not work efficiently without some means of adding control information, either hidden in the programming practices or explicitly declared as programming features, e.g. in the form of metarules or as an additional structure of the working memory. As the abundance of experiments around OPS5 shows (see e.g. Refs 10, 18), the formulation of this layer is a manifold problem.

The other measure of the power of a rule-based system is its pattern matching capability. Based on OPS5's success, there is a strong indication that the simplest choice is the best one, that is, when the types of the elements in the working memory belong to a declared set of record-like structures, built solely of the simplest data-types. Besides the efficiency aspects related to the adequacy of the compilation techniques here, this limitation forces a strong, but familiar, programming discipline onto the users. On the other hand, supposing that the rule-based system is an extension of a frame-based one (Ref. 20), the obvious thing to do is to pass its inherent facilities to the rules, too; this is the case e.g. in our Hungarian PROPS, a prototype version of OPS5 in Prolog, where the working memory consists of Prolog terms (Ref. 10).

In the framework of rule-based systems, an important point is their answer to the conflict resolution problem, i.e. the way they choose the rule instance for the next firing. Giving the highest priority to the most recent instance, and accepting the principle of
refraction (i.e. no instance may fire more than once) requires a sophisticated administration of the conflict set. On the one hand, the recency-based firing rule results in a deterministic flow of control, one predictable even for inexperienced users: its role is similar to that of the depth-first, left-to-right execution in Prolog. On the other hand, a somewhat deeper problem arises: since the recency concept is based on the existence of separate elements in the working memory, it is hardly definable for frame-like structures with their access pointers to each other. Having discarded the recency concept, the other alternative is to define an additional structure on the rules and/or on the working memory; then, inside a group of instances picked out for the next firing(s), the particular order might be indifferent or programmable by specific tools. For structuring the rule base and the working memory, the concept of blackboard architectures seems to be the most promising. Here the supervised cooperation of independent problem-solver experts reduces the difficulties of combining various models of the same problem, programs of various origin, etc. Interestingly enough, even the kernel of such a system calls for a rule-based implementation (Ref. 19).

From the applications' point of view, it is well known that rule-based systems are proper tools for a large variety of tasks, ranging from analysis (e.g. the diagnostic tasks) to synthesis (e.g. configuring and redesigning complex objects, making agendas). In the context of IMSs, rule-based systems are promising candidates for implementing a kind of truth maintenance as well. The tracing of the effects of modifications of the knowledge can be facilitated, basically, in two ways: one is to store the dependencies among the facts that may change or among the alternative beliefs (Ref. 2); the other is a reduction of knowledge in cases when it has led to inconsistent deductions.
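The interplay of recency and refraction can be sketched as a toy recognize-act loop; the rule and fact formats below are invented for the illustration and stand in for OPS5's far richer machinery.

```python
def run(rules, wm):
    """Toy production system.
    wm: list of facts, where the index doubles as the recency stamp.
    rules: name -> (condition, action)."""
    fired = set()  # refraction: an instantiation never fires twice
    while True:
        conflict_set = []
        for name, (cond, act) in rules.items():
            for i, fact in enumerate(wm):
                if cond(fact) and (name, fact) not in fired:
                    conflict_set.append((i, name, fact, act))
        if not conflict_set:
            return wm
        # recency: the instance matching the newest fact fires first
        i, name, fact, act = max(conflict_set, key=lambda e: e[0])
        fired.add((name, fact))
        new = act(fact)
        if new is not None:
            wm.append(new)

# Toy rule set (invented): double any number below 8.
rules = {'double': (lambda f: isinstance(f, int) and f < 8,
                    lambda f: f * 2)}
run(rules, [1, 3])
# -> [1, 3, 6, 12, 2, 4, 8]   (the newer fact 3 is processed before 1)
```

The deterministic, recency-driven trace is easy to predict, which is the transparency argument made above; note also how refraction is what forces the explicit `fired` bookkeeping.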
In this second approach, contradiction occurs between the changed facts and the previously derived consequences. Truth maintenance is then the removal of the inconsistent deductions, thus re-building a consistent state of the description (Ref. 21). Rule-based systems may facilitate even the handling of pieces of uncertain knowledge: the working memory elements may have some kind of uncertainty attribute, and the uncertainty of the inferences can be included in the rules. The inference mechanism itself may adopt various kinds of reaction to the uncertainty of the knowledge represented: both a modified criterion of the fulfillment of the conditions
and modified conflict resolution principles may be useful.

Last but not least, rule-based systems are a promising framework for learning systems as well. Learning is a central concept of the so-called second generation expert systems (Ref. 22), distinguished by their effort to combine heuristic reasoning based on rules with so-called deep reasoning based on a model of the problem domain. In the simplest form, the progressive refinement of an initial rule set is based on adding more conditions in order to discriminate among the situations more finely. The improvement of the efficiency of the rule set needs more sophisticated techniques: rules should be combined and generalized, new concepts formed, and the rule set restructured. Although the need for learning systems is beyond doubt, their feasibility for solving the real-life problems of IMSs is so far an open question.
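The dependency-storing route to truth maintenance mentioned in this section can be sketched as follows (illustrative Python; the names are invented): each derived fact records its supports, and retracting a fact sweeps away, transitively, everything that depended on it.

```python
class DependencyNet:
    """Minimal dependency-recording sketch of truth maintenance."""

    def __init__(self):
        self.justifications = {}  # fact -> set of supporting facts

    def derive(self, fact, supports):
        """Record that `fact` was deduced from `supports`."""
        self.justifications[fact] = set(supports)

    def retract(self, fact):
        """Remove fact and, transitively, every consequence resting on it;
        return the set of removed facts."""
        doomed = {fact}
        changed = True
        while changed:
            changed = False
            for f, sup in self.justifications.items():
                if f not in doomed and sup & doomed:
                    doomed.add(f)
                    changed = True
        for f in doomed:
            self.justifications.pop(f, None)
        return doomed

net = DependencyNet()
net.derive('premise', [])
net.derive('step1', ['premise'])
net.derive('step2', ['step1'])
net.retract('premise')   # -> {'premise', 'step1', 'step2'}
```

This is the dependency-tracing tool called for in Section 2: after a wrong default or a faulty assumption, exactly the affected deductions are revised, and the description is re-built into a consistent state.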
3.3. The improvement of simulation methods by AI tools

Although simulation techniques are well-established, traditional tools of process design and execution, they can be greatly improved by the help of recent AI techniques, both by making the interfaces more convenient and by facilitating new description methods of the objects under simulation. The relation of an expert system incorporating the extra features and the simulation tool may vary from a loosely coupled one to an embedding structure (Ref. 11). The most obvious application of the expert system approach in simulation is the case when the specific parameters or, in general, the structure of the simulation is generated by non-conventional tools, ranging from model generation through ad hoc symbol manipulating techniques to an advisor helping in the proper setting of the underlying statistical evaluation. In both cases, the expert system's help is focused on supporting those features which might be unfamiliar to the manufacturing engineer, and the aim is to increase the credibility and validity of the simulations. Data-driven inference methods can greatly improve the analysis of the results of the simulation by recognizing situations, making summaries, etc. Through the improved interfaces, the model refinement process becomes more efficient and the net result more appropriate: the expert system elements test the simulation and vice versa. As simulation becomes more mobile, its application area, originally covering the aggregate scheduling level, enlarges towards the detailed problems, and simulation becomes a tool for on-line opportunistic scheduling.
The rule-based approach can be adapted to describe the operating rules of the environment, the processes and the evaluation; unfortunately, this technique has severe disadvantages in the case of events that cannot be scheduled to a specific time by the rules. An unconventional application of simulation techniques (in the case of the event-oriented approach, similar to that of SIMULA 67 (Ref. 6)) is that of planning the course of actions of communicating processes in terms of simulation. Here again, the programming concepts of a simulation approach are used to describe the environment, the processes and their communication, but with an addition: the system has a goal to reach, and the simulation mechanism runs to find the choices which fulfill the goal.
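The goal-directed use of simulation just described can be sketched as a search over candidate action sequences, each of which is run through the simulator and kept only if it reaches the goal; the state model and action names below are invented for the illustration.

```python
from itertools import product

def simulate(state, actions, transition):
    """Run a sequence of actions through the state-transition model."""
    for a in actions:
        state = transition(state, a)
    return state

def plan(start, goal, options, horizon, transition):
    """Search action sequences up to `horizon` steps;
    return one whose simulated outcome fulfills the goal, else None."""
    for n in range(1, horizon + 1):
        for seq in product(options, repeat=n):
            if simulate(start, seq, transition) == goal:
                return list(seq)
    return None

# Toy model: the state is a position, actions move it one step.
moves = {'left': -1, 'right': +1}

def step(pos, act):
    return pos + moves[act]

plan(0, 2, ['left', 'right'], 3, step)   # -> ['right', 'right']
```

The simulator describes the environment and the processes exactly as in ordinary simulation; the only addition is the goal test that selects among the simulated futures, which is the point of the unconventional application above.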
3.4. Natural language interfaces for intelligent manufacturing

In CAD/CAM, the least widely applied topic of AI research is natural language processing. The problem of "natural language" processing for manufacturing cannot be equated with the problem of the interpretation and generation of utterances in a well-chosen sub-English: it should incorporate the other media of engineering communication, like drawings and charts, as well. Although the implementation of some assembly of canned English question-answer pairs may improve the user-friendliness of a system, in the long run IMSs need tools which enable the user to form an accurate model of the system. Even if the knowledge engineering people may be successful in building an intelligent system with the help of clumsy interfaces, the user might, in the best case, not tolerate the lack of insight into the "thinking" of the system. (At large computer installations, the misconceptions about the principles of the operating system and the underestimation of the system's abilities, rooted in the lack of proper interfaces, usually result in the clients' fanciful misuse of the operating system's features, or in a deteriorated operating practice; in short, in a decreased performance of the machine.) On the other hand, the system should be able to acquire a correct identification of the user: no cooperative behaviour, no task-oriented dialogue can be produced without proper means for expressing the user's intent, interest and beliefs. It seems that, parallel with the efforts to produce powerful, dedicated tools for particular kinds of natural language interactions in CIM, there is an urgent need to clear up the most general features of engineering interactions.
4. PRODUCTIVITY ADVANTAGES OF USING AI TECHNIQUES FOR IMS APPLICATIONS DEVELOPMENT

From the software engineering point of view, the AI approach can be characterized by a new routine development path for software systems. With the use of AI tools, the traditional development phases (that is, producing detailed specifications typically without computer-aided tools, coding in a separate phase, testing, respecifying, re-coding, etc.) can be replaced with a new specification phase, carried out in the framework of a formal language, followed by a second phase of interpretative testing executed on the specification itself. If one needs a very efficient final version, the incremental reformulation of these runnable specifications may end by entering the old specification/coding/testing cycle, but at the time of turning to the old tools, the specification will be a better-defined one or, usually, a prototype. So it may happen that the finished expert system does not employ any AI techniques at all. The prototypes implemented with AI techniques are recommended to be employed as final versions in those cases where the formulation of the task is problematic, or where the details of the application task frequently change.

By using the AI tools, the test phase becomes more transparent, since the user may trust the soundness of the interpreter used for testing. Locating the errors in the more-or-less unstructured pieces of knowledge can be greatly helped by the explanation facilities of the inference engine. On the other hand, there is a peculiar drawback of this kind of development, namely that the piecewise development and the easy modification of the specification do not encourage a deep understanding of the problem domain as a whole (Ref. 3). Undisciplined modifications may lead to an observationally adequate system with poor descriptive and explanatory features.
In order to avoid this effect, sooner or later some methods of software engineering should be applied inside the AI approach as well.

5. CONCLUSIONS

The present paper has been confined strictly to showing that there are several techniques, the ones mentioned here and others as well, that offer conceptually rich and/or efficient frameworks for solving specific subproblems in specific subfields. Some of them are not included in the mainstream of AI research, being themselves applications of limited scope; sometimes they might be picked up even from fields far from discrete manufacturing.
The unjustified extrapolation of knowledge engineering expertise from one field to a neighbouring one may result in a dramatic decrease of efficiency and power, so the transport of system kernels which have been proven adequate for one specific purpose may be disappointing in the neighbouring areas. It seems that the challenge to build general-purpose systems, consolidated in their methods and techniques, would tax the resources of even the largest organizations. However, sets of widely applicable methods and techniques may quite possibly be within reach.
REFERENCES
1. Descotte, Y., Latombe, J-C.: GARI: A Problem Solver That Plans How to Machine Mechanical Parts. Proc. of the 7th IJCAI, pp. 766-772, 1981.
2. Doyle, J.: Truth Maintenance Systems for Problem Solving. MIT AI Laboratory Report AI-TR-419, 1978.
3. Doyle, J.: Expert Systems and the Myth of Symbolic Reasoning. IEEE Transactions on Software Engineering, SE-11, pp. 1386-1390, 1985.
4. Forgy, C.L.: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem. Artificial Intelligence, 19, pp. 79-91, 1982.
5. Fox, M.S.: Industrial Applications of Artificial Intelligence. To appear in Artificial Intelligence in Manufacturing (ed. T. Bernold), Springer, Berlin, 1986.
6. Futó, I.: Combined Discrete/Continuous Modeling and Problem Solving. In AI, Graphics and Simulation (ed. G. Birtwistle), The Society for Computer Simulation, San Diego, pp. 23-28, 1985.
7. Greene, T.J., Sadowski, R.P.: Cellular Manufacturing Control. J. Manuf. Syst., 2, pp. 137-145, 1983.
8. Hatvany, J.: The Efficient Use of Deficient Knowledge. Annals of the CIRP, 32(1), pp. 423-425, 1983.
9. Hatvany, J.: Available and Missing AI Tools. Annals of the CIRP, 35(2), pp. 433-435.
10. Horváth, M., Márkus, A.: Prototype of a Prolog-based Design Engine. Proc. of Int. Symp. on Design and Synthesis, Tokyo, pp. 7-10, 1984.
11. O'Keefe, R.: Simulation and Expert Systems: A Taxonomy and some Examples. Simulation, 40, pp. 10-16, 1986.
12. Kempf, K.G.: Manufacturing and Artificial Intelligence. Robotics, 1, pp. 13-35, 1985.
13. Kim, S.H., Suh, N.P.: Application of Symbolic Logic to the Design Axioms. Robotics and Computer Aided Manuf., 2, pp. 55-64, 1984.
14. Kusiak, A.: Applications of Operational Research Models and Techniques in Flexible Manufacturing Systems. Eur. J. Operational Res., 24, pp. 336-345, 1986.
15. Latombe, J-C.: The Role of Artificial Intelligence in CAD/CAM Analyzed through Several Examples. Proc. of Prolamat 1985, Paris, pp. 88-96, 1985.
16. McDermott, J.: R1: A Rule-based Configurer of Computer Systems. Artificial Intelligence, 19, pp. 39-88, 1982.
17. Nakazaki, R. et al.: Design of a Co-operative High Performance Sequential Inference Machine (CHI). NEC Res. and Dev., No. 80, pp. 10-18, January 1986.
18. Rosenthal, D.: Adding Meta Rules to OPS5: A Proposed Extension. SIGPLAN Notices, 20, pp. 79-86, 1986.
19. Rychener, M.D. et al.: A Rule-based Blackboard Kernel System: Some Principles in Design. Technical Report, Carnegie-Mellon University, The Robotics Institute, 1984.
20. Rychener, M.D.: PSRL: An SRL-Based Production Rule System. Technical Report, Carnegie-Mellon University, The Robotics Institute, 1984.
21. Schor, M.I.: Using Declarative Knowledge Representation Techniques: Implementing Truth Maintenance in OPS5. Proc. of the 1st Conf. on AI Applications, Denver, pp. 261-266, 1984.
22. Steels, L.: Second Generation Expert Systems. Future Generation Computer Systems, 1, pp. 213-221, 1985.
23. Szuba, T.: Automatic Program Synthesis System for N.C. Machine Tools Based on PC Prolog. Angewandte Informatik, 26, pp. 234-243, 1984.
24. Szuba, T.: PC-Prolog for Process Control Applications. Angewandte Informatik, 26, pp. 164-171, 1984.
25. Tomiyama, T., Yoshikawa, H.: Extended General Design Theory. Prepr. of IFIP W.G. 5.2 Working Conf. on Design Theory for CAD, Tokyo, pp. 75-104, 1985.
26. Vesonder, G.T. et al.: ACE: An Expert System for Telephone Cable Maintenance. Proc. of the 8th IJCAI, Karlsruhe, pp. 116-121, 1983.