Machine learning approach to machinability analysis




Computers in Industry 37 (1998) 185–196

A. Sluga a,*, M. Jermol b, D. Zupanič c, D. Mladenić c

a Department of Control and Manufacturing Systems, University of Ljubljana, Aškerčeva 6, 1000 Ljubljana, Slovenia
b DZS, Inc., Mestni trg 26, 1000 Ljubljana, Slovenia
c Department of Intelligent Systems, Jozef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia

Abstract

Optimisation and automation of the determination of cutting conditions in operation planning depend significantly on the availability of reliable machinability data and knowledge. In order to improve and automate tool selection and the determination of cutting parameters in operation planning, the existing machinability knowledge has to be re-formulated and generalised. In this paper an existing machinability database is analysed by means of machine learning methodology. A multi-stage experiment has been carried out, comprising (1) a preparatory phase, in which higher-level attributes were constructed manually and similar learning examples were grouped to obtain more consistent decision trees, and (2) learning of relations between the workpiece materials to be machined, cutting tool features and cutting conditions. Within the learning process several decision trees have been synthesised, predicting tool features, cutting geometry and cutting parameters from a set of attribute values. The investigation has provided an extended insight into the machinability domain, as well as a possible knowledge synthesis regarding the workpiece material to be machined and the cutting tool as a bottom line in operation planning for NC-programming and automated process planning. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Knowledge acquisition; Clustering; Tree induction; Cutting tool; Cutting conditions

1. Introduction

Machinability data, which include the selection of appropriate cutting tools and of the machining parameters speed, feed and depth of cut, play an important role in the efficient utilisation of machine tools and thus significantly influence the overall manufacturing costs [3]. High performance of machining operations on CNC machine tools in terms of machining time and cost can only be achieved with machinability data of high quality and reliability. If a tool is not an appropriate one, or if we are too conservative in the determination of cutting

* Corresponding author. E-mail: [email protected]

conditions, the result may be a long machining time and a high cost of a particular operation. On the other hand, if the cutting conditions are too severe, the frequency of tool changes may be too high and the losses due to sudden tool breakage may increase. The resulting costs of manufacturing operations may increase disproportionately, and a production schedule may fail due to higher variability of operation times, unforeseen disturbances in the production flow, and low predictability of tool logistics requirements. Computer-aided process planning systems have not yet reached the level of maturity required for satisfactory industrial applications. This is particularly noticeable in operation planning, selection of operation sequences and routing. This is partially because skilled humans do a better job, and partially due




to a lack of accurate tooling and machinability databases [2]. There are several approaches to extracting machining knowledge. An approach to tool selection in grinding by inductive learning techniques based on shop-floor experience has been demonstrated by Filipič and Junkar [5]. Park et al. [17] have demonstrated the use of a heuristic modification process by a neural network for modifying recommended cutting conditions retrieved from a database. They have shown how to include shop-floor practice to improve the accuracy of cutting conditions in operation planning. A similar approach was also used by Schulz et al. [20]. The training data were taken from a Sandvik Coromant catalogue and 40 machining experiments were performed on the shop floor to test the generated knowledge. The assessment of the results by the workshop experts has been encouraging. An approach to modelling data selection by the theory of fuzzy sets is demonstrated by El Baradie [3]. The model is based on the assumption that the relationship between a given workpiece material hardness and the recommended cutting speed for that material is an empirical relationship. The model has been applied to data extracted from the Machining Data Handbook [8], which provides tool material grade, feed and speed recommendations for a variety of work materials and different operations. The results show a good correlation between the Machining Data Handbook's recommended cutting speeds and those predicted using the fuzzy logic model. The present research is focused on machinability data analysis applied to systematically gathered machining data from laboratory tests and workshop experience [9], in order to develop generalised knowledge expressed as a set of propositions. The objective of the research is twofold, namely (1) to get better insight into the machinability domain and (2) to enable more effective use of the comprehensive machinability database as a computerised knowledge base for NC-programming and automated process planning.

2. Machinability issues

Machining response data, such as tool life, accuracy, surface roughness, cutting forces, etc., depend on the work material, the machining parameters and the operation characteristics. The mathematical models describing these responses are developed as functions of the machining parameters. The modelled responses and the best workshop practice have been compiled in various forms for industrial application. In operation planning the tool choice is based on the geometry of the feature to be machined, and the tool material is selected with reference to the workpiece material to be machined. The cutting conditions are then optimised in terms of minimal cost or time. The selected tool and cutting parameters largely influence the quality and costs of machining. Tool material characteristics of different tool manufacturers deviate considerably, and it is impossible to standardise cutting material grades in accordance with the cutting characteristics of the material to be machined. Besides, in a factory there is only a limited set of cutting tools for a variety of machining operations and work materials, so the choice is very subjective. However, the choice is based on different kinds of recommendations. The main source of recommendations comes from tool manufacturers in the form of tables and descriptive, loosely defined rules. An important source of information on tool characteristics and machinability data comes from a few associations specialised in collecting cutting data, e.g., the Machining Data Handbook [8]. A notable one is the Exapt–Infos association, whose machinability data model is suitable for tool selection and cutting data optimisation. The model itself is empirically defined. Because of the high costs of collecting data, only a fraction of the empirical data for all possible material pairs, i.e., tool/cutting material and workpiece material to be machined, is available. The information is implicit, and can be used only for cases which fit the collected set. Machinability data inherit the probabilistic nature of machining processes. Besides, non-unique descriptions of part material and tool, missing data, and data coming from different test environments introduce noise into the data. However, the problem is to build a knowledge base for more general use in automated process and operation planning. In this research we have built a knowledge base from the existent Infos machinability data model for use in computer-aided operation planning. In this context


the basic goals of the experiment were: (i) elicitation of the hidden implicit knowledge from the machinability data, (ii) redefinition of the existing machinability data model, and (iii) development of a knowledge base module for tool material selection and determination of cutting parameters for use in computerised operation planning.

3. Machine learning algorithms used

There are several machine learning techniques based on learning from examples which are suitable for different kinds of knowledge synthesis in manufacturing planning domains [16]. In supervised machine learning, a set of examples with known classifications is given. An example is described by an outcome (class) and the values of a fixed collection of parameters (attributes). Each attribute can either have a finite set of values (discrete attribute) or take real numbers as values (continuous attribute). For instance, in the machinability prediction problem the examples contain material, cutting parameters and the selected tool, and comprise both continuous (e.g., yield strength) and discrete (e.g., material group) attributes. Machine learning tools [13] have been applied in a variety of real-world domains. These tools enable the induction of knowledge in different forms, for example in the form of rules or decision trees. In this study, the program Magnus Assistant [14,15] and the program CART [1] as implemented in the S-Plus package [21] were used to construct decision (sometimes also referred to as classification) trees. In parts of this study, hierarchical agglomerative clustering as implemented in the S-Plus package was also used.

3.1. Induction of decision trees

There are different algorithms for the induction of decision trees. In this research, two programs for the induction of decision trees are used, namely Magnus Assistant and CART. Both tree induction algorithms belong to the ID3 family of systems for top-down induction of decision trees [18]. The Magnus Assistant algorithm recursively builds a binary decision tree.


The nodes of the tree correspond to attributes, arcs correspond to values or sets/intervals of values of attributes, and leaves (terminal nodes) correspond to predicted classes. In each recursive step of the decision tree construction, an attribute is selected and a subtree is built. Algorithm 1 gives an outline of the Magnus Assistant algorithm.

if empty example set then
    this node becomes a null-node labelled with the naive Bayesian class distribution;
else
    if all examples belong to the same class then
        this node becomes a leaf labelled with the class of the examples;
    else if any of the pre-pruning criteria is satisfied then
        this node becomes a leaf labelled with the majority class of the examples;
    else
        for the next node select the best attribute (which minimises the expected entropy of the training set);
        split the set of examples in the node into disjoint subsets according to the binarized values of the attribute;
        recursively repeat the whole algorithm for each subset;
    end if
end if

Algorithm 1. Magnus Assistant algorithm.

The recursion stops in three cases. First, when all examples belong to the same class, meaning that the generated leaf is labelled with the class of the examples. Second, when there are no examples, meaning that the generated null-leaf indicates an uncovered area in the example space. The algorithm labels the null-leaf with the class distribution generated from the examples in the root of the decision tree using the naive Bayesian classifier [13], since there are no examples in a null-leaf to be used for labelling it. Third, when one of the pre-pruning criteria described later is satisfied, meaning that the leaf is labelled with the class distribution generated from the examples in it. At each recursion step, the selection of the best attribute is based on the 'informativity' of the attribute [18], aimed at minimising the expected number of tests needed for the classification of new cases. Tree construction is thus heuristically guided by choosing the 'most informative' attribute at the current node of a partially built tree. Informativity of an attribute (usually referred to as information gain) measures the information gained by partitioning a set



of examples according to the values of the attribute A:

    information_gain(A, E) = entropy(E) − Σ_v P_v(E) × entropy(E_v)

where E denotes the set of examples at the current node of the decision tree, P_v(E) denotes the probability of attribute value v in the example set E, and E_v denotes the subset of examples for which attribute A has value v. The summation is over all the values of attribute A. The definition of entropy is taken from information theory; in our case it can be described as the average amount of information needed to identify the class of an example taken from the set of examples E:

    entropy(E) = − Σ_c P_c(E) × log₂ P_c(E)

where P_c(E) stands for the probability of identifying an example as belonging to the class c, approximated from a set of examples E, and is usually calculated as the proportion of examples belonging to the class c. The summation is over all the classes.

Magnus Assistant also incorporates mechanisms for dealing with noisy data (i.e., data comprising errors). The main idea underlying this mechanism is to 'prune' a decision tree in order to avoid overfitting the data set of examples, which may be erroneous. Magnus Assistant uses two kinds of tree pruning techniques to deal with noisy data: post-pruning and pre-pruning. Post-pruning is used after tree induction to substitute some of the subtrees with terminal nodes (leaves). In post-pruning, the expected quality (predicted classification accuracy) of each internal node is compared with the expected quality of its subtrees to decide whether to prune the subtrees. On the other hand, pre-pruning is used during tree induction, and not after it as in the case of post-pruning. Pre-pruning enables noise handling by stopping the generation of a subtree and making a leaf node labelled with the majority class of the current examples. There are three pre-pruning criteria that can be used by the above algorithm:
• stop if the number of examples in the current node is smaller than the threshold value;
• stop if the majority class in the current example set has a higher probability than the threshold probability (the so-called 'minimal weight threshold');
• stop if the informativity of the best attribute is below the threshold value.
Threshold values for all three pre-pruning parameters are set by the user and should reflect the expected level of noise in the data.

CART has a similar tree induction algorithm (see Algorithm 1) to Magnus Assistant, but without the generation of null-leaves, which was one of the important issues in this research. The tree pruning mechanism is also different, but the main difference between the two systems is in the criterion used for the selection of the best attribute at each tree node. Magnus Assistant uses information_gain, which has entropy as the impurity measure of an example set, while CART uses the gini_index as impurity measure:

    gini(A, E) = gini_index(E) − Σ_v P_v(E) × gini_index(E_v)

where E denotes the set of examples at the current node of the decision tree, P_v(E) denotes the probability of attribute value v in the example set E, and E_v denotes the subset of examples for which attribute A has value v. The summation is over all the values of attribute A. The formula for calculating the gini_index is the following:

    gini_index(E) = 1 − Σ_c P_c(E)²

where P_c(E) stands for the probability of class c approximated from a set of examples E, and is usually calculated as the proportion of examples belonging to the class c. The summation is over all the classes.
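To make the two impurity criteria concrete, the following minimal Python sketch (not code from Magnus Assistant or CART; the attribute names and example values are hypothetical) computes the entropy, information gain and Gini index defined above for a discrete attribute.

```python
from collections import Counter
from math import log2

def entropy(classes):
    """Average information needed to identify the class of an example."""
    n = len(classes)
    return -sum((c / n) * log2(c / n) for c in Counter(classes).values())

def gini_index(classes):
    """Gini impurity: 1 minus the sum of squared class probabilities."""
    n = len(classes)
    return 1.0 - sum((c / n) ** 2 for c in Counter(classes).values())

def attribute_score(examples, attribute, target, impurity):
    """Reduction of impurity obtained by splitting `examples` on `attribute`.

    With impurity=entropy this is the information gain used by Magnus
    Assistant; with impurity=gini_index it is the CART-style criterion.
    """
    classes = [e[target] for e in examples]
    score = impurity(classes)
    n = len(examples)
    for value in {e[attribute] for e in examples}:
        subset = [e[target] for e in examples if e[attribute] == value]
        score -= (len(subset) / n) * impurity(subset)
    return score

# Hypothetical machinability examples (attribute names are illustrative only).
examples = [
    {"material_group": "a", "heat_treatment": "soft_annealed", "tool_grade": "HC"},
    {"material_group": "a", "heat_treatment": "tempered",      "tool_grade": "HC"},
    {"material_group": "b", "heat_treatment": "tempered",      "tool_grade": "P10"},
    {"material_group": "b", "heat_treatment": "soft_annealed", "tool_grade": "P25"},
]

print(attribute_score(examples, "material_group", "tool_grade", entropy))     # information gain
print(attribute_score(examples, "material_group", "tool_grade", gini_index))  # Gini reduction
```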


To classify a new case, a path from the root of the tree is selected on the basis of the attribute values of the new example to be classified. In this way, for a given example, the path leads to a leaf which assigns the class that labels the leaf. The selected path may be viewed as a generalisation of the specific example for which the prediction is being determined. If a leaf is labelled with more than one class, each with a probability of class prediction, then the class with the highest probability is selected for the classification of the new case. The entire decision tree reflects the detected regularities in the data, describing the properties that are characteristic of the subsets of examples belonging to the subtrees. The ordering of attributes (from the root towards the leaves of the tree) also reflects the importance of the attributes for the class predicted in the leaf. The measure of attribute informativity is the selected measure of importance.

3.2. Clustering

The aim of cluster analysis is to classify objects into groups [4,7]. Many clustering algorithms are based on hierarchical agglomeration, which starts with each object forming a separate group and in which objects or groups close to one another are successively merged. Other algorithms use iterative relocation, which starts with an initial classification and attempts to improve it iteratively by moving objects from one group to another. The iterative relocation method is limited in that it requires one to specify the number of clusters in advance. Clustering algorithms are based on heuristic measures of closeness between groups.

3.3. Hierarchical agglomeration

All clusterings used here are based on hierarchical agglomeration. The result of this algorithm is a dendrogram. Each node of the dendrogram represents a group of objects merged in this node. Each leaf of the dendrogram represents an object. More similar objects are merged at lower levels of the dendrogram than less similar ones.

each object is a group: C_i = {X_i}, X_i ∈ E, i = 1, 2, ..., n;
the initial distance matrix is given or calculated;
repeat
    select the two groups with the smallest distance:
        d(C_p, C_q) = min_{u,v} d(C_u, C_v), u = 1, 2, ..., n_j − 1, v = u + 1, u + 2, ..., n_j;
    merge the groups C_p and C_q into a new group C_r = C_p ∪ C_q;
    remove the groups C_p and C_q, and insert the group C_r;
    calculate the distances d between the new group C_r and the rest of the groups;
until a single group remains;

Algorithm 2. Hierarchical agglomeration algorithm.

Algorithm 2 starts with each object in a group of its own. The initial distances between each pair of objects can be given or calculated according to some measure. For instance, the distance between two objects can be the sum of squared differences of the same parameter values. At each iteration the algorithm merges two groups to form a new group. The number of iterations is equal to the number of objects minus one. At the end all the objects are in a single group. In each merge step, the algorithm selects the two groups with the smallest distance. These two groups are replaced with the new one. The most usual methods for calculating the distances between the new group and the rest of the groups are:
• single linkage: d(C_p ∪ C_q, C_k) = min(d(C_p, C_k), d(C_q, C_k)) [6],
• complete linkage: d(C_p ∪ C_q, C_k) = max(d(C_p, C_k), d(C_q, C_k)) [12],
• average linkage: d(C_p ∪ C_q, C_k) = 1/((n_p + n_q) n_k) Σ_{U ∈ C_p ∪ C_q} Σ_{V ∈ C_k} d(U, V) [22].
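To illustrate the agglomeration procedure and the three linkage rules listed above, the sketch below uses SciPy's hierarchical clustering as a stand-in for the S-Plus routines actually used in the study; the work-material feature values are invented for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical work-material descriptions: [hardness HB, yield strength MPa].
materials = np.array([
    [170.0, 310.0],
    [180.0, 330.0],
    [240.0, 520.0],
    [250.0, 540.0],
    [120.0, 200.0],
])

# Hierarchical agglomeration with the three distance-update rules from the text.
for method in ("single", "complete", "average"):
    Z = linkage(materials, method=method, metric="euclidean")
    # Z encodes the dendrogram: each row records which two groups were merged
    # and at what distance; here the dendrogram is cut into three groups.
    groups = fcluster(Z, t=3, criterion="maxclust")
    print(method, groups)
```

The choice of linkage changes only how the distance between a newly merged group and the remaining groups is updated, which is exactly the difference between the single, complete and average rules given above.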

4. Application of machine learning in machinability knowledge synthesis

Several factors, such as the workpiece, cutting tool, cutting conditions, machine tool and clamping devices, influence cutting operation performance in terms of part quality and operational productivity, cost and time. To satisfy the requirements and objectives set by the design and planning system, we have to be proficient in handling technological information. The Infos machinability model incorporates over 50 parameters and as such represents highly valuable and not yet fully exploited information. The data contained in the Infos database have been collected through extensive experiments in industry and specialised research laboratories worldwide.

4.1. Machinability model

The machinability model is composed of several empirically defined models. The database encompasses a machinability model which describes parameter data



for different tool–work combinations for turning, drilling and milling operations [9]. In this learning experiment only the data for turning were analysed and 50 attributes were considered. In terms of machinability, the technological information is presented in terms of: (a) the workpiece, described by the work material (material designation, chemical composition, mechanical properties such as yield strength and hardness), heat treatment, shape, pre-machining, surface conditions, etc.; (b) the cutting tool, described by the cutting material (cutting material designation), tool type, shape, tool geometry, etc.; (c) the cutting geometry, including cutting angles, cutting direction, etc.; (d) the cutting conditions, such as cutting speed, depth of cut and feed; and (e) operation specifics such as cooling, machine tool, workpiece clamping, etc. The extended Taylor equation is used as the tool-life computational model; it incorporates speed, depth of cut, feed, cutting time and tool wear as variables. The Kienzle computational model, as defined elsewhere, is used for cutting force modelling in terms of depth of cut and feed. Additionally, the constraints of both models are specified in terms of maximum and minimum values of cutting speed, depth of cut and feed.

4.2. The learning process

Several learning tests were applied in this research [10,11,19]. In different tests the learning algorithms described above were used separately or in combination. There is no uniform way to select the most appropriate methods for learning in a specific domain. In the machinability domain we applied a two-phase learning process. The preparatory phase consisted of (1) redefining the original machinability model to obtain a transformed set of attributes, and (2) grouping similar learning examples for building more consistent decision trees. In the learning phase, the relations between work material, cutting tool features and cutting conditions were learned. In the preparatory experiments all attributes were included. Since the database is organised by Material Number according to standard DIN 17007, the values of the attribute Material Number were used as the class in classification tree induction. The resulting classification tree pointed out that the problem should be divided into subproblems. The same data were used in clustering, leading to similar conclusions, i.e., that the

sensible relations relevant for machining have to be taken into account. Therefore, the original data model was redefined according to the suggestions given by the domain expert. For instance, instead of using the two interval boundaries of the cutting conditions, the arithmetic mean was used. Then, three main subproblems were defined based on the technological rules in operation planning:
• tool selection according to work material,
• selection of cutting geometry, and
• determination of cutting conditions.
The tool, cutting geometry and cutting conditions are each described by several attributes. For each of the above subproblems a set of classification trees was induced. Each tree considered the influence of the work material on one of the attributes describing the tool, cutting geometry or cutting conditions. For example, Tool Material Grade was used as the class and Material Number, Hardness, Heat Treatment, Yield Strength and Pre-Machining as attributes. The analysis of the induced set of classification trees pointed out the need for additional database restructuring. For example, hardness is described by two attributes, one giving the testing method and the other the measured value. A hardness conversion table was used to convert all the measured values (given in HB, HV, HRB, HRC) into HB values. In further experiments some lower-level attributes were described by higher-level attributes according to domain knowledge. For example, Material Number, originally described with over 200 values, was replaced by Material Group having 23 values, which also conforms to standard DIN 17007. In order to get estimates for uncovered parts of the example space by means of null leaves, Magnus Assistant was applied in further experiments. The modified data sets were used by Magnus Assistant in tree induction using different noise handling (pre-pruning, post-pruning). The commonly used machine learning approach of inducing/testing classification trees based on the hold-out method (70%/30%) was repeated ten times. More precisely, 70% of randomly selected examples were used as training examples and the remaining 30% as testing examples. Summary results, obtained as an average classification accuracy over ten repetitions, confirmed that the proposed approach is promising.
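The repeated hold-out evaluation can be sketched as follows. This is a schematic reconstruction rather than the original experiment: the input file and attribute names are hypothetical, and scikit-learn's CART-style decision tree stands in for Magnus Assistant and the S-Plus CART implementation.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical machinability table: discrete and continuous work-material
# attributes, with Tool Material Grade as the class to be predicted.
data = pd.read_csv("machinability.csv")   # hypothetical file name
attributes = ["material_group", "heat_treatment", "hardness_hb", "yield_strength"]
target = "tool_material_grade"

X = data[attributes].copy()
discrete = ["material_group", "heat_treatment"]
X[discrete] = OrdinalEncoder().fit_transform(X[discrete])  # encode discrete attributes
y = data[target]

# Ten repetitions of the 70%/30% hold-out method; report the average accuracy.
accuracies = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=seed)
    tree = DecisionTreeClassifier(min_samples_leaf=5)  # crude stand-in for pre-pruning
    tree.fit(X_tr, y_tr)
    accuracies.append(accuracy_score(y_te, tree.predict(X_te)))

print("average classification accuracy:", np.mean(accuracies))
```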


Fig. 1. The decision tree for the relation tool material grade–material group.


Fig. 2. The classification tree for tool material grade related to work material attributes (Magnus Assistant).


5. Results and discussion

The existing Infos machinability data model was redefined on the basis of the analysis of the test results performed on the original data. For example, hardness was originally described by two attributes, which were converted into one (HB values). In the final experiments 19 classification trees were induced; one of them is given in Fig. 1. The classification tree in Fig. 1 represents the influence of Material Group, Heat Treatment, Hardness and Yield Strength on Tool Material Grade. The most important attribute is Material Group, appearing on the first two levels of the tree. The Tool Material Grade values given in the leaves of the tree represent the majority value in that leaf. Different leaves assign different probabilities to the majority value. That is the reason why a tool material grade may appear in neighbouring leaves (e.g., HC on the left-hand side of the tree in Fig. 1). For tool material selection the important attributes are Material Group, Heat Treatment, Hardness and Yield Strength. In contrast to existing practice and knowledge, where Hardness is the most important attribute, these experiments show that Material Group is to be considered first. This was evident from the decision trees, where Material Group appeared closer to the root of the trees than Hardness, which indicates the importance of the attribute. That also means that in further research the investigation should be focused on the relations between the work material structure and the tool features, cutting geometry and cutting parameters. The classification tree in Fig. 2 is similar to the tree in Fig. 1. It is induced on the same data set but using a different machine learning algorithm, namely the first tree is induced using CART and the

second one using Magnus Assistant. The main difference between them is the appearance of null leaves generated by Magnus Assistant. Null leaves represent the uncovered space of examples. For instance, in the data there are no examples for the work material described by: Hardness between HB 0 and HB 173, Heat Treatment being soft annealed, and Material Group values a, b, c, d, e, h, k, t (see Fig. 2). This indicates that the learning domain, i.e., the collection of machinability data, is unevenly covered by learning examples. Based on the attribute value sets as defined in the nodes of the classification trees, one can draw up guidelines for the minimum set of machinability tests to perform in order to fill the domain so as to obtain more consistent knowledge for automated operation planning. Classification accuracy, commonly used in machine learning, was also applied here to measure the quality of the induced models. It is calculated as the percentage of correctly classified examples among all the classified examples. The classification accuracy of the decision trees generated by Magnus Assistant was 65% before the null-leaf elimination. After that, the classification accuracy rose to 79%. Thus, null leaves were eliminated, since the classification returned by null leaves is based on assumptions that are not valid for the examined data. As a result, for 10–15% of the testing examples the system did not predict any class value.
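A small sketch of the accuracy measure used here, under the assumption that the classifier may abstain (return no class) on examples routed to an eliminated null leaf: accuracy is computed only over the classified examples, while coverage records the fraction of examples that received any prediction at all. The class labels and predictions below are invented for illustration.

```python
def accuracy_and_coverage(true_classes, predictions):
    """Accuracy over classified examples only; coverage = fraction classified.

    predictions[i] is None when the example fell into an eliminated null leaf
    and the tree therefore returned no class value.
    """
    classified = [(t, p) for t, p in zip(true_classes, predictions) if p is not None]
    coverage = len(classified) / len(true_classes)
    accuracy = sum(t == p for t, p in classified) / len(classified) if classified else 0.0
    return accuracy, coverage

# Hypothetical outcome: 10 test examples, 2 of them unclassified (None).
true_classes = ["HC", "HC", "P10", "P25", "HC", "P10", "HC", "P25", "P10", "HC"]
predictions  = ["HC", "P10", "P10", "P25", None, "P10", "HC", None, "P10", "HC"]
print(accuracy_and_coverage(true_classes, predictions))  # (0.875, 0.8)
```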

Table 1
Material groups generated from examples

New material group    Material groups (DIN 17007)
1    Cast iron with bullet graphite, High-grade heat resisting steel, Ni alloy with Co Cr and Mo
2    Cast iron with lamellar graphite, Cast iron with bullet graphite with Ni, Tool alloy steel with Cr
3    Cast iron with lamellar graphite, Cast iron with bullet graphite, Tool alloy steel with Cr–Mo Cr–Mo–V Cr–V, High grade heat resistant steel, Ni alloy with Co Cr and Mo, Al–Si alloy binary, Al–Si alloy with Mn
4    Fundamental carbide steel, Stainless steel with Mo, Nitriding steel
5    High grade steel, High grade quality steel, Tool alloy steel with Ni
6    Malleable cast iron, Structural steel
7    Tool steel grade, Tool alloy steel with W Cr–W, Stainless steel with other elements




Table 2
The machinability rules in a Prolog syntax

wstcode(0.602): EDGE_LENGTH < 12, SMALL_ANGLE > 0, member(SHAPE_NAME, [Square]), THICKNESS_VALUE > 3,18.
wstcode(0.704): EDGE_LENGTH < 12, SMALL_ANGLE > 0, member(SHAPE_NAME, [Square]), CLE_ANG > 10; CLE_ANG < 10, M_MIN < 0,38.
...
wstcode(1.4021): ALPHA > 5.5, VC > 155, ..., member(DISTINCTIVE_MARK_DESCRIPTION, [one_sided_h, two_sided_h]), member(SHAPE_NAME, [Square]), FC > 1535.

The implicit knowledge in the machinability data record sets is elicited and explicitly formalised in two modes. First, the knowledge is represented as a set of classification trees that define different dependencies, for example between work material and tool material. While the original machinability data record sets cover only the experimentally defined tool–work material pairs, the set of decision trees covers a wider machinability domain. The classification accuracy is therefore lower, e.g., 0.80 for the relation tool–work material. However, the basis for automated tool selection is thus given. Second, more general groups (e.g., of work material, see Table 1, and of heat treatment) and wider ranges for continuous attributes (e.g., Hardness, Yield Strength) are defined for automated tool selection. Therefore, in operation planning more generalised groups of work material properties should be taken into account. The experiments confirm that the proposed methodology is appropriate for automatic prediction of proper machining data for untested combinations of workpiece and tool materials. Several conclusions were drawn from the experiments, which were also

used in generating machinability rules for use in automated operation planning.
(1) Material number is inapplicable for automated operation planning. This is the reason why a generalised description was used. The learning system generated 7 new material groups out of the material groups presented in the learning examples, i.e., from the machinability database (Table 1).
(2) Analogously, 9 groups of Heat Treatment values were generated by the evaluated classification trees.
(3) In the case of continuous attributes (hardness, yield strength) the classes (ranges) have been defined by the system.
(4) The pre-machining of a workpiece is not explicitly significant for tool and cutting geometry selection and cutting condition determination.

Table 3
Machinability rules

rule(2, mach, 'KC11', <, [1780.5], 254, 864.500000, Tempered, 1).
rule(4, mach, 'APMIN', <, [1.55], 39, 137.399994, No heat treatment, 1).
rule(8, mach, 'TOOL_MATERIAL_GROUP', :, [HC,HM,P10,P25], 27, 70.860001, No heat treatment, 1).
rule(16, mach, 'VCMAX', <, [262.5], 11, 12.890000, No heat treatment, 1).
...
rule(63, mach, 'TOOL_MATERIAL_GROUP', :, [P10,P15], 5, 0.000000, Soft annealed, 0).


The classification trees were used for the development of the knowledge base module for tool material selection and determination of cutting parameters for use in operation planning. Each classification tree represents the dependency of one of the tool attributes or one of the cutting geometry attributes with reference to all work material attributes. All decision trees were pruned in order to achieve maximal classification accuracy. After that, each decision tree was represented as a set of standalone rules. The rules representing the dependencies of all tool attributes were grouped together, and among them all the rules having the same attribute value were merged. For example, in Table 2 two rules having wstcode(0.704) are merged. The same process was performed for the rules representing the dependencies of all cutting geometry attributes with reference to all work material attributes. The rules are expressed in an independent format (Table 3) and also in Prolog form (Table 2) in order to use them in an automated operation planning system. In terms of the methodology used, there is no difference in the system between the learning process from the reference machinability database and the knowledge refinement process that includes new machining data from daily workshop practice. However, it should be stressed that a human expert is still needed in the process of learning and discovering new knowledge.

6. Conclusions

In this paper an attempt to formalise the machinability knowledge contained in a machinability database is presented. The methodology of learning from examples was employed. It reveals that the relationships between workpiece, tool and cutting conditions can be described and evaluated by machine learning techniques. Thus, a vast source of manually oriented machining information can be used effectively in a computerised manner. The synthesised classification trees represent a compact form of machinability knowledge. The compound classification trees can be used as decision drivers in the automated tool selection and cutting parameter determination process. The approach used here thus provides a contribution to the integrated automation of manufacturing systems.


The developed system also enables the determination of guidelines indicating which machining experiments still have to be performed in order to fill in the missing data and to refine the knowledge base. Thus, the lack of data can be minimised and many cost-intensive experiments can be avoided.

References

[1] L. Breiman, J.H. Friedman, R.A. Olshen, C.J. Stone, Classification and Regression Trees, Wadsworth, Belmont, CA, 1984.
[2] N.B. Colding, The Adaptive Virtual Reality Aided Factory Management System—A Response to Industrial Requirements, Int. Colloquium on Flexible Manufacturing Systems—Perspectives for the 21st Century, Univ. of Ljubljana, Ljubljana, Slovenia, 1996, 25 pp.
[3] M.A. El Baradie, A fuzzy logic model for machining data selection, Int. J. Mach. Tools Manufact. 37 (9) (1997) 1353–1372.
[4] B. Everitt, Cluster Analysis, Halsted, New York, 1980.
[5] B. Filipič, M. Junkar, Inductive learning approach to the selection of tools in grinding process, 2nd Int. Conf. on Artificial Intelligence Applications, Cairo, Egypt, 1994, pp. 513–520.
[6] K. Florek et al., Sur la liaison et la division des points d'un ensemble fini, Colloquium Mathematicum 2 (1951) 282–285.
[7] J.A. Hartigan, Clustering Algorithms, Wiley, New York, 1975.
[8] IAMS, Machining Data Handbook, Metcut Research Associates, 1980, ISBN 0936974001.
[9] INFOS, Service fuer Schnittwerte—Richtwertempfehlungen fuer die Drehbearbeitung, Exapt Verein, Aachen, Germany, 1988.
[10] M. Jermol, Integration of Generative CAD/CAPP into Flexible Manufacturing System Lakos 250, M.Sc. Thesis, University of Ljubljana, Slovenia, 1996, 122 pp. (in Slovene).
[11] M. Jermol, D. Zupanič, D. Mladenić, Application of machine learning systems in material pair selection and cutting values determination in turning process, Proceedings of the 5th Electrotechnical and Computer Science Conference ERK'96, Vol. B, Portorož, Slovenia, 1996, pp. 179–182 (in Slovene).
[12] L.L. McQuitty, Hierarchical linkage analysis for the isolation of types, Educ. Psychol. Measur. 20 (1960) 55–67.
[13] T.M. Mitchell, Machine Learning, McGraw-Hill, 1997.
[14] D. Mladenić, Magnus Assistant: Technical Documentation, Report DP-6938, IJS, Ljubljana, Slovenia, 1994.
[15] D. Mladenić, The Learning System Magnus Assistant, BSc Thesis, University of Ljubljana, Slovenia, 1990 (in Slovene).
[16] L. Monostori, A. Markus, H. Van Brussel, E. Westkämper, Machine learning approaches to manufacturing, Annals of the CIRP 45 (2) (1996) 675–712.
[17] M.-W. Park, H.-M. Rho, B.-T. Park, Generation of modified cutting conditions using neural network for an operation planning system, Annals of the CIRP 45 (1) (1996) 475–478.



[18] J.R. Quinlan, Induction of decision trees, Machine Learning 1 (1986) 81–106.
[19] A. Sluga, M. Jermol, An approach to machinability analysis with machine learning techniques, 2nd World Congress on Intelligent Manufacturing Processes and Systems, Springer, Budapest, Hungary, 1997, pp. 54–59.
[20] G. Schulz, D. Fichtner, A. Nestler, J. Hoffmann, An intelligent tool for determination of cutting values based on neural networks, 2nd World Congress on Intelligent Manufacturing Processes and Systems, Springer, Budapest, Hungary, 1997, pp. 66–71.
[21] S-Plus, S-Plus for Windows, Version 3.2 Release 1, MathSoft, S-Plus + AT&T, 1994.
[22] J.H. Ward, Hierarchical grouping to optimize an objective function, JASA 58 (1963) 236–244.

Alojzij Sluga received his BSc and MSc in Production Engineering from the Faculty of Mechanical Engineering, University of Ljubljana. He obtained his PhD degree in Technical Science in 1987 at the University of Ljubljana. He is presently an associate professor at the Department of Control and Manufacturing Systems at the Faculty of Mechanical Engineering, University of Ljubljana. He has worked with different industries in the areas of manufacturing technology design, CAD/CAM, and design and implementation of information technology. His current research interests include manufacturing systems, computer integrated manufacturing, enterprise modelling and quality technology. He has published 27 research papers.

Mitja Jermol received his MSc in Production Engineering from the Faculty of Mechanical Engineering, University of Ljubljana. He has worked with different industries in the areas of manufacturing technology design, CAD/CAM, and design and implementation of information technology. He is presently the head of the research laboratory for multimedia and informatics in education at DZS, Educational Publishing Division, Ljubljana. His current interests are the development of information systems to support the publishing process, and the development and application of new models of learning and machine learning in distance and flexible learning.

Darko Zupanič is a member of the Department of Intelligent Systems, Jozef Stefan Institute, Ljubljana, Slovenia. In 1997 he received his MSc in Computer Science at the University of Ljubljana, Slovenia. Currently, he is a PhD student. His main research interests are in the areas of Constraint Logic Programming and Machine Learning, both from the field of Artificial Intelligence.

Dunja Mladenić is a final-year PhD student in Computer Science at the Department of Intelligent Systems, Jozef Stefan Institute, Ljubljana, Slovenia. She graduated in Computer Science at the Faculty of Computer and Information Science, University of Ljubljana, and continued there as a PhD student focusing on Artificial Intelligence. Most of her research is connected with the study and development of machine learning techniques and their applications to real-world problems from different areas, e.g., medicine, manufacturing, pharmacology and economy. In 1996/97 she spent a year doing research in the learning laboratory at Carnegie Mellon University, Pittsburgh, PA, USA. Her current research focuses on using machine learning in data analysis, with particular interest in learning from text applied to World Wide Web documents and intelligent agents.