An expert system approach to quality control

Expert Systems with Applications PERGAMON

Expert Systems with Applications 18 (2000) 133–151 www.elsevier.com/locate/eswa

E.P. Paladini*

Departamento de Engenharia de Produção e Sistemas, Universidade Federal de Santa Catarina, CP 476, Trindade, 88010-970, Florianópolis, SC, Brazil

Abstract

This paper describes the basic actions proposed for quality inspection. These actions involve structuring a Decision Supporting Expert System (DSES) to help with the decisions related to the preliminary activities of inspection development, most of them concerning the need or convenience of carrying out the inspection itself. Once the opportunity to carry it out is established, the expert system helps the user select the type of inspection to adopt from amongst: (1) automatic or sensorial inspection; (2) inspection by samples or complete inspection; (3) acceptance or rectifying inspection and, in the most relevant module, (4) inspection by attributes or by variables. The complementary documentation of the DSES contains the directions to operate it, the rules and qualifiers that make up the system, and the results achieved through its experimental implementation. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords: Expert system; Quality control; Total quality management

1. Introduction

Having the current concept of quality in mind (and the context of Total Quality Management itself), the basic quality control actions are nowadays carried out within a well-defined structure known as the quality evaluation system. Inspection, in turn, is the most important activity in the quality evaluation system of an industrial process. When correctly developed, inspection makes it possible to carry out a precise analysis of how the process operates, and it serves as a basis for a set of decisions that directly affect the process, such as the corrective and preventive actions which must be taken in order to guarantee acceptable quality levels. Quality inspection has a number of effective techniques with a wide range of applicability. Since several techniques are available, there are many options for those who intend to devise an inspection process. Nevertheless, this abundance poses difficulties as to the correct use of the different techniques, since most of them have their own utilization particularities and yield results valid only within certain contexts. The need to choose the most adequate technique for each situation shows that it is necessary to organize the information related to quality inspection, or else the whole quality evaluation process could be seriously compromised.

* Tel.: +55-48-331-7000; fax: +55-48-331-7075. E-mail address: [email protected] (E.P. Paladini).

The present paper highlights this question, which is

considered to be a relevant restriction to the proper use of quality evaluation. A more efficient way of optimizing the development of quality evaluation is proposed. We dedicate special attention to quality inspection by attributes, a much more difficult area when it comes to making correct decisions about the quality evaluation of services, products and processes. The literature about quality inspection is plentiful and easily accessible. More often than not, classical texts on quality control deal with this subject by focusing on procedures of sampling for acceptance, whereas rectifying inspection is only rarely analyzed. The development of the theoretical models of the Operating Characteristic (OC) curve, associated with basic concepts such as Acceptable Quality Level (AQL), Lot Tolerance Percent Defective (LTPD) and producer's and consumer's risks, is always presented as a means of motivating the analysis of sampling schemes or sampling plans and of introducing the procedures which aim at structuring such plans and making their performance analysis feasible, mostly in terms of reliability and costs. Tables suggesting models of sampling plans are just as relevant and appear in almost all books in the area. The following authors can be cited as classical in this field: Besterfield (1990), Charboneau and Webster (1997), Juran (1999), and Montgomery (1998), to name a few. Texts by renowned authors such as Dodge and Deming can be found in more specialized journals. A collection of articles by Dodge about inspection by attributes was made available by the American publication Journal of Quality Technology in 1977 (see Dodge, 1977). As for the work of Deming, articles on this subject written by him can

0957-4174/99/$ - see front matter © 2000 Elsevier Science Ltd. All rights reserved. PII: S0957-4174(99)00059-7


be found in the same journal, published in 1985 (Papadakis, 1985). The types of inspection are manifold, each of them bearing its own particularities, specific purposes and restricted adequacy to certain contexts. Such is the case of sampling inspection versus complete inspection, as well as inspection by attributes versus by variables, for instance. The choice of the correct type of inspection is extremely relevant for quality evaluation, since the adequacy of the inspection process to the context where it takes place has been considered in many situations and its importance has always been stressed. Let us consider the inspection of raw materials as an example. An incorrect development of this type of inspection can give suppliers the impression that quality is not important to their customers, and of course the suppliers will then not regard it as relevant either. Such situations, which can be seen in practice, are discussed in several texts. Horsnell (1988), for example, draws attention to this fact by showing that the use of inspection by attributes when products enter the factory is of such usefulness that it transcends inspection proper, in addition to being an easily applicable technique. In a broader analysis, one can see that the quality of the process can be significantly (and positively) affected by the mere inclusion of inspection in the process (this was observed many years ago; see, for instance, Whittle, 1964). Quality inspection is widely used nowadays. Nonetheless, several problems have been encountered in the use of inspection systems, and such problems derive for the most part from an inadequate application of the prescribed methodology. This inadequacy results from decisions which ought to be made as regards the way the inspection is to be carried out. In practice, such decisions are often made intuitively; however, they deserve detailed investigation.
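The sampling-plan concepts cited above (the OC curve, AQL, LTPD, and producer's and consumer's risks) can be made concrete with a short sketch. This is not from the paper: it assumes a hypothetical single sampling plan by attributes with sample size n and acceptance number c, for which the probability of accepting a lot is the binomial CDF at c.

```python
# Sketch of OC-curve calculations for a single sampling plan by
# attributes (plan parameters are illustrative, not from the paper).
from math import comb

def prob_accept(p: float, n: int, c: int) -> float:
    """P(accept lot) = P(X <= c), where X ~ Binomial(n, p) and p is
    the true fraction defective of the lot."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: inspect n = 50 items, accept if at most c = 2 defective.
n, c = 50, 2
aql, ltpd = 0.01, 0.10                        # illustrative AQL and LTPD
producer_risk = 1 - prob_accept(aql, n, c)    # alpha: good lots rejected
consumer_risk = prob_accept(ltpd, n, c)       # beta: bad lots accepted
```

Evaluating `prob_accept` over a range of p values traces the OC curve of the plan; the two risks are simply the curve read at the AQL and LTPD points.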
We understand that the lack of an adequate methodology for planning the decision-making process is a fundamental constraint on the use of quality inspection. This situation justifies the elaboration of a general decision-making support system. Due to its peculiarities, the system was organized in the format of an expert system, which benefits from the advantages of artificial intelligence (AI) in carrying out an evaluation. It should be noted that, since inspection is the most important part of quality evaluation, the expert system proposed here plays a fundamental role in the Total Quality Management area. This paper thus describes a process of interaction between users and computers in industrial companies. The process becomes feasible through the development and application of an expert system related to the area of quality control. This paper describes the expert system developed, presents its operating methodology and lists the results obtained after its application in the companies we have studied. The interaction between the human resources

of the organizations and the computational resources has been intense in recent years. With the advent of AI, this interaction has acquired new, rather specific characteristics.

2. Artificial intelligence applied to quality evaluation

A brief review of the specific technical literature reveals many cases of successful application of AI techniques to the quality evaluation area. Such applications were helpful in the resolution of relevant problems. Below are some examples:

(a) Hosni and Elshennavy (1988) reported the development of a knowledge-based quality control system, suitable for specific procedures of inspection by variables and for the selection of control graphs, both by attributes and by variables.
(b) Eyada (1990) developed an expert system aimed at auditing procedures of Quality Assurance involving both suppliers and products in the process. Some years earlier, in the same field, Gipe and Jasinski (1986) conducted an analysis of the adequacy of expert systems for Quality Assurance problems, showing the feasibility of this application. Another interesting expert system was developed by Crawford and Eyada (1989) with the purpose of planning the allocation of resources for the Quality Assurance program.
(c) There are several expert systems used to select control graphs (see Alexander & Jagannathan, 1986; Dagli, 1990; Dagli & Stacey, 1988). An expert system designed for process control at a general level was developed by Moore (1995).
(d) Evans and Lindsay (1987) reported the development of an expert system for Statistical Quality Control, which not only selects control graphs, but also offers interpretations of such graphs and provides conclusions about the control status of a process.
(e) Brink and Mahalingam (1990) developed an expert system which evaluates quality at the manufacturing level, so as to detect and correct defects occurring during the productive process.
(f) Pfeifer (1989) reported the development of an expert system to detect defects during the productive process and described its successful application in Germany.
(g) An expert system which makes use of pattern recognition in order to 'visualize' pieces under inspection was developed in 1989 in the United States (see Ntuen, Park & Kim, 1989). This system, called KIMS, was successfully tested in a variety of experiments (one of them is reported by Ntuen, Park & Sohn, 1990, where the performance of KIMS in image recognition tasks is assessed).
(h) Fard and Sabuncuoglu (1990) developed an expert system which seeks to select sampling by attributes, determining which type of sampling is the most adequate for each case: simple, double or multiple. A project of a

simple sampling plan by attributes based on Fuzzy Set Theory was also proposed (see Kanagawa & Ohta, 1990).
(i) Lee, Phadke and Keny (1989) reported the development of an expert system for evaluating quality inspection.

Let us first observe the feasibility and convenience of this application. Indeed, the quality evaluation problem has rather particular characteristics. One can see at first that it is a decision problem faced by humans in certain situations. In the case of quality inspection by attributes, for instance, because operators do not have a specific mechanism capable of measuring a given characteristic, the decision agent seeks to identify peculiarities in the situation being studied which allow him/her to compare what the operator sees with a given pattern. This is a situation where specific analytical methodologies are deployed, based on the typical procedures of a quality inspector. In a situation like this, to be useful to the decision-making process, a computational device must act in a fashion absolutely similar to that of an inspector. Hence the justification for applying AI techniques to this case. Such adequacy already seems evident in the list of applications above. Indeed, the fact that there are several applications in the field of quality further validates this hypothesis, although no record was found, neither in the literature consulted nor in the personal contacts made, of an application aimed specifically at the process of decisions about quality evaluation as developed here. It is also worth analyzing the effectiveness of treating the problem with AI tools. Considering both the objectivity in tackling the question and the reliability of the process, one can see that the application of AI can be extremely useful in obtaining more reliable and precise information as regards the evaluation of the product. Additionally, the process of obtaining such information is sped up.
In a cost–benefit analysis, these aspects can offset the costs that the implementation of the system might incur. Finally, attention should be drawn to the fact that the knowledge required to solve the problem is available and structured with a reasonable degree of organization. The innovative aspect of the proposal is less centered on the generation of definitely new knowledge than on the practical application of certain tools to a specific situation.

3. Structure of an expert system for quality evaluation

Broadly speaking, the application of expert systems to quality evaluation aims at defining the nature of the quality inspection process that ought to be used in each case, adjusting the available techniques to the process being studied. Thus, the system should advise its users to make decisions compatible with their specific application. Such decisions should be logically organized and,

therefore, we propose that the system be divided into modules which go through organized and well-defined phases, in an evolutionary decision-making process. Thus, the system is divided into two general phases:

Phase 1: involves the decision on (1) whether or not to execute the inspection. If the decision to execute the inspection is made, one checks the characteristics it must possess: whether (2) automatic or sensorial, (3) by sampling or complete, (4) by acceptance or rectifying.

Phase 2: once the need or convenience of executing the inspection (module 1) is defined and the way it should be developed (modules 2, 3 and 4) is determined, the next step is the most relevant decision: to opt for (5) inspection by attributes or by variables.

Before the inspection system is analyzed, we propose that one consider whether or not pre-control activities should be developed. This is module zero of the system, because it comprises decisions related to pre-inspection actions. It is then possible to study the other questions connected with inspection execution and the ways of carrying it out. Each decision was related to a module of the expert system, which we now set out to describe. Each module lists: (1) the objectives of the basic decision to be made; (2) the theoretical background underlying the decision, including the concepts that back up the analysis to be carried out; (3) the general characteristics of the module and (4) its structure. It is easy to see, then, that the DSES we describe here is a structure with six expert systems inside it. We call these six expert systems "modules". Each one has general and specific characteristics. The general characteristics are the same for all of them and are described next; the specific characteristics are described in the presentation of each module.

3.1. General characteristics of the modules

Each module (in reality an expert system) has the following general characteristics:

1. It is an expert system based on rules.
2. The system can list all the qualifiers, as well as the rules where they are used.
3. The system can show all the rules where the choices were used. In this case, the choices appear in all the rules used for making the decision.
4. The measurement units are integer numbers ranging from 0 to 10.
5. The adequacy of a choice becomes evident when values close to 10 are attached to it; its inadequacy, on the other hand, is made clear by values close to 0.
6. All the possible rules are used in data derivation to get to the best choice.
7. The system does not show the rules while they are being used; however, this option can be changed.
8. Most of the rules have bibliographical references which provide them with a theoretical background.

9. Some rules are also equipped with explanatory notes on their formulation or the concepts underlying them.
10. The module structure always includes some basic areas, which have to do with analyses related to specific topics of that module.

We now set out to describe each module and its specific characteristics.

3.2. Pre-inspection actions

Before deciding how the inspection is to be executed, and even whether it should be carried out at all, we suggest an analysis of the opportunity, convenience or even necessity of introducing pre-control actions, whose concepts and characteristics are described in the presentation of the DSES module that analyses the question.

3.2.1. Decision supporting expert system. Module 0: execution (or not) of pre-control—PREC subsystem

3.2.1.1. Objectives of the module

This module of the decision supporting expert system (DSES—Module 0) determines whether it is necessary to establish pre-control procedures before the quality inspection is effectively structured and developed.

3.2.1.2. Theoretical background

The basic idea grounding the establishment of inspection at factory operator level stems from the general axiom of quality, according to which quality should be produced and not only controlled. This means, in truth, that the ultimate goal is to build quality into the product, instead of simply trying to obtain it from intensive control, tests or close monitoring of the process by highly competent inspectors. In a process which seeks quality production, it seems reasonable to assign the workers who manufacture the product the task of inspecting it. In operational terms, this means that the first control of a product is carried out by those who manufacture it. Inspection at operation level is executed during the normal development of the productive process by the operators themselves. Although it is an inspection in its conventional form, i.e. a process which aims at determining whether a piece, sample or lot complies with certain quality specifications, it seems plausible to assume that it will take on particular characteristics, due to the context where the inspection is conducted. Firstly, it is worth mentioning that this type of inspection is only justified if it can be used to improve the effectiveness of the evaluation process as a whole (Simmons, 1990, p. 163). Thus, its execution is expected to affect the average quality level of the process without incurring high costs. In order to guarantee the effectiveness and reliability of the inspection, it is necessary to give the inspection

agents agility and authority, so that they can alter the productive process as soon as a problem has been detected. It seems equally relevant to observe the production volumes with which the process usually operates. In the case of a high volume, a more efficient action could be to allocate the operator to a simple inspection, rather than having an inspector take care of several operations, many of them of distinct natures. It may be that the greatest benefits of inspection at operator level are mostly centered on responsibility transfer. In this case, the effort towards quality and the duty of quality become part of the operator's responsibility too. Thus, the notion that the production of defective items is acceptable because the inspection will detect them later on is no longer tenable. The result is the direct participation of the operators in the quest for quality. Like conventional inspection, inspection at operator level requires adequate and well-defined planning. There are, evidently, differences between operators and quality inspectors in terms of the ability to carry out an inspection. Such differences ought to be taken into account in quality control planning. Pre-control is the preliminary inspection of the pieces produced, carried out by the workers themselves during the development of the process in the production line. It is, therefore, a type of inspection at operation level. This activity excludes neither conventional control nor the inspection carried out by the quality control team. It is simply a preliminary evaluation of the process, performed by the team operating the process itself. Pre-control allows operators to alter the working conditions of the process, thus speeding up the preventive and corrective actions which aim at keeping the quality level stable. In this sense, pre-control is always useful when the operator has both the competence and the authority to make such changes. Otherwise, it is dispensable.
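The go/no-go conditions for pre-control discussed in this section (operator competence and authority, plus the time, resources and statistical control of the process also required in the text) can be sketched as a simple gate. All names here are hypothetical illustrations, not part of the actual PREC subsystem.

```python
# Hedged sketch of the pre-control viability conditions named in this
# section; parameter names are illustrative, not from the paper.
def precontrol_worthwhile(operator_competent: bool,
                          operator_has_authority: bool,
                          operator_has_time_and_resources: bool,
                          process_in_statistical_control: bool) -> bool:
    # Pre-control is dispensable as soon as any condition fails.
    return (operator_competent
            and operator_has_authority
            and operator_has_time_and_resources
            and process_in_statistical_control)

# E.g. an able operator who is not allowed to adjust the equipment:
precontrol_worthwhile(True, False, True, True)
```

In the real DSES these factors are not a hard conjunction but evidence weighed by rules on a 0-10 scale; the gate above only illustrates the qualitative logic of the prose.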
In case the operator, for example, does not know how or is not allowed to adjust the equipment he/she operates, there is no reason to assign him/her this share of responsibility for the quality control levels of the process. Pre-control is typically an evaluation by attributes. Indeed, should it require sophisticated inspection techniques or rely on several other elements, such as technical support, the use of laboratories, complex measurements, etc., its execution becomes unfeasible. In this case, pre-control is rendered dispensable and formal inspection is regarded as more effective, since it possesses all of the resources above. Like inspection, pre-control requires studies of process capability, which are certainly part of any control program. It is worth pointing out that if the process is out of statistical control, pre-control will lack the technical conditions required to be carried out. In short, pre-control becomes dispensable if it is considered inefficient as a tool for quality evaluation and process monitoring. This will be the case whenever it yields results of little significance (e.g. if the process is out of

control or the operator does not know it well) or if its execution is complicated. It becomes unfeasible if the operator lacks the time, ability or resources to execute it properly. Pre-control execution can thus be justified or not, depending on a number of factors. An analysis of such factors is therefore extremely relevant for quality evaluation as a whole, especially if we consider evaluation by attributes, the basic pre-control method. Hence the need to structure a module of the DSES to be activated before the inspection is started, one that makes it possible to determine whether the introduction of pre-control procedures in the factory is useful, necessary or convenient. This is the objective of this module.

3.2.1.3. Module characteristics

In addition to the ten general characteristics listed, this module has the following specifications:

(a) Number of rules: 55.
(b) Number of qualifiers: 17.
(c) Choices: 2.
(d) Decision of the system: pre-control is recommended or dispensable.
(e) Example of a rule (Rule 13):
IF: the operator does not interfere with the process,
THEN: pre-control is recommended—probability: 3/10; pre-control is dispensable—probability: 7/10.
(f) Example of a qualifier (Qualifier 13): The operator
1. is well acquainted with the process;
2. is reasonably acquainted with the process;
3. is little acquainted with the process.

3.2.1.4. Module structure

The module comprises four basic areas, which have to do with analyses related to the relevance and nature of the inspection, the defect detection process, the operators' action upon the process and the production process proper. In broad terms, each area comprises the following aspects, amongst others:

(A) As for the nature and relevance of the inspection:
1.1. Role played by the inspection in the quality of the process;
1.2. Type of inspection usually deployed;
1.3. Results expected from the inspection.
(B) As for the defect detection process:
2.1. Sensorial evaluation of quality;
2.2. Tests required for defect detection;
2.3. Reliability of the defect detection tests;
2.4. Resources necessary for carrying out the tests.
(C) As for the operator's action upon the process:
3.1. Operator's qualification and competence;
3.2. Operator's scope of action and authority over the productive processes;

3.3. Operator's usual working conditions;
3.4. Operator's experience in that particular process.
(D) As for the productive process proper:
4.1. Production levels (quantities usually produced);
4.2. Levels of capability of the process;
4.3. Usual conditions for process operation.

3.3. Inspection system

From now on, we set out to describe the modules which comprise the basic decisions regarding the necessity, convenience or opportunity of inspection execution, as well as the way it is to be developed.

3.3.1. Decision supporting expert system. Module 1: execution (or not) of inspection—EXEC subsystem

3.3.1.1. Objectives of the module

The evaluation of industrial systems operations is developed through procedures related to quality inspection, conducted at the level of processes and products. Quality inspection is a process that seeks to identify whether a part, piece, sample or lot meets certain quality specifications. In this way, the inspection evaluates the quality level of a piece, comparing it with a pre-established standard. What exactly does inspection perform? We can say that inspection essentially tries to provide a diagnosis of the product in terms of its quality level. Such a diagnosis is always focused on the quality characteristics, which are each and every one of the elementary properties the product must possess in order to function in complete fulfillment of its design and the final purpose for which it is destined. The inspection goes through several steps, and there are important decisions to make even before inspecting. In fact, there is a critical decision related to the choice between inspecting or not (a piece or a sample, for instance). This is the objective of this module. Thus, we propose an interactive process to help the decision agent select the better option.

3.3.1.2. Theoretical background

Inspection plays an important role in quality evaluation. Today, the emphasis is on integrating inspection in the most appropriate way into the widest possible context, for example, the statistical control of processes or the Quality Control System. It is thus noted that the inspection process has special relevance in any quality evaluation methodology. This importance justifies the efforts made to adapt the inspection to the needs of each evaluation process considered. In this way, one can realize that inspection is not an end in itself, but rather a means of reaching specific objectives, expressed by a given general philosophy of evaluation or a control process. We can thus conclude that inspection is useful only if it is integrated into a wider system, if its results effectively contribute to a more encompassing evaluation of

the process and, above all, if such results supply the basis for specific actions, appropriate to the system of quality control being utilized. This paper deals with quality evaluation, a very important activity in production processes. The tasks related to one important decision are emphasized: the best option to adopt in a given situation where it becomes necessary to define whether an inspection must be executed or not. Having these considerations in mind, it should be noted that at certain points of the productive process, for certain parts, or even in specific situations, the execution of an inspection is effectively justified. This statement derives from what composes an inspection: fundamentally, a diagnosis of the process, with detection of defects, identification of situations of non-compliance with certain patterns, analysis of cases where basic operational requirements were not met and, finally, specific evaluations of the product quality characteristics in the various manufacturing phases. We can therefore state that the execution of an inspection is justified if it is placed within a broader process, in which it fulfills only the role reserved for it: that of a simple support activity. Its appropriateness to the control strategies or to the methodologies of process evaluation will then be essential to determine whether it should or should not be executed. It is commonly said that it is not easy to identify the characteristics that justify (or do not justify) the execution of an inspection. Since it is an interactive decision, the need arises for the construction of a decision-making support expert system that allows the determination of the best option between (1) adopting inspection in a given situation where it becomes necessary or (2) not executing inspection. This is the objective of this system, which compares the benefits of and restrictions to the execution of an inspection at a specific point of the process and defines which position to adopt. The main characteristics of an expert system like the one described here are the number of rules; the number of qualifiers; the choices; the decisions of the system; the scale of values; the information about the use of the rules; examples of rules and qualifiers; the notes and the references. In terms of its use, we can see that the application of expert systems to quality evaluation seeks to define the nature of the quality inspection process that should be utilized in each case, making it appropriate to the process as a whole. The system should direct its users to make decisions compatible with their specific application. These decisions must be logically organized and, for this reason, we propose the division of the system into modules that perform organized and well-defined phases, in a progressive decision-making process. The system is divided into two general phases. Phase one involves deciding whether or not inspection should be conducted. If the decision to perform an inspection is made, it is

necessary to check which attributes it should have: whether automatic or sensorial, by sample or complete, by acceptance or rectifying. In phase two, once the necessity or convenience of executing the inspection is defined and the form of developing it is determined, we move on to the next relevant decision: inspection by attributes or by variables. The first module of the first phase is the one where the decision about the necessity or convenience of executing inspection has to be made. This paper refers to this module, in reality a complete expert system. The system is composed of five basic areas that involve analyses relative to the nature of the inspection, the product, the process and the lots, as well as the analysis of the quality level of the process. This is an adaptive and learning system, as can be seen in the system described here. In general terms, each area involves aspects (among others) (a) concerning the nature of the inspection; (b) the nature of the product; (c) the nature of the process; (d) the nature of the lots and (e) the quality level. We detail each one of these areas. The expert system has rather special characteristics, which make possible interesting analyses of the interaction between man and machine, the object of this paper. In fact, it deals with logically organized and connected decisions, which determine the necessity of actions identically organized by people; it works with decisions that affect the development of the assembly line, which determine alterations in the behavior of these very people; the decisions are dynamic and progressive, which determines the necessity of adopting behavior standards that demand continued improvement; and, finally, the area of performance of the expert system is precisely quality evaluation, which refers to the results of the activities developed by people, determining the necessity of evaluating their own performance.

3.3.1.3. Module characteristics

It should be noted that the use of expert systems provides the basis for evaluating the interaction between man and machine; the follow-up of experimental groups and the employment of fuzzy logic to evaluate the behavior of the personnel involved on an objective basis are the elements which complement the methodology utilized here. The expert system deals with the basic decisions concerning the necessity, convenience or opportunity of executing the inspection, as well as how to develop it. This is a typical DSES. The expert system determines the most appropriate choice. In addition to the ten general characteristics listed earlier, this module has the following specifications:

(a) The expert system has 66 rules and 30 qualifiers.
(b) The decision of the expert system refers to executing or not the inspection (choices).
(c) The system determines its choice (inspecting or not inspecting).

E.P. Paladini / Expert Systems with Applications 18 (2000) 133–151

(d) In order to decide on the best choice between inspecting or not inspecting, the system uses a set of rules.
(e) An example of a rule the system uses:
IF: A defect on the product is dangerous to the user.
THEN: Inspection must be done—probability: 9/10.
Inspection should not be done—probability: 1/10. (Rule 15)
(f) The user interacts with the system by selecting an option from the qualifiers that the system shows.
(g) An example of a qualifier shown to the user:
The defect of the product
1. is very easily detected;
2. is easily detected;
3. is difficult to be seen;
4. is almost impossible to be seen without special devices. (Qualifier 10)
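The rule-and-qualifier structure above can be sketched in code. The paper does not publish the system's internal representation, so the sketch below is purely illustrative: the names `RULES`, `QUALIFIERS` and `fire`, and the dictionary layout, are assumptions.

```python
# Hypothetical sketch of one rule and one qualifier from the module.
# The paper does not give the system's internal representation;
# names and structures here are illustrative assumptions.

RULES = {
    15: {  # Rule 15 from the paper
        "if": "a defect on the product is dangerous to the user",
        "then": {"inspect": 0.9, "do_not_inspect": 0.1},  # 9/10 and 1/10
    },
}

QUALIFIERS = {
    10: {  # Qualifier 10 from the paper
        "prompt": "The defect of the product",
        "options": [
            "is very easily detected",
            "is easily detected",
            "is difficult to be seen",
            "is almost impossible to be seen without special devices",
        ],
    },
}

def fire(rule_id, condition_holds):
    """Return the rule's choice probabilities when its IF-part holds."""
    return RULES[rule_id]["then"] if condition_holds else None
```

If the user's answers to the qualifiers make the IF-part true, `fire(15, True)` yields the probabilities 9/10 for inspecting and 1/10 for not inspecting, exactly as stated in Rule 15.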

3.3.1.4. Module structure

There are 25 main groups of qualifiers presented to the user. These groups cover all the areas used to structure the expert system. The groups are:

1. Product quality consistency.
2. Performance levels of the production process.
3. Information and/or product flows.
4. Relation between defect occurrence and phases of the production process.
5. Necessity or convenience to evaluate the quality of conformity of the products.
6. Economic justifications to evaluate the lots.
7. Necessity or convenience to evaluate the lots.
8. Probability of defect occurrence.
9. General purposes of the inspection.
10. Quality patterns definition and stability.
11. Characteristics of the product to be controlled.
12. Quality evaluation precision.
13. Inspection cost levels related to the importance of the piece.
14. Necessity or convenience to classify the pieces.
15. Inspection consequences on specific phases of the process.
16. Costs of rejections.
17. Relation between process phases and probability of defect occurrence.
18. Types of tests to execute the inspection.
19. Useful information about defect correction or prevention.
20. Statistical process control.
21. Efficacy levels of the inspection.
22. Control actions related to defective pieces.
23. Necessity or convenience to get information about the lots.
24. Consequences of defective product rejection.
25. Capability levels of the production process.

All these groups have specific rules and qualifiers.


3.3.2. Decision supporting expert system. Module 2: automatic or manual inspection—AUMA subsystem

3.3.2.1. Objectives of the module

The main purpose of the analysis performed here is to determine the most adequate choice for a specific situation, in the case of a decision between quality inspection (the evaluation method considered here) carried out manually, by inspectors, and quality inspection conducted automatically, i.e. developed by special devices.

Quality inspection is the process that seeks to determine whether a piece, sample or lot complies with certain quality specifications (Paladini, 1999). Thus, the inspection evaluates the quality level of one or several pieces, comparing them with a set of predefined patterns. The inspection provides a diagnosis of industrial products in terms of their quality level. Such a diagnosis is always centered on the quality characteristics, i.e. each of the elementary properties that the product should have so as to guarantee its satisfactory operation, in compliance with both the product design and purpose.

3.3.2.2. Theoretical background

Quality evaluation is an important element of the production of goods and services. In fact, to analyze the work we have done is a purpose as old as work itself, and the efforts to define the best way to carry out work evaluations have been notable through time. In the specific case of the quality of products, services and processes, inspection is the most common evaluation method in use. Today, inspection is considered an "old-fashioned" procedure in the context of Total Quality Management. This position is justified by the fact that many quality systems use inspection as the only element for quality production, which is obviously not correct.
It is necessary to realize, however, that when appropriately applied to Total Quality processes, inspection plays an extremely important role, since it provides the basis for evaluating the production operations. Inspection is thus just one of the elements of Quality Management, though possibly the most important one.

There are many agents that execute quality inspection. In the productive process as a whole, the most important inspection agent is the customer or final user of a product or service. During the operational phases of the productive process, the most usual inspection agent is the classic "quality inspector": the professional specialized in different ways of evaluating the quality level of products, parts of products or, more commonly, specific quality characteristics. Depending on the nature of the evaluation, however, the inspector can be replaced by some equipment or, more specifically, a device that applies a quite specific group of tests. Here we have the so-called automation of the inspection process.



In the first case—where the quality inspector is the inspection agent—we have manual, or human, inspection. In the second case, we have automatic inspection. There is a difficult decision to be made here between two completely different situations: (1) to maintain manual inspection, i.e. inspection performed by "human" inspectors in the traditional way, or (2) to introduce automatic inspection devices, where a variety of aspects should be considered. The main elements to consider in order to make this decision are, in principle, the following: (a) the nature of the inspection process; (b) the production process; (c) the decision process and (d) the inspectors that work in the quality evaluation process.

According to these four elements, manual inspection can have several advantages over automatic inspection; according to the same elements, automatic inspection can be better in terms of productivity, for instance, when compared with manual inspection. So it is necessary to consider each case carefully.

Manual inspection, compared with automatic inspection, adapts more easily and quickly, since it draws on human versatility, and it involves a wide and detailed judgment. Automated inspection focuses on specific aspects of a product's quality characteristics. Thus, the first type of inspection yields a "general" view of the product, while the second yields precise and specific information about a given quality characteristic. It is obvious that automated inspection is not subject to some basic restrictions of manual inspection, such as fatigue, tiredness, monotony, and the confusion induced by excessively repeated images. In other words, automatic inspection avoids the physical and psychological effects that the evaluation process can have on inspectors (Yager, 1984).
In terms of advantages and drawbacks, one type of inspection may be more adaptable than the other for each situation considered. This axiom is used to justify the efforts (and investments) made in the study of the particularities of each type of inspection, according to the specific situations for which the inspection is being requested. It is also necessary to take into account the practical experience that can be gained with the use of each inspection type. This information can be fundamental to determine effective characteristics of each evaluation type and their suitability to the case being considered. The use of one of the two inspection types can also be influenced by incidental situations the process may present at a given moment. Thus, for example, if the complete inspection of a lot (all the pieces of the lot must be inspected) is usually carried out, this inspection is associated with fatigue, monotony and inspectors’ boredom. In this situation, there is a clear indication of the opportunity for introducing automatic inspection. On the other hand, a strong market retraction determining significant investment reduction, can render the use of

automatic inspection completely unfeasible, since automatic inspections are almost always performed with the aid of expensive devices.

The urgency of making the quality evaluation process uniform, in order to meet specific customers' demands or to satisfy market sectors, may require the adoption of automatic inspection. The same may happen during the factory's production peaks, which demand the fastest and most efficient inspection execution. Another situation that calls for the same action is that of emergency changes in the working environment, which may create hostile conditions for the development of a more sophisticated evaluation model.

Peculiar situations of the factory, or of some production lines, should be taken into account when the inspection model is selected. In cases where quality patterns are well defined, for instance, automatic inspection offers a greater level of appropriateness than in cases where quality patterns are intuitive, approached in a subjective way, or simply not defined. By the same token, if visual inspection has been emphasized for quality evaluation in cases where small cracks or almost imperceptible stains are important for this evaluation, automated inspection seems to be far better and preferable to human inspection. Here, the visual perception of the machine is much more suitable than human sight; what is desired is an accurate vision system, which no human operator can have.

Finally, usual situations of the production process should likewise play an important part in the decision-making process.
For instance:

• production lines that generate big lots, whose manual inspection would take a long time and require many people;
• products whose evaluation criteria do not change over time (machines can be programmed to perform the inspection, since the criteria do not change frequently);
• inspections that always demand high concentration and physical or mental effort from the inspectors (it may be better to have an automatic device that does not require concentration but performs the task mechanically);
• extremely repetitive inspections (typical candidates for automation); and
• tiresome and tedious (but always important and necessary) decision-making activities.

As can be seen, there are many elements to be taken into account before deciding what type of inspection to use—whether manual or automatic quality evaluation.

3.3.2.3. Module characteristics

Considering the specific elements of each type of inspection, we have detected the need to structure a DSES that determines the best option to adopt in a given situation, when it becomes necessary to define the most suitable way of conducting the inspection of samples and lots. This is the objective of the present system, called "AUMA". It confronts manual inspection with the


inspection developed by automated devices for the situation under study. In addition to the ten general characteristics listed, this module has the following specifications:

(a) Number of rules: 81.
(b) Number of qualifiers: 28.
(c) Choices: 2.
(d) Decision of the system: automatic or manual inspection.
(e) Example of a rule:
IF: The process tends to be very repetitive,
THEN: Automatic inspection—probability: 8/10.
Manual inspection—probability: 2/10. (Rule 46)
(f) Example of a qualifier:
The process tends to generate:
1. Very big production lots.
2. Production lots of reasonable size—when compared to other processes of the factory.
3. Small production lots. (Qualifier 19)
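The paper does not state how AUMA combines the probabilities of all the rules that fire into its final decision. Averaging the per-choice probabilities, as sketched below, is one simple illustrative scheme, not the paper's actual method; the function name `decide` and the second fired rule are hypothetical.

```python
# Illustrative combination of several fired rules into a final choice.
# The paper does not specify AUMA's combination scheme; averaging the
# per-choice probabilities of the fired rules is one simple possibility.

def decide(fired_rules):
    """fired_rules: list of {choice: probability} dicts, one per fired rule."""
    totals = {}
    for probs in fired_rules:
        for choice, p in probs.items():
            totals[choice] = totals.get(choice, 0.0) + p
    scores = {c: t / len(fired_rules) for c, t in totals.items()}
    return max(scores, key=scores.get), scores

# Rule 46 (very repetitive process) plus one hypothetical second rule:
choice, scores = decide([
    {"automatic": 0.8, "manual": 0.2},  # Rule 46: 8/10 vs 2/10
    {"automatic": 0.6, "manual": 0.4},  # hypothetical fired rule
])
# choice == "automatic"
```

With these two rules, automatic inspection averages 0.7 against 0.3 for manual inspection, so the system would recommend automating.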

3.3.2.4. Module structure

The expert system AUMA comprises four basic areas, involving analyses of the nature of the inspection; the productive process; the decision process; and the inspectors that act in the quality evaluation. Each area involves the following aspects, among others:

(A) As regards the nature of the inspection:
1.1. Need or convenience of a complete inspection.
1.2. Characteristics of the inspection process.
1.3. Inspection type (continuous, complete or sampling, for instance).
1.4. Inspection uniformity in the different areas of the factory.
1.5. Speed of the inspection.
1.6. Speed of the process where inspection interferes.
1.7. Definition of patterns for the inspection.
1.8. Nature of the images used in the inspection process.
1.9. Nature, occurrence and frequency of inspection mistakes.
1.10. Area of the factory and of the product where the inspection is being carried out.
(B) As regards the productive process:
2.1. Demand for the quality evaluation.
2.2. Production process type.
2.3. Investment levels associated with the process.
2.4. Nature and size of the production batches.
2.5. Feedback level of the production system.
2.6. Usual forms of process control.
(C) As regards the decision process:
3.1. Decision approaches for the evaluation of quality.
3.2. Reliability of the decision (of the evaluation of quality).
3.3. Equipment for the decision-making.
3.4. Characteristics of the decision process (inspector's profile and working environment).


3.5. Operations involved in the decision process.
3.6. Inspection area affected by the decision process.
(D) As regards the inspectors:
4.1. Inspector availability for carrying out the quality evaluation.
4.2. Inspectors' qualification.

3.3.3. Decision supporting expert system. Module 3: inspection by sampling or complete—CEAM subsystem

3.3.3.1. Objectives of the module

This module of the DSES (Module 3) determines the most suitable choice in the case of a decision between quality evaluation developed by means of the inspection of the whole lot (complete, or a-hundred-percent, inspection) and quality evaluation characterized by the inspection of representative samples taken from the lot under investigation.

3.3.3.2. Theoretical background

Quality inspection is defined as a process which aims at determining whether a piece, sample or lot complies with certain quality specifications. In this way, the inspection assesses the quality level of a piece or a set of pieces by comparing it with a pre-established pattern. The inspection is essentially aimed at providing a diagnosis of the product in terms of its quality level. This diagnosis is always focused on the quality characteristics, i.e. each of the elementary properties that the product must possess in order to work in total compliance with its design and the objectives it was designed to fulfill.

There are, fundamentally, three basic classifications which can be used to organize didactically the different types of inspection. Firstly, the objective of the inspection must be taken into consideration. Here, the inspection can be classified into two categories: one carried out exclusively for acceptance (or rejection) of lots, and another executed with the purpose of correcting the quality levels of a given lot, altering its value. The former is inspection by acceptance; the latter, rectifying inspection. The second classification is based on the scope of the inspection. Here, the inspection may comprehend only a part of the lot, determined according to well-defined criteria, i.e.
a sample, or it can comprehend the whole lot, thereby characterizing the said a-hundred-percent or complete inspection. The third classification has to do with inspection execution. In this case, two types of inspection are used: either the evaluation of the quality characteristics is done by attributes or it is done by variables. This module (CEAM) of the DSES deals with the second classification—that regarding inspection scope. The ACRE module deals with the first classification (the one regarding



inspection objective) and the ATVA module deals with the third (inspection execution). These will be discussed ahead.

The inspection may be executed for the whole lot or for part of it. The first situation—the whole lot—typifies complete inspection, whereas the second characterizes inspection by sampling. Complete inspection, or a-hundred-percent inspection, is herein understood as the inspection of a whole lot at one time. Several complete inspections make up the total inspection. Often, this repetition becomes necessary precisely because of the restrictions to complete inspection caused by failures in the process of evaluating the pieces which make up the whole lot. This happens, in general, because the scrutiny of a whole lot often causes inspectors to show symptoms of fatigue, boredom or monotony. Such symptoms reduce performance in the different phases of inspection: (1) comparing a piece with a pattern, (2) detecting and identifying defects, (3) removing defective pieces from the rest of the lot and (4) judging piece conformance. In short, they compromise the execution of the inspection proper.

If, on the one hand, inspection by sampling does not suffer some of the restrictions found in complete inspection, on the other hand it incurs the risk of mistakes that lead to wrong decisions. Such are the cases where the sample determines the rejection of a lot which, in fact, has a satisfactory quality level, or the acceptance of a sample whose lot does not comply with the quality level required for the process. Inspection by sampling, thus, involves the risk of mistakes caused by an incorrect judgement of a sample or, more frequently, by the fact that the sample does not represent the lot adequately.

Each type of inspection has its own advantages. Complete inspection seems to impart more reliability to the evaluation, although it requires more resources and is more costly.
Inspection by sampling, on the other hand, tends to reduce costs. However, its reliability in terms of results demands care and attention, which render its execution more complicated. One can see, thus, that each of the two types of inspection has its own characteristics which, added to the peculiarities of the production process, allow the determination of factors which indicate a better adequacy of each of them to the situation in study. The complete inspection analysis comprises several aspects. Firstly, it is worth mentioning that in many cases it is totally unfeasible—for instance, an inspection that destroys the product. Additionally, the variety and the quantity of the products manufactured by a company make it impossible, for the most part, to carry out this type of inspection. There are also cases where the product consists of a continuous mass—as is the case of clay or coal—where it is not possible to define a unitary element of the product on which to perform the quality analysis for that product. In other situations there are restrictions of a physical nature related to the complexity of a given product, its diversity,

composition and structure, or related to production levels. Finally, there are limitations regarding the quantity of resources involved in processes of complete inspection and even the time spent in its execution. In such cases, the retention of lots for inspection purposes may delay the release of relevant parts of the product, which compromises the production planning and flow of the whole factory. Thus, if in addition to these restrictions we take into consideration that complete inspection does not fully guarantee the quality evaluation of the lot—precisely because of its practical difficulties—we will see that there are a number of points to ponder before choosing complete inspection.

Nonetheless, there are situations where complete inspection is necessary or at least convenient. Examples of such situations are:

• cases where the formation process of the lots is not known, which makes it impossible to get homogeneous lots from which representative samples can be taken;
• cases where the productive process is totally out of control, which likewise compromises the representativeness of the samples; and
• cases where the formation of samples is practically impossible due to the lack of clear-cut criteria and objectives thereto.

Complete inspection is also considered necessary in cases where the product, or any of its characteristics, is crucial and requires full attention and care, as in cases where the presence of defects may put the user's life or physical integrity in danger. Complete inspection can be convenient in situations where:

• inspection execution is simple (as with products such as lamps, which are evaluated by means of elementary tests to check whether the product works);
• inspection costs are low; and
• products are arranged in lay-outs that facilitate complete inspection, such as products passing by in a continuous flow and at reasonable speed over a conveyor belt.
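The cost side of this trade-off can be illustrated with the classic "all-or-none" break-even rule from the quality-control literature (it is not part of the paper): complete inspection is cheaper on average when the defect rate exceeds the ratio of the per-piece inspection cost to the cost of an escaped defective. The cost figures below are hypothetical.

```python
# Break-even sketch (classic "all-or-none" rule from the quality-control
# literature, not from the paper): complete inspection is cheaper on
# average when the defect rate p exceeds k1 / k2, where k1 is the cost
# of inspecting one piece and k2 the cost of an escaped defective.

def prefer_complete_inspection(p, k1, k2):
    """True when 100% inspection has the lower expected cost per piece."""
    return p > k1 / k2

# Hypothetical costs: 0.50 to inspect a piece, 20.00 per escaped defect.
# Break-even defect rate: 0.50 / 20.00 = 2.5%.
print(prefer_complete_inspection(0.05, 0.50, 20.00))  # True (5% > 2.5%)
print(prefer_complete_inspection(0.01, 0.50, 20.00))  # False
```

The rule captures, in one inequality, the economic arguments of this section: when defects are rare or cheap relative to inspection, complete inspection does not pay.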
Because it minimizes the disadvantages of complete inspection, inspection by sampling comes as a natural alternative for:

• products with high inspection costs or a high (fast) production rate, which determines the formation of big lots of high diversity;
• situations where it is difficult to define product units, or where the production process is under control;
• environments where there is full predictability of the effects of the causes acting upon the process; and
• cases where there is a proven relation between the lot and its formation method.

In short, inspection by sampling tends to reduce costs and time, and makes use of fewer resources than complete


inspection. But the reliability of the samples is closely connected to the way they are structured. Such reliability is defined by the degree of representativeness of the sample in relation to the lot, as well as by the inspector's action (which, to a certain extent, is largely facilitated by the fact that there are fewer pieces to evaluate). Inspection by sampling also makes it possible to determine the risk of a decision made on samples being extended to the entire lot. Such reliability is defined in the design phase of the sampling plans, a crucial question for this type of inspection, i.e. the definition of what part is to be taken from the lot in order to represent it in the quality evaluation process.

In view of the particularities observed, one realizes the need to structure a module of the DSES which determines the best option to adopt in a situation where it is necessary to define the most adequate way of evaluating the quality of a product based on its characteristics. This is the objective of this module, which compares the quality evaluation of the whole lot with that of representative parts of it, i.e. samples.

3.3.3.3. Module characteristics

In addition to the ten general characteristics previously listed, this module has the following specifications:

(a) Number of rules: 140.
(b) Number of qualifiers: 63.
(c) Choices: 2.
(d) Decision of the system: complete inspection or inspection by sampling.
(e) Example of a rule:
IF: The criterion for removing pieces from the lots ensures equal chances of removal to all the pieces,
THEN: Complete inspection (the whole lot)—probability: 2/10.
Inspection by sampling—probability: 8/10. (Rule 87)
(f) Example of a qualifier:
The handling of the product
1. may damage it;
2. does not affect its use;
3. does not cause it any damage. (Qualifier 24)

3.3.3.4. Module structure

This module consists of 11 basic areas which involve analyses related to the nature of the inspection, the defects, the product, the lots, the productive process, the quality level of the process, the nature of the risks, the inspectors' actions, the level of information available or to be gathered, the quality evaluation process and the characteristic being evaluated. In broad terms, each area comprises the following aspects, amongst others:

(A) As for the nature of the inspection:
1.1. Type of basic test for quality evaluation.
1.2. Inspection costs.


1.3. Consequences of the inspection for the product inspected.
1.4. Technical feasibility of the one-hundred-percent inspection.
1.5. General characteristics of the lots under inspection.
1.6. Production phases where the inspection is carried out.
1.7. Type of product where the inspection is carried out.
1.8. Type of characteristic where the inspection is carried out.
(B) As for the nature of the defects:
2.1. Characteristics of the occurrence of defects.
2.2. Consequences of undue approval of defective pieces.
2.3. Phase of the process where the defect can be detected.
2.4. Level of defect occurrence (percentage of defective pieces or frequency of defect occurrence).
2.5. Costs of undetected defects.
2.6. Relation between defect occurrence and lots or phases of the productive process.
2.7. Defect distribution.
(C) As for the nature of the product:
3.1. Type of product.
3.2. Items which make up the product.
3.3. Consequences of the handling of the product for its functional characteristics.
3.4. Representativeness of the parts of the product.
3.5. Utilization of the product (situations where the product itself is used or where it is used in other products).
3.6. Product performance history.
(D) As for the nature of the lots:
4.1. Composition criteria and lot formation processes.
4.2. Composition criteria and sample formation processes.
4.3. Lot size.
4.4. Number of lots.
4.5. Lot–sample relations.
4.6. Processes of determination of inferences about the samples.
4.7. Experimental lot formation.
(E) As for the nature of the process:
5.1. Areas to be inspected in the process.
5.2. Process reliability.
5.3. Statistical control of the process.
5.4. Costs of defective piece approval to the process.
5.5. Process improvement level as regards quality.
5.6. Adequacy of the inspection pace to the variations of the process.
(F) As for the quality levels of the process:
6.1. Piece input quality level.
6.2. Process average quality level.
6.3. Quality level consistency.
(G) Analysis of the nature of the risks:
7.1. Control of the risk of mistakes in the decision related to quality evaluation.
7.2. Risk estimate (both the producer's and the customer's).



7.3. Risk levels in effect.
7.4. Administration of the process under risk.
(H) As for the inspectors:
8.1. Availability of inspectors.
8.2. Inspectors' performance during the inspection activities.
8.3. Characteristics required from an inspector for inspection execution.
8.4. Nature and frequency of inspection mistakes.
(I) As for the level of information available or to be gathered for quality evaluation:
9.1. Interest in or convenience of gathering high levels of information on the lots.
9.2. Time spent to gather information on the lots.
(J) As for the quality evaluation process:
10.1. Number of lots being evaluated.
10.2. Costs of evaluation by sampling.
10.3. Planning, documenting and making the inspection by sampling feasible.
10.4. Accuracy levels required for quality evaluation.
(K) As for the characteristic being evaluated:
11.1. Relevance of the characteristic.
11.2. Usual level of defective pieces associated with that characteristic.
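The risks listed in area (G) correspond to the standard producer's and consumer's risks of acceptance sampling. As an illustration only (the plan parameters and quality levels below are hypothetical, not from the paper), both can be computed from the binomial operating-characteristic curve of a single sampling plan with sample size n and acceptance number c:

```python
# Producer's and consumer's risks for a single sampling plan (n, c),
# computed from the binomial operating-characteristic (OC) curve.
# The plan and quality levels below are hypothetical, not from the paper.
from math import comb

def accept_prob(p, n, c):
    """Probability of accepting a lot with defect rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

n, c = 50, 2                     # sample 50 pieces, accept if <= 2 defective
aql, ltpd = 0.01, 0.10           # acceptable / limiting quality levels
producer_risk = 1 - accept_prob(aql, n, c)  # risk of rejecting a good lot
consumer_risk = accept_prob(ltpd, n, c)     # risk of accepting a bad lot
# producer_risk is about 0.014; consumer_risk is about 0.11
```

These are precisely the two wrong decisions the text describes: rejecting a lot that in fact has a satisfactory quality level, and accepting a sample whose lot does not comply with the required level.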

3.3.4. Decision supporting expert system. Module 4: acceptance or rectifying inspection—ACRE subsystem

3.3.4.1. Objectives of the module

This module determines the most suitable choice in the case of a decision between quality inspection only for acceptance (or rejection) and quality inspection for lot rectification.

3.3.4.2. Theoretical background

It is worth pointing out that the decision here involves the purposes of the inspection, which can be sorted into two types: lot inspection exclusively for acceptance (or rejection), and inspection for correction, which upgrades the quality level of a given lot and therefore alters its value.

The first case consists of inspection for acceptance: the inspection is aimed only at detecting defective parts in a lot in order to determine whether the lot should be accepted in its entirety or rejected, considering maximum allowed numbers of defective parts. Thus, this type of inspection is limited to accepting or rejecting the lot based on the analysis of a sample taken from it. Acceptance implies releasing the lot for effective use in the factory; rejection means sending it back to its origin, i.e. returning the lot to the supplier.

The second case involves rectifying inspection. If we do not want to return the whole lot, we may carry out an

inspection aimed at replacing defective parts with perfect ones. In this case, we initially work on a sample of the lot. Each defective part found in the sample is replaced by a perfect part. If the number of defective pieces is lower than a given limit, the lot is accepted and released for use; only the defective pieces from the sample have been replaced. If, however, the number of defective parts exceeds the pre-established limit, then the whole lot is inspected, with replacement of all the defective parts by perfect ones. This is what we call rectifying inspection.

There is a fundamental difference between these two types of inspection. Inspection for acceptance determines the quality level of the lot, but it does not go any further than that, whereas rectifying inspection, in addition to determining the quality level, improves it by replacing defective parts with perfect ones. Of course, rectifying inspection shows the same problems as complete inspection, i.e. there is no guarantee that all the defective parts (whether from the sample or, in case of rejection of this sample, from the whole lot) will be effectively detected and replaced. Therefore, it is said that rectifying inspection tends to improve lot quality, although it is not guaranteed that at the end of the rectifying process the lot will have a 0% rate of defective parts. This happens both because, when the sample is accepted, the rest of the lot has not been analyzed, and because of the natural practical difficulty of detecting all of the defective parts of the lot in those cases where the original sample is rejected.

3.3.4.3. Module characteristics

In addition to the ten general characteristics listed earlier, this module has the following specifications:

(a) This module consists of a rule-based expert system with 47 rules and 22 qualifiers.
(b) There are two options (choices) for decision here: inspection for acceptance or rectifying inspection.
(c) The system decides between the two choices above.
(d) Example of a rule: IF there are perfect parts in stock and at low cost, THEN Inspection for acceptance—probability: 2/10; Rectifying inspection—probability: 7/10 (Rule 25).
(e) Example of a qualifier: The inspection is carried out in terms of
1. raw material from various suppliers and easily available;
2. raw material from various suppliers and of limited availability;
3. raw material from exclusive suppliers (Qualifier 28).

E.P. Paladini / Expert Systems with Applications 18 (2000) 133–151

3.3.4.4. Module structure
The system is made up of four basic areas involving analyses related to the nature of the inspection, of the process and of the lots, and it also takes into account the suppliers and raw materials. In broad terms, each area involves the following aspects, amongst others:
(A) As to the nature of the inspection:
1.1. Role played by the inspection in the quality of the process.
1.2. Actions resulting from the inspection.
1.3. General objectives and emphasis of the inspection.
1.4. Scope of the inspection in relation to the productive process.
1.5. Scope of action of the inspection.
(B) As to the nature of the process:
2.1. Evaluation of the supplier's average quality level.
2.2. General characteristics of production planning and control.
2.3. Stocking structure.
(C) As to the nature of the lots:
3.1. Relation between lots and samples.
3.2. Use of lots of parts after the quality evaluation decision.
(D) As to the suppliers and raw materials:
4.1. Relationship with suppliers in terms of quality control of the lots purchased.
4.2. Raw material replenishment levels.
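The two inspection purposes that the ACRE subsystem decides between can be sketched as follows. This is an illustrative, idealized sketch (it assumes every defective part in an inspected portion is actually detected, which the text notes is not guaranteed in practice); the lot representation, `sample_size` and `acceptance_number` are hypothetical.

```python
# Hedged sketch of inspection for acceptance vs. rectifying inspection.
# Parts are modeled as the strings "perfect"/"defective"; all names are illustrative.
import random

def inspect_for_acceptance(lot, sample_size, acceptance_number):
    """Accept or reject the whole lot from one sample; no parts are replaced."""
    sample = random.sample(lot, sample_size)
    defectives = sum(1 for part in sample if part == "defective")
    return "accept" if defectives <= acceptance_number else "reject"

def rectifying_inspection(lot, sample_size, acceptance_number):
    """Replace defectives found in the sample; screen the whole lot if the limit is exceeded."""
    sample_idx = random.sample(range(len(lot)), sample_size)
    found = [i for i in sample_idx if lot[i] == "defective"]
    for i in found:                      # defective sample parts are always replaced
        lot[i] = "perfect"
    if len(found) > acceptance_number:   # limit exceeded: inspect the entire lot
        for i, part in enumerate(lot):
            if part == "defective":
                lot[i] = "perfect"
    return lot

lot = ["perfect"] * 95 + ["defective"] * 5
random.shuffle(lot)
rectified = rectifying_inspection(lot, sample_size=20, acceptance_number=1)
print(rectified.count("defective"))  # remaining defectives after rectification
```

The sketch reflects the asymmetry described above: acceptance inspection only measures the lot's quality level, while rectifying inspection also improves it, reaching 0% defectives only when the whole lot happens to be screened.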

3.3.5. Decision supporting expert system. Module 5: quality evaluation by attributes or by variables—ATVA subsystem

3.3.5.1. Objectives of the module
This module of the expert system deals with an essential decision: the most suitable choice for the quality evaluation process—between quality evaluation by attributes and quality evaluation by variables. Each of these evaluations has specific tools and yields different results, which is why selecting one of them can be crucial for an adequate quality evaluation.

3.3.5.2. Theoretical background
The process of quality analysis begins with the definition of the conceptual basis for each evaluation process. The following items were defined as crucial for shaping the knowledge base of the expert system.

Quality evaluation is targeted at those elementary requirements considered fundamental for the appropriate performance of the product, i.e. the quality characteristics. The basic contribution of the concept of quality characteristics to the concept of quality itself is that quality is sought in the elementary aspects that make up the product. Thus, the analysis of product quality entails a detailed study of the product 'exploded', i.e. reduced to its smallest parts. Normally, evaluating all of the quality characteristics of a product is unfeasible, especially for larger or more complex products. As a consequence, control tends to be restricted to the most important characteristics, for which tolerance levels and inspection norms are established and on which control is exercised.

There are two basic ways of carrying out quality control of a product based on the evaluation of its characteristics: control by attributes and control by variables.

The first point of analysis concerns the importance of correctly selecting the inspection method to be adopted. Since this choice is the basis for evaluating product quality, a mistake in selecting the type of control amounts to assigning an incorrect quality level to the product. Moreover, the methods and techniques of Statistical Quality Control, at both process and product levels, are specific to each case. Inspection by attributes, for example, has remarkable theoretical and practical differences when compared with inspection by variables. There are normally significant cost consequences when the wrong control type is used: they come both from using an expensive control type to obtain information that another, cheaper type would likewise provide, and from the costs incurred whenever a decision is based on imprecise or incorrect information.

From the point of view of the methodology typical of each control type, the following practical observations are generally valid:
(1) evaluation by attributes tends to offer faster conclusions than evaluation by variables. When the opposite is true, it is because quality patterns are not clearly defined and, since the evaluation then relies on a subjective decision basis, conflicts arise between inspectors holding divergent opinions on the same aspect.
The slowness of evaluation by variables is due to its requiring measuring instruments, or tests and laboratory assays that only provide final results after several hours or even days;
(2) evaluation by attributes produces general information about the characteristics under study, whereas evaluation by variables provides more complete and detailed information; the latter therefore tends to give safer and faster clues for correcting defects;
(3) evaluation by attributes is simpler and more direct to perform than evaluation by variables, which can be sophisticated, depending on the equipment required;
(4) evaluation by attributes does not depend on calculations to reach a conclusion, while the opposite is often the case for evaluation by variables;
(5) evaluation by attributes tends to require far larger samples than evaluation by variables to provide an inspection with the same level of reliability and significance;
(6) evaluation by variables requires larger investments than evaluation by attributes, mostly in inspection equipment and materials.

The major practical difficulty of evaluation by attributes lies in fixing the patterns. The main practical restrictions in implementing evaluation by variables consist of operating and having available the equipment and laboratory materials, and even of measuring and interpreting the results. In addition, the need for an initial investment requires a cost-effectiveness analysis for quality, which is not always easy to carry out—either because data on quality costs and benefits are often unavailable, or because the methods to obtain such data are not always known. Finally, it is worth mentioning that the equipment also requires periodical maintenance and sometimes special storage conditions. It has been observed that the issue of initial investments often precludes the adoption of evaluation by variables, even in situations where it is clearly required and should be preferred over evaluation by attributes.

Given these specific features, a module of the DSES is needed that makes it possible to determine the best option for a given situation in which the most suitable way of assessing a product's quality from its characteristics must be defined. Such is the objective of the present module, which contrasts evaluation by attributes with evaluation by variables for the situation under investigation.

3.3.5.3. Module characteristics
In addition to the ten general characteristics listed above, this module has the following specifications:
(a) Number of rules: 93.
(b) Number of qualifiers: 34.
(c) Choices: 2.
(d) Decision of the system: the system decides between evaluation by attributes and evaluation by variables.
(e) Example of a rule: IF: The defect must be characterized by its frequency of occurrence, THEN: Evaluation by attributes—probability: 2/10; Evaluation by variables—probability: 8/10 (Rule 3).
(f) Example of a qualifier: Substitution of inspectors
1. is intense;
2. is significant;
3. is not relevant;
4. has not been kept track of (Qualifier 30).

3.3.5.4. Module structure
The expert system is made up of six basic areas involving analyses related to the nature of the defects, the results of the inspection, the characteristics, the methodology of inspection, the inspectors who will be involved and the productive process as a whole. In broad terms, each area includes the following aspects, amongst others:
(A) As for the nature of the defects:
1.1. Classification of the defects.
1.2. Frequency of occurrence of the defects.
1.3. Characterization of occurrence.
1.4. Level of information about the defect (accuracy, reliability and scope).
1.5. Action of a given defect on others that may occasionally occur.
(B) As for the results of the inspection:
2.1. How the results of the evaluation can be expressed.
2.2. Scales to represent the results.
2.3. How the results can be obtained.
(C) As for the characteristics to be controlled:
3.1. Nature and importance of the characteristics to be controlled.
3.2. Availability for evaluation.
(D) As for the methodology of inspection:
4.1. Inspection costs.
4.2. Resources for carrying out the inspections.
4.3. Places of inspection.
4.4. Scope of the decisions resulting from evaluations.
4.5. Studies of defect causes.
4.6. Inspection focus and objectives.
4.7. Evaluation sensitivity level.
4.8. Ways of performing inspections.
(E) As for the inspectors:
5.1. Inspectors' profiles.
5.2. Inspectors' specialization level.
5.3. Characteristics of their actions upon the productive process as a whole.
5.4. The evaluation process.
(F) As for the productive process:
6.1. Consequences of the inspection results on the productive process as a whole.
6.2. Production levels.
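The way rules such as Rule 3 above could combine their probability weights into a final choice can be sketched as follows. The paper does not specify how the KAPPA shell combines evidence from fired rules, so the averaging scheme, the qualifier names and two of the three rules below are illustrative assumptions; only the first rule's weights mirror Rule 3 as quoted.

```python
# Minimal sketch of a rule/qualifier scoring engine in the spirit of the ATVA
# subsystem. The combination method (averaging fired-rule probabilities) and
# all rules except the first are hypothetical.

RULES = [
    # (qualifier, option that fires the rule, {choice: probability})
    ("defect characterization", "frequency of occurrence",
     {"attributes": 0.2, "variables": 0.8}),          # weights as in Rule 3
    ("measuring instruments", "not available",
     {"attributes": 0.9, "variables": 0.1}),          # hypothetical rule
    ("required information", "detailed",
     {"attributes": 0.3, "variables": 0.7}),          # hypothetical rule
]

def evaluate(answers):
    """Average the probability vectors of every rule whose condition matches."""
    scores = {"attributes": [], "variables": []}
    for qualifier, option, probs in RULES:
        if answers.get(qualifier) == option:          # rule fires
            for choice, p in probs.items():
                scores[choice].append(p)
    totals = {c: sum(ps) / len(ps) if ps else 0.0 for c, ps in scores.items()}
    return max(totals, key=totals.get), totals

answers = {"defect characterization": "frequency of occurrence",
           "required information": "detailed"}
choice, totals = evaluate(answers)
print(choice, totals)  # -> variables {'attributes': 0.25, 'variables': 0.75}
```

The qualifiers collect the user's answers, the fired rules each contribute a probability for each choice, and the choice with the highest combined score wins—which is the overall decision pattern the module descriptions above imply.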

4. Application

The expert system described was tested in ceramics factories (wall and floor tiles) and in textile factories as well, starting from basic information supplied by the companies. Both tests provided results compatible with the expectations of the staff involved: they confirmed that the system's decision matched the decision (kind of inspection) that was being put into practice experimentally at that moment.

During the application, a follow-up process of the results of successive applications of the system was developed. The experimental applications showed that the system's decision changes according to well-defined factors. Changes in the results—for instance, when the expert system suggests manual inspection in situations where automatic inspection used to be proposed—are always a consequence of changes in the decision environment. Some of these changes cannot be followed up; they indicate situations that require control—in general, preventive control.

The expert system was also tested in a variety of contexts and was considered well suited to the different cases studied. As a particular example of its application, the system was tested in specific situations where, due to the characteristics taken into consideration, the result of the processing was known beforehand. The previous decision was the same as that taken by the system in 99.8% of the cases analyzed. The system was applied to well-known situations, with real data obtained from productive processes already studied.

Considering the different applications, the expert system was tested on 2304 real cases. These cases refer to practical situations of quality evaluation found in 64 small, medium and large industries surveyed. Each company had an average of six specific situations where this kind of decision was typically required, and the system was tested on each one. Contact with the companies was made through the Specialization Courses offered by the Núcleo de Garantia da Qualidade (Center for Quality Assurance). The students in these courses, staff of the companies in question, kindly made the arrangements necessary for the tests to be carried out. As they were familiar with the processes being studied, they were able to check whether the decisions taken by the expert system matched their expectations. This comparison made it possible to validate the system.

Out of the 2304 trials, 2124 were ranked as situations where the results from the system were the same as those expected by the users, and 180 were classified as situations where they differed from the results expected by the users—thus, a 92.2% accuracy level. Accuracy here means that the system selected the same choice as the technicians did. In view of the tests applied, the accuracy level was considered high—over 90%. It must be noted that some factors favored such a high value: the situations had characteristics that evinced the tendency towards a given choice, or the technicians turned the process in a certain direction. The expert system detected this direction and answered with the expected choice.
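The reported accuracy figure follows directly from the trial counts. A quick check (the case counts are those reported in the text):

```python
# Reproducing the reported accuracy level: 2304 trials with 180 divergent
# cases leaves 2124 matches, i.e. roughly a 92.2% agreement rate.
trials = 2304       # total real cases tested across the surveyed companies
mismatches = 180    # cases where system and users disagreed
matches = trials - mismatches
accuracy = matches / trials * 100
print(matches, round(accuracy, 1))  # -> 2124 92.2
```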
On the other hand, some unfavorable factors compromised accuracy, such as:
• some technicians did not understand precisely what the module intended to do (they were not always able to distinguish between the advantages and disadvantages of evaluation by attributes as opposed to evaluation by variables);
• many technicians did not know the answer to the question asked but would not admit their ignorance of the issue, coming up with whimsical responses instead.
All of these factors compromised the operation of the system as a whole and lowered the number of correct answers. There were some mistakes in the selection of answers (possibly around 2% of the total). Some of these mistakes were corrected when the system was run a second time; others were not, and the system took them to be true.
The relevant aspects of the implementation were considered satisfactory. The main ones were:


• the technicians did not get bored or tired when answering the questions, and their answers did not diverge markedly from those offered by the system;
• in the cases of mistakes, the variations observed were small;
• the system was considered to have practical use;
• it was considered to match the expectations of the technicians who used it—there were no discrepancies in terms of reasoning during the analysis of the questions;
• the technicians seemed to have left with the impression that the system is structured upon a consistent and well-formulated theoretical basis. When asked whether they trusted the modules, 91.5% of them said yes. When invited to put it into quantitative terms, on a scale ranging from 0 to 10, responses ranged from 7.2 to 9.8, with an average of 8.95.

The documentation of the system contains examples of its application. In addition to showing how this part of the system works, the examples may be used as a basic reference for its operation, serving as an answer key for its effective utilization. The examples show the options selected in each qualifier, the solution proposed by the system and the conclusion deriving from this solution. For the cases studied, a result was established beforehand, and the system reached the expected results when applied.

We structured another test as well. Since we have a special interest in the labor activities involved in the operation of expert systems, we studied their operation by means of a fuzzy model.
This model confirmed the operators' fitness for the expert system, with notable results:
• on average, the workers took 89 min to learn to work with the expert system described here;
• they took about 36 s to determine each answer;
• the selection of the situation to which the expert system should be applied takes at most 50 min;
• errors in the operation of the models amounted to about 4% of the responses;
• 9% of the formulated questions were not understood within an interval of 20 s or less;
• and finally, a fundamental piece of information: the proposed result was accepted in 94.5% of the cases.

In view of the results of this study, the expert system applied to the problem under investigation was considered adequate. Considering the relation between the problem and the decision concerning the use of Quality Management tools, one concludes that the proposal made in this work is valid. It is also worth pointing out that the larger the number of applications, the greater the possibilities of optimizing this proposal. Therefore, rather than offering a definitive result, this work is the beginning of a whole new research line.


5. Implementation procedures

The following procedures are important to consider.

5.1. System operation
The DSES used in this project was built with KAPPA, a general-purpose expert system development software.

5.2. System application
The expert system described here was tested in various contexts and proved adequate in several cases. In particular, as an example of its application, the system was tested in specific situations where, due to the characteristics considered, a given result was expected. The decision determined beforehand was the same as that made by the system in all the cases studied. The system was applied to well-known situations and with real data taken from productive processes that had already been investigated.

5.3. Complementary actions
In view of the results of the application of the expert system, it is possible to list a series of actions to complement the inspection, allowing both system operation and quality evaluation to be optimized. Such actions are determined by the results generated by the several modules and are important for stressing the potentialities of the productive process or minimizing its weak points, by correcting failures and eliminating wasteful practices.

As an example of these actions, consider the application of the EXEC subsystem, which checks the necessity or convenience of developing the inspection. Once a preliminary evaluation of the process has been performed, the system decides to execute the inspection. Nevertheless, it is possible to list complementary criteria to be added that interrupt the inspection process and allow the material to be released without previous evaluations. Three criteria can be used in this case:
1. If the record of previous inspections shows an evolution from simple to multiple sampling with excellent acceptance levels, the inspection can be discontinued for well-defined periods (Montgomery, 1998, p. 357).
2. If multiple sampling reveals acceptance levels of around 80% of the lots and the capability of the process is less than 4 standard deviations, the inspection can be discontinued as well.
3. If acceptance has occurred up to 10 consecutive times under reduced multiple sampling plans, only 1 out of every 10 lots needs to be inspected.
The use of these criteria can be associated with the use of Table 8 of the MIL-STD-105D system, included in the Brazilian norm NBR 5426.
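The three complementary criteria above can be sketched as a simple disposition function. This is a hedged illustration: the field names of `history` are hypothetical, and the 0.95 threshold used to stand in for "excellent acceptance levels" is an assumption, since the text does not quantify it.

```python
# Hedged sketch of the three criteria for interrupting or thinning out
# inspection after a favorable history. All field names are illustrative.

def inspection_disposition(history):
    """history: dict of illustrative keys summarizing recent inspection results."""
    # Criterion 1: evolution from simple to multiple sampling with excellent
    # acceptance levels (>= 0.95 assumed here) -> suspend for a defined period.
    if history["evolved_to_multiple_sampling"] and history["acceptance_rate"] >= 0.95:
        return "discontinue for a defined period"
    # Criterion 2: ~80% lot acceptance under multiple sampling and process
    # capability below 4 standard deviations -> inspection can also stop.
    if history["multiple_sampling"] and abs(history["acceptance_rate"] - 0.80) <= 0.05 \
            and history["capability_sigmas"] < 4:
        return "discontinue"
    # Criterion 3: up to 10 consecutive acceptances under reduced multiple
    # sampling -> inspect only 1 out of every 10 lots (skip-lot style).
    if history["reduced_plan"] and history["consecutive_acceptances"] >= 10:
        return "inspect 1 in 10 lots"
    return "continue normal inspection"

print(inspection_disposition({
    "evolved_to_multiple_sampling": True, "acceptance_rate": 0.98,
    "multiple_sampling": True, "capability_sigmas": 5,
    "reduced_plan": False, "consecutive_acceptances": 3}))
```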

In the cases above, the actions confirm the excellent operating level of the process, so that the system becomes adapted to the current reality of the production lines.

Likewise, in the case of the CEAM subsystem, which decides between inspection by sampling and complete inspection, it is possible to add criteria aimed at discontinuing inspection by sampling in order to inspect the entire lot. Among the suggested criteria, the following can be listed:
1. Five consecutive lots rejected (Norm NBR 5426).
2. The occurrence of sudden alterations in the process, such as machinery breakdown, change of suppliers or staff replacement.
3. The statistical analysis of the process reveals situations of lack of control.
These criteria can be understood as additional assurance rules, and their use becomes necessary to prevent defects from going unnoticed during the inspection.

Another approach that can be used has to do with monitoring the results of successive applications of the modules of the system. Experimental implementations show that their decisions change according to well-defined factors. The change in the results is thus always due to these alterations, which can and ought to be monitored, because they indicate situations that require control—almost always preventive.

Tests of the operation of the modules show that the sensitivity of the system is high: its results can change with small changes in the decisions made at qualifier level. This approach, however, could be compromised if the whole system had to be reprocessed, which would result in a loss of efficiency. That does not happen, though, since the operation of the modules includes devices that allow alterations to be processed in selected qualifiers.
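The CEAM escalation criteria above translate directly into a small predicate. The structure of `state` is illustrative; the three conditions follow the listed criteria.

```python
# Sketch of the criteria for abandoning sampling in favour of complete (100%)
# inspection of the lot; the keys of `state` are illustrative assumptions.

def should_inspect_entire_lot(state):
    if state["consecutive_rejected_lots"] >= 5:   # Criterion 1 (Norm NBR 5426)
        return True
    if state["sudden_process_change"]:            # Criterion 2: breakdown, new supplier, new staff
        return True
    if state["spc_out_of_control"]:               # Criterion 3: statistical lack of control
        return True
    return False

print(should_inspect_entire_lot(
    {"consecutive_rejected_lots": 2,
     "sudden_process_change": True,
     "spc_out_of_control": False}))  # -> True
```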
The system saves the previous result as well as all the decisions made, and shows the new results, allowing a comparison between the two output sets—the results from the original inputs and the results from the modified qualifiers. From the evaluation of the responses offered in the two runs—original and modified—and the comparative analysis of the results stems useful information that determines the actions to be taken. Additionally, with the 'WHY' device, it is possible to observe the direction the system is taking in its analysis and to see which elements are being stressed.

In general terms, the test of the effect that changes in the input data cause on the results proceeds as follows: the selections for some qualifiers are altered while the others remain the same; next, the data are processed under the new situation and the effect of the changes on the final result is observed. The previous decision values are preserved for comparison with the new values. This procedure has an extra advantage: it makes it possible to create and analyze decisions made in the qualifiers based on the results obtained. Thus, the role played by a decision in obtaining a result is determined, and more or less emphasis is given to this decision according to its influence on the system's processing.

Let us consider a small example. In the operation of the ACRE subsystem, the use of inspection for acceptance is recommended for lots coming from a given supplier. In this case, the formation of replacement stocks is unfeasible and, by applying Rule 12 of the module, the formation of such stocks is considered costly. This factor, combined with others, determines the use of inspection for acceptance. However, there was a change of supplier and the parts are now purchased from a place much closer to the company. As agreed by contract, the new supplier keeps replacement stocks available and sees to the replacement of defective parts in a short time. Moreover, under this situation, the control conditions needed to alter the final quality of the inspected lots are available. Accordingly, Rules 1, 3 and 12 have their options altered, i.e. new decisions are made for the corresponding qualifiers. The system then starts to recommend rectifying inspection, changing its original proposal. The new arrangement has undeniable advantages over the previous one. Its use only became convenient, however, because some working conditions of the productive process changed. The monitoring of these alterations by the expert system allowed the lot evaluation procedures to be changed, with evident benefits for the company.

5.4. System basic documentation
The description of the DSES is written in depth in four volumes, which make up the system's basic documentation.
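The what-if procedure described above—alter a few qualifier answers, reprocess, and compare the new result with the saved one—can be sketched as follows. `run_module` is a toy stand-in for one DSES module; its rule and scoring are hypothetical, loosely echoing the supplier-change example.

```python
# Illustrative sketch of what-if reprocessing with result comparison.
# The module logic and the qualifier name "replacement_stock" are assumptions.

def run_module(answers):
    """Toy stand-in for a module: scores the two ACRE choices from the answers."""
    score = {"acceptance": 0.0, "rectifying": 0.0}
    if answers["replacement_stock"] == "costly":
        score["acceptance"] += 0.7      # costly stocks favor plain acceptance
        score["rectifying"] += 0.2
    else:                               # supplier keeps replacement stocks available
        score["acceptance"] += 0.2
        score["rectifying"] += 0.7
    return max(score, key=score.get)

def what_if(answers, changes):
    """Preserve the original run, apply the changed qualifiers, report both results."""
    before = run_module(answers)
    after = run_module({**answers, **changes})
    return {"original": before, "modified": after, "changed": before != after}

report = what_if({"replacement_stock": "costly"},
                 {"replacement_stock": "available"})
print(report)  # -> {'original': 'acceptance', 'modified': 'rectifying', 'changed': True}
```

Keeping both output sets side by side is what lets the user gauge how much influence a single qualifier decision has on the module's recommendation.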
These volumes, available in print or on floppy disks, contain all the information about the structure and the operation of the modules of the system. Their content is summarized as follows.

5.4.1. System operation
This volume provides general information about expert systems, basic knowledge, conditions, rules, qualifiers and choice systems. It also describes how the interaction between the user and the system takes place and how the rules are operated, in addition to listing the various resources the system's basic program offers to facilitate the processing. Among these resources stand out information about the logic of the operations, storage of data and results, analysis of conclusions, data alteration and reprocessing, and output listing and printing.

5.4.2. Rules
This volume lists all the rules that make up the various modules of the system. The text also includes summarized data about the subject of each module, general information about the results provided by the module, the choices available in each rule, explanatory notes and the bibliographical references used to elaborate the rules.

5.4.3. Qualifiers
Here the qualifiers used in each module are described, together with the rules in which each qualifier is used and the choices that are part of it.

5.4.4. System application
This volume gathers the experimental tests conducted with all the modules. These tests concern the use of the modules in specific situations, which are characterized by the responses given when selecting the choices of the qualifiers. In each module two or more options are presented; each option is submitted to the qualifiers and the selected choice is shown. Next, the result presented by the system is given and a brief conclusion is drawn. It is worth mentioning that these examples can be used as 'answer keys' for the system operation, i.e. they serve as a test to be applied to each module in order to evaluate its consistency, on the one hand, and its adequacy to specific situations, on the other.

6. Conclusions

In addition to attaining the practical results expected, this work makes it possible to draw conclusions about the adequacy of the techniques used to solve the problem in question. Indeed, the characteristics of the tools from the artificial intelligence (AI) field determined their full applicability to the situation under investigation and were fundamental to the achievement of those results.

Besides the adequacy observed and the contribution toward the expected results, it is equally important to stress the effectiveness of AI techniques for the treatment of the problem. In fact, the use of AI made it possible to structure a reliable, fast and practical procedure for executing quality evaluation. This can be seen in its ease of programming, the possibility of a critical analysis of the results and the reliability of the information obtained.
In a cost–benefit analysis, this aspect may pay off the system implementation costs. Thus, considering both adequacy and effectiveness, we conclude that the application of the selected tools to the problem was correct, allowing the expected results to be attained.

Some other considerations are also noteworthy at this point. When studying the concepts of AI and its more usual tools, some authors (like Fu, 1999; Nebendahal, 1987; Waterman, 1995) have established basic criteria whose fulfillment determines whether a given problem is suited to these techniques and methodologies.


Firstly, let us draw attention to the fact that this study involved computer techniques whose characteristics are not found in usual programs. Furthermore, new knowledge was brought into the problem, and the programs developed as a result show a high degree of flexibility. In addition, there is no way of predicting the results of the evaluation for many of the cases studied. In certain cases, the results of the evaluation may be satisfactory, but not optimal. Nonetheless, this does not invalidate the efforts made or the programs developed; rather, it creates the conditions for new research which, in the near future, may optimize the results obtained and make new accomplishments possible in this field.

It is evident that, in some cases, the problem is of moderate complexity and somewhat limited, which allows an expert to solve it in a reasonably short time. The system, which simulates this expert's behavior, likewise provides, in reasonable time, an acceptable solution for such questions as the selection of the type of inspection to adopt in a given situation.

One of the relevant points to consider here is the fact that we are not proposing a solution that requires broad common sense. Instead, we aim at determining procedures that make objective (expressed in numbers) those procedures which are traditionally developed in a subjective way. Thus, the proposed evaluation dispenses with the quest for unanimous positions (a process which is generally long and painful and which may compromise the achievement of effective solutions). When seeking an objective quality evaluation methodology, what we really want is to determine practical information and accurate data (numerical values, for instance) capable of unequivocally showing the reality of the product in question, without needing to ask for the opinion of third parties or to perform new modes of evaluation, which would end up involving other evaluators too.

Some other characteristics considered desirable in projects using AI techniques can also be observed here. Among these, the following stand out:
(a) The emphasis on a procedure with almost immediate results, independent of factors which can only be effectively evaluated on a long-term basis.
(b) The problem may evolve to situations including multiple elements to be analyzed and thus tend to what is commonly called 'combinatorial explosion'.
(c) The facts describing the problem can be determined by the user during the attempts to solve it.
(d) We are not dealing with a static situation. Instead, we have a dynamic problem where the evaluation performed can be considered to be in constant mutation.
(e) There are a number of situations where the approach can be used; it is not restricted to a single case in particular.

The most immediate recommendation to make here is the expansion of the applications we have tried so far. Other forms of quality evaluation should be considered, such as statistical process control, the diagnosis of activities developed by human operators, or models for analyzing the performance of products in the market.

References

Alexander, S. M., & Jagannathan, V. (1986). Advisory system for control chart selection. Computers in Industrial Engineering, 10(3), 171–177.
Besterfield, D. H. (1990). Quality control. Englewood Cliffs, NJ: Prentice-Hall.
Brink, J. R., & Mahalingam, S. (1990). An expert system for quality control in manufacturing. USF Report, 455–466.
Charboneau, H., & Webster, G. (1997). Industrial quality control. Englewood Cliffs, NJ: Prentice-Hall.
Crawford, K., & Eyada, O. (1989). A Prolog based expert system for the allocation of quality assurance program resources. Computers in Industrial Engineering, 17(1–4), 298–302.
Dagli, C. (1990). Expert systems for selecting quality control charts. USF Report, 325–343.
Dagli, C., & Stacey, R. (1988). A prototype expert system for selecting control charts. International Journal of Production Research, 26(5), 987–996.
Dodge, H. F. (1977). Administration of a sampling inspection plan. Journal of Quality Technology, 9(3), 131–138.
Evans, J. R., & Lindsay, W. M. (1987). Expert systems for statistical quality control. Annual International Industrial Engineering Conference Proceedings, 131–136.
Eyada, O. K. (1990). An expert system for quality assurance auditing. ASQ Quality Congress Transactions, 613–619.
Fard, N. S., & Sabuncuoglu, H. (1990). An expert system for selecting attribute sampling plans. International Computer Integrated Manufacturing, 3(6), 364–372.
Fu, K. (1999). Syntactic pattern recognition and applications. Englewood Cliffs, NJ: Prentice-Hall.
Gipe, J. P., & Jasinski, N. D. (1986). Expert system applications in quality assurance. ASQ Quality Congress Transactions, 272–275.
Horsnell, G. (1988). Economical acceptance sampling plans. Journal of the Royal Statistical Society, A(120), 148.
Hosni, Y. A., & Elshennavy, S. K. (1988). Quality control and inspection. Knowledge-based quality control system. Computers in Industrial Engineering, 15(1–4), 331–337.
Juran, J. M. (1999). Juran na Liderança pela Qualidade. São Paulo: Pioneira.
Kanagawa, A., & Ohta, H. (1990). A design for single sampling attribute plan based on fuzzy sets theory. Fuzzy Sets and Systems, 37, 13–181.
Lee, N. S., Phadke, M. S., & Keny, R. (1989). An expert system for experimental design in off-line quality control. Expert Systems, 6(4), 238–249.
Montgomery, D. (1998). Introduction to statistical quality control. New York: Wiley.
Moore, R. (1995). Expert systems for process control. TAPPI Journal, 6, 64–67.
Nebendahal, D. (1987). Expert systems. London: Wiley.
Ntuen, C. A., Park, H. E., & Kim, J. H. (1989). KIMS—a knowledge-based computer vision system for production line inspection. Computers in Industrial Engineering, 16(4), 491–508.
Ntuen, C. A., Park, H. E., & Sohn, K. H. (1990). The performance of KIMS image recognition tasks. Computers in Industrial Engineering, 19(1–4), 244–248.
Paladini, E. P. (1999). Qualidade Total na Prática. São Paulo: Atlas.
Papadakis, E. (1985). The Deming inspection criterion. Journal of Quality Technology, 17(3), 121–128.
Pfeifer, T. (1989). Knowledge-based fault detection in quality inspection. Software for Manufacturing, IFIP, 467–476.
Simmons, D. A. (1990). Practical quality control. Reading, MA: Addison-Wesley.
Waterman, D. A. (1995). A guide to expert systems. Reading, MA: Addison-Wesley.
Whittle, P. (1964). Optimum preventive sampling. Journal of the Operations Research Society of America, 2(2), 197–207.
Yager, R. R. (1984). On the selection of objects having imprecise qualities. IEEE Transactions on Systems, Man and Cybernetics, 14(5), 755–761.