Multi-criteria design of an X̄ control chart


Computers & Industrial Engineering 46 (2004) 877–891 www.elsevier.com/locate/dsw

Yan-Kwang Chen a,*, Hung-Chang Liao b

a Department of Business Administration, Ling Tung College, No. 1 Ling Tung Road, Taichung 408, Taiwan ROC
b Department of Health Services Administration, Chung-Shan Medical University, 110, Sec. 1, Jian-Koa N. Road, Taichung 402, Taiwan ROC

Available online 19 June 2004

Abstract

Control chart design is widely studied because control charts are not only widely used but also play an important role in improving a firm's quality and productivity. The design of a control chart refers to the selection of its parameters: the sample size, the control-limit width, and the sampling frequency. In this paper, each possible combination of design parameters is considered a decision-making unit, characterized by three attributes: the hourly expected cost, the average run length of the process being controlled, and the detection power of the chart designed with the selected parameters. Accordingly, the optimal design of control charts can be formulated as a multiple criteria decision-making (MCDM) problem. To solve the MCDM problem, a solution procedure based on data envelopment analysis is proposed. Finally, an industrial application is presented to illustrate the solution procedure, and adjustments to the control chart design parameters are suggested for cases of process improvement or process deterioration. © 2004 Elsevier Ltd. All rights reserved.

Keywords: Data envelopment analysis; X̄ control chart; Multi-criteria decision-making; Control chart design

1. Introduction

1.1. Background

Statistical process control (SPC) is an effective method for improving a firm's quality and productivity, and its primary tool is the statistical control chart. Engineering implementation of control charts requires a number of technical and behavioral decisions. One important technical decision is the design of the control chart, that is, the selection of the sample size, control-limit width, and sampling frequency. Economic design, based on an economic criterion, is one of the popular approaches in today's control chart design; its objective is to find the sample size, control-limit width, and sampling frequency that minimize the loss in the firm's profit.

* Corresponding author. E-mail addresses: [email protected] (Y.-K. Chen), [email protected] (H.-C. Liao).
doi:10.1016/j.cie.2004.05.020


Duncan (1956) developed the first model and applied it to an X control chart. In this model, Duncan assumed that the process is monitored to detect the occurrence of a single assignable cause producing a fixed shift in the process, and he defined the relevant costs over a cycle. The components of the cycle he considered are as follows:

(I) The process starts in the in-control state, and the time interval for which it remains in control is an exponential random variable with mean 1/λ hours, the average in-control time. In other words, the transition from the in-control to the out-of-control state is assumed to follow a Poisson process with λ occurrences per hour.

(II) When the process goes out of control, the probability that this state will be detected at any subsequent sampling is p, the detection power of the chart, which can be written as

p = \int_{-\infty}^{-k-\delta\sqrt{n}} \phi(z)\,dz + \int_{k-\delta\sqrt{n}}^{\infty} \phi(z)\,dz,    (1)
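In code, Eq. (1) reduces to two tail probabilities of the standard normal distribution. The following sketch (plain Python using only the standard library; the function names are ours, not the paper's) computes p:

```python
from math import erf, sqrt

def normal_cdf(z):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def detection_power(n, k, delta):
    """Eq. (1): probability that a sample mean falls outside the k-sigma
    control limits after the process mean shifts by delta standard deviations."""
    return (normal_cdf(-k - delta * sqrt(n))
            + 1.0 - normal_cdf(k - delta * sqrt(n)))
```

For example, with n = 25, k = 2.9, and δ = 1.0 this gives p ≈ 0.982, in line with the values reported later in Table 2.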

where φ(z) is the probability density function of the standardized normal distribution, δ is the number of standard deviations σ in the shift of the process mean μ0, and k is the control-limit width in terms of standard deviations σ. Accordingly, the expected number of samples taken before a mean shift of the process is detected is 1/p. Moreover, the expected time of occurrence of the assignable cause within the interval between two samples is derived as

\tau = \frac{\int_{jh}^{(j+1)h} \lambda e^{-\lambda t}(t - jh)\,dt}{\int_{jh}^{(j+1)h} \lambda e^{-\lambda t}\,dt} = \frac{1 - (1 + \lambda h)e^{-\lambda h}}{\lambda(1 - e^{-\lambda h})},    (2)

where h is the time interval between successive samples.

(III) The average time spent on sampling, inspecting, evaluating, and plotting each sample is proportional to the sample size n with constant of proportionality g; thus the time delayed in this phase is gn.

(IV) The time to search for the assignable cause and restore the process to the in-control state is a constant D.

Therefore, the expected length of a cycle under the parameter combination s = (n, h, k), denoted ECT(s), is

ECT(s) = \frac{1}{\lambda} + \left(\frac{h}{p} - \tau\right) + gn + D.    (3)

If one defines

a1 is the fixed cost of sampling;
a2 is the variable cost of sampling;
a3 is the cost of searching for an assignable cause;
a4 is the cost of investigating a false alarm;
a5 is the hourly penalty cost associated with production in the out-of-control state;


the expected cost per cycle under the parameter combination s = (n, h, k), denoted ECC(s), is

ECC(s) = \frac{(a_1 + a_2 n)\,ECT(s)}{h} + a_3 + \frac{a_4\,\alpha\,e^{-\lambda h}}{1 - e^{-\lambda h}} + a_5\left(\frac{h}{p} - \tau + gn + D\right).    (4)

Here α is the probability of a false alarm (type I error), which is the reciprocal of the average run length with the process in the in-control state. It may be represented mathematically as

\frac{1}{ARL_0} = \alpha = 2\int_{-\infty}^{-k} \phi(z)\,dz.    (5)
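Eqs. (1)-(5) can be chained into a single routine for evaluating a design s = (n, h, k). The sketch below is ours (function and parameter names are not the paper's); it returns the hourly cost together with the two quality measures, and the parameter values in the usage note are the ones later given in the Section 3 illustration:

```python
from math import erf, exp, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def hourly_cost(n, h, k, lam, delta, g, D, a1, a2, a3, a4, a5):
    """Expected cost per hour EHC(s) for a design s = (n, h, k),
    following Eqs. (1)-(5).  Also returns p and ARL0."""
    # Eq. (1): detection power of the chart.
    p = (normal_cdf(-k - delta * sqrt(n))
         + 1.0 - normal_cdf(k - delta * sqrt(n)))
    # Eq. (2): expected time of occurrence within a sampling interval.
    tau = (1.0 - (1.0 + lam * h) * exp(-lam * h)) / (lam * (1.0 - exp(-lam * h)))
    # Eq. (5): false-alarm probability and in-control average run length.
    alpha = 2.0 * normal_cdf(-k)
    arl0 = 1.0 / alpha
    # Eq. (3): expected cycle length.
    ect = 1.0 / lam + h / p - tau + g * n + D
    # Eq. (4): expected cost per cycle.
    ecc = ((a1 + a2 * n) * ect / h + a3
           + a4 * alpha * exp(-lam * h) / (1.0 - exp(-lam * h))
           + a5 * (h / p - tau + g * n + D))
    return ecc / ect, p, arl0
```

For instance, hourly_cost(25, 0.4, 2.9, lam=0.25, delta=1.0, g=0.01, D=2.0, a1=1.0, a2=0.1, a3=50.0, a4=50.0, a5=200.0) reproduces p ≈ 0.982 and ARL0 ≈ 268 for that design; the cost value depends on the exact cost parameters supplied.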

Accordingly, the expected cost per hour under the parameter combination s = (n, h, k), denoted EHC(s), is obtained by dividing ECC(s) by ECT(s). The economic design of an X chart determines the optimal values of n, h, and k such that the cost function EHC(s) achieves its minimum. Some criticism has been directed towards the economic design model (Woodall, 1986). Saniga (1989) introduced statistical constraints into the above minimization of EHC(s) and solved it by optimization techniques; the constraints he introduced limit the probability of a type I error, α, and the detection power of the chart, p, according to pre-selected upper and lower bounds.

1.2. Motivation and objective

The strong assumption in both Duncan's and Saniga's models is that the quality control (QC) manager knows the risk of occurrence of an assignable cause and the various cost parameters. Unfortunately, some cost parameters are difficult to estimate accurately in practice. For instance, the hourly penalty cost associated with production in the out-of-control state (a5 in (4)) is difficult to estimate because, besides measurable freight and indemnity, it involves an immeasurable diminishment of customer goodwill. Similarly, the cost of investigating a false alarm (a4) also involves an immeasurable portion. The QC manager can estimate the measurable portion of a cost easily and accurately, unlike the immeasurable one, yet the latter is often taken into consideration in the decision process at the same time. As a result, this study proposes an alternative decision model that takes both measurable and immeasurable costs into account in the control chart design problem. EHC(s) is still used in the decision model as the economic criterion to evaluate the measurable portion, but cost parameters such as a4 and a5 in EHC(s) are estimated only for the measurable portion.
As for the immeasurable portion, although it is hard to appraise, we believe it is related to certain quality performance measures of control charts; for example, higher detection power leads to better goodwill. Thus, some quality performance indices are used in this model to account for the immeasurable cost. In other words, we formulate the optimal design of control charts as a constrained multiple criteria decision-making (MCDM) problem. Fig. 1 conceptually draws, with hatching, the feasible solution region posed by Saniga's constraints. From Saniga's viewpoint (the lower the expenditure, the better the solution), solution A is usually chosen as the optimum. In contrast, we deem that a solution on the frontier of the feasible solution region, e.g. B, which achieves higher product quality (or lower immeasurable cost) at slightly more expenditure than A, may also attract the QC manager. Accordingly, both solutions


Fig. 1. The feasible region posed by Saniga’s constraints.

A and B should naturally be included in the QC manager's decision space. To find those excellent solutions, which stand on the frontier of the feasible solution region, we use DEA as a tool.

2. Multi-criteria optimal design of control chart

2.1. The multi-criteria model

For the QC manager, who attends both to measurable process cost and to quality (likely associated with immeasurable cost), the optimal design of control charts can be regarded as a multi-criteria decision-making problem whose solution involves trade-offs and compromise. Therefore, one of the most important steps before decision-making is to identify all the criteria the QC manager may consider in selecting the design parameters n, h, and k. In this study, we extend Saniga's (1989) model by adding criteria other than the economic criterion, giving the following multiple objective decision-making (MODM) model:

max ARL0(s)
max p(s)
min EHC(s)    (6)
s.t. p(s) ≥ pL
     α(s) ≤ αU
     ∀ design s = (n, h, k)
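The tension among these three objectives can be checked numerically from Eqs. (1) and (5). A small sketch (our own helper functions; δ = 1.0 is the shift magnitude used in the later illustration):

```python
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def arl0(k):
    """In-control average run length: 1/alpha with alpha = 2*Phi(-k), Eq. (5)."""
    return 1.0 / (2.0 * normal_cdf(-k))

def power(n, k, delta=1.0):
    """Detection power of the chart, Eq. (1)."""
    return (normal_cdf(-k - delta * sqrt(n))
            + 1.0 - normal_cdf(k - delta * sqrt(n)))
```

Widening the limits from k = 2.9 to k = 3.3 at n = 25 raises ARL0 from about 268 to about 1034 but lowers p from about 0.982 to about 0.955; raising n to 30 restores p to about 0.985 at the same k, at the price of a larger (more expensive) sample.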

To emphasize that the values of ARL0, p, and α depend on s, we write ARL0(s), p(s), and α(s) hereafter. Essentially, these three objective functions are conflicting from the viewpoint of statistics, for the following reasons. The first two objectives conflict because, in most control chart designs, there is a trade-off between a low false alarm rate (the reciprocal of the in-control average run length) and high failure detection power when selecting the design parameters. A low false alarm rate enhances confidence in the control mechanism, while high failure detection power helps improve outgoing quality, which directly affects the consumer. It may be possible to keep the false alarm rate low and the detection power high


Table 1
Performance criteria for (n, h, k) selection

DMU: s = (n, h, k)
Performance criteria:
  Expected cost per hour, EHC(s)
  Average run length with the process in the in-control state, ARL0(s)
  Detection power of the chart, p(s)

simultaneously by increasing the sample size n (Anderson, Sweeney, & Williams, 1999); however, that would be expensive, so the third, conflicting objective of minimizing the cost function EHC(s) is included.

Complex MODM methods involving software could be used to find the best compromise solution of the foregoing mathematical model. In contrast, this research proposes a method for seeking the optimal design parameters that is still based on the MODM model but requires much less computation: it simply regards a discrete, finite set of combinations of design parameters as the solution space. The foregoing mathematical model then amounts to finding the objective values of each combination s. Choosing a parameter combination by its objective values can thus be considered a multi-criteria decision-making (MCDM) problem in which a combination s is regarded as a decision-making unit (DMU) with attributes of average cost EHC(s), in-control average run length ARL0(s), and detection power p(s) (Table 1).

Selecting a good control chart design is easy if a single criterion, such as the economic criterion, is used. However, when several criteria are taken into account in evaluating designs' performance, the difficulty of comparing the global performance of designs becomes evident. A generic design 'A' can easily be compared with another design 'B' if 'A' performs better than or equal to 'B' along all criteria; but most such comparisons are difficult to make, because design 'A' often performs only partially better than design 'B'. As a result, a tool for optimally selecting feasible control chart designs is necessary, and data envelopment analysis (DEA) is helpful for this purpose when a design's global performance is defined as the ratio of achieved quality measures (outputs) to cost expenditure (inputs).
2.2. Data envelopment analysis

Data envelopment analysis (DEA) is a linear-programming-based technique for measuring and comparing the relative efficiency of a set of competing decision-making units (DMUs) when the presence of multiple inputs and multiple outputs makes comparison difficult (Dyson, Thanassoulis, & Boussofiane, 1990). The relative efficiency of a multiple-input, multiple-output DMU is typically defined as an engineering-like ratio (the weighted sum of the DMU's outputs divided by the weighted sum of its inputs), i.e. for the generic rth DMU:

E_r = (weighted sum of outputs) / (weighted sum of inputs)

Thus, for a DMU to achieve high relative efficiency, its inputs must take low values and its outputs high values. In this paper we utilize this characteristic of DEA (a lower input value with a higher output value yields a higher relative efficiency) to solve


Fig. 2. Two outputs and single input for DMU s ¼ ðn; h; kÞ:

the issue of control chart design. In this application, s = (n, h, k) is regarded as a DMU whose single input is the expected cost per hour EHC(s) and whose two outputs are ARL0(s) and p(s) (as shown in Fig. 2). The general efficiency measure used by DEA is summarized by Eq. (7):

E_r(s) = \frac{\sum_y O_y(s)\,w_{ry}}{\sum_x I_x(s)\,w_{rx}},    (7)

where Er(s) is the efficiency measure of control chart design s using the weights of the assessed design r; Oy(s) are the values of output y for design s; Ix(s) are the values of input x for design s; wry are the most favorable weights assigned to design r for output y; and wrx are the most favorable weights assigned to design r for input x.

In evaluating the relative efficiency of a DMU it is necessary to determine how the weights are established. In contrast to other techniques, such as statistical methods, which assign a single common set of weights to every DMU, DEA allows each DMU to choose the set of weights that shows it in the best light. That is, the most favorable set of weights for each DMU is adopted in the comparison with the other competing DMUs. This avoids the shortcoming that assessing the relative efficiency of every DMU with a single common set of weights may be neither easy nor appropriate in many cases. Thus, following the spirit of DEA, a specific design (n, h, k) that leads to slightly higher cost but extremely high quality will be given a large weight on its quality measure so as to appear in the best light; this is why DEA is an appropriate tool for selecting design parameters in a multi-criteria sense.

To determine the optimal set of weights for the rth parameter combination (the rth DMU), many mathematical models have been developed in the literature. Among them, the CCR model developed by Charnes, Cooper, and Rhodes (1978) is the most popular. The objective of the CCR model is to maximize the relative efficiency value of an assessed design r over a reference set of designs s by selecting the optimal weights associated with the input and output measures; the maximum relative efficiency is constrained to 1. The formulation is given in expression (8):

max E_r(r) = \frac{\sum_y O_y(r)\,w_{ry}}{\sum_x I_x(r)\,w_{rx}}    (8)
s.t. E_r(s) ≤ 1   ∀ other designs s
     w_{rx}, w_{ry} > 0


This non-linear programming formulation (8) is equivalent to the following linear programming (LP) formulation, obtained by setting its denominator equal to 1 and maximizing its numerator:

max E_r(r) = \sum_y O_y(r)\,w'_{ry}    (9)
s.t. \sum_x I_x(r)\,w'_{rx} = 1
     E_r(s) ≤ 1   ∀ other designs s
     w'_{rx}, w'_{ry} > 0
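For the single-input, two-output case used in this paper, the LP of formulation (9) is small enough to solve by enumerating the vertices of its feasible region. The sketch below is our own pure-Python illustration, not the authors' code; the strict positivity constraint on the weights is approximated by its limit:

```python
def ccr_efficiency(dmus, r, eps=1e-9):
    """CCR efficiency of DMU r for the single-input, two-output case,
    solving the LP of formulation (9) by vertex enumeration.
    Each DMU is a tuple (input, output1, output2)."""
    I_r, O_r = dmus[r][0], dmus[r][1:]
    # Normalizing the input weight so that DMU r's weighted input equals 1
    # turns every constraint E_r(s) <= 1 into: u1*O1(s) + u2*O2(s) <= I(s)/I(r).
    cons = [(o1, o2, i / I_r) for (i, o1, o2) in dmus]
    cands = []
    for a1, a2, b in cons:                 # one constraint binding, one weight 0
        if a1 > eps:
            cands.append((b / a1, 0.0))
        if a2 > eps:
            cands.append((0.0, b / a2))
    for i in range(len(cons)):             # two constraints binding
        for j in range(i + 1, len(cons)):
            (a1, a2, b), (c1, c2, d) = cons[i], cons[j]
            det = a1 * c2 - a2 * c1
            if abs(det) > eps:
                cands.append(((b * c2 - a2 * d) / det,
                              (a1 * d - b * c1) / det))
    best = 0.0
    for u1, u2 in cands:                   # keep feasible vertices, maximize (9)
        if u1 < -eps or u2 < -eps:
            continue
        if all(u1 * a1 + u2 * a2 <= b + 1e-7 for a1, a2, b in cons):
            best = max(best, u1 * O_r[0] + u2 * O_r[1])
    return best
```

For example, with three hypothetical DMUs of equal cost and outputs (2, 1), (1, 2), and (1, 1), the first two are efficient (score 1) while the third, dominated by both, scores 2/3. A general-purpose LP solver would replace the vertex enumeration for larger problems.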

The result of formulation (9) is an optimal efficiency value, E*r(r), which is at most 1. If E*r(r) = 1, no other design is more efficient than design r under its own weights; that is, design r lies on the optimal frontier and is not dominated by any other design. If E*r(r) < 1, design r does not lie on the optimal frontier, and there is at least one other design that is more efficient under the optimal set of weights determined by (9). Formulation (9) is applied to each design in turn to calculate its efficiency with respect to its own optimal set of weights. For more details on the theory, applications, and software packages of DEA, see Charnes, Cooper, and Seiford (1997). Applying the DEA technique, the three performance values (EHC(s), ARL0(s), and p(s)) of each specific control chart design are combined into a ratio, called the relative efficiency in DEA terms.

2.3. Proposed solution procedure

Once the process and cost parameters are estimated, the values of EHC(s), ARL0(s), and p(s) for each potential combination s can be calculated from formulae (1)-(5); the calculations are easily carried out in Microsoft Excel. A solution procedure, supported by a DEA software package such as Frontier Analyst together with Microsoft Excel, is proposed to evaluate and compare the designs' performance in terms of the ratio of quality measures to cost expenditure. It is summarized as follows.

Step 0. Determining the potential solutions. A suitable scope for each design parameter is first fixed to form the set Ω of potential parameter combinations s. For instance, the sample size n may range from 1 to 50 in steps of 1; very large sample sizes are excluded because of high inspection expenditure.
Similarly, the scope of the sampling interval h may be set from 0 to 2 h in steps of 0.1 h, and the scope of the control-limit width k from 0 to 4 in terms of the standard deviation σ.

Step 1. Leaching process. Leach the unattractive elements of Ω using the quality constraints α(s) ≤ αU and p(s) ≥ pL. The remainders after the leaching process are collected into sets Qn according to their sample size n.

Step 2. Partial optimization. Retain the Pareto-optimal elements of each subset Qn. A solution s is Pareto optimal in Qn if no other solution in the same set dominates s in terms of statistical properties and cost. For example, assume a particular Qn involves three


solutions, say s1, s2, and s3, with the following attribute values:

EHC(s1) = 96.700, ARL0(s1) = 370.40, p(s1) = 0.98;
EHC(s2) = 98.470, ARL0(s2) = 267.98, p(s2) = 0.95;
EHC(s3) = 97.403, ARL0(s3) = 516.74, p(s3) = 0.96.

Then s2 is dominated by both s1 and s3 and should be deleted from Qn, while s1 and s3 do not dominate each other, so both are Pareto optimal and should be retained in Qn.

Step 3. Global optimization. Merge all the remainders into a set W and select the elements of W with the highest relative efficiency. The selected elements are offered to the decision maker for the final decision.
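Steps 1 and 2 are straightforward to script. The sketch below is our own Python illustration: the grid ranges of Step 0 and the constraint bounds αU = 0.005, pL = 0.95 of the Section 3 illustration are assumptions, and the dominance filter is run on the s1/s2/s3 example above:

```python
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def leach(delta=1.0, alpha_U=0.005, p_L=0.95):
    """Step 1: keep designs s = (n, h, k) with alpha(s) <= alpha_U and
    p(s) >= p_L, grouped into the sets Q_n by sample size."""
    Q = {}
    for n in range(1, 51):
        for h in [round(0.1 * i, 1) for i in range(1, 21)]:      # h = 0.1 ... 2.0
            for k in [round(0.1 * i, 1) for i in range(0, 41)]:  # k = 0.0 ... 4.0
                alpha = 2.0 * normal_cdf(-k)
                p = (normal_cdf(-k - delta * sqrt(n))
                     + 1.0 - normal_cdf(k - delta * sqrt(n)))
                if alpha <= alpha_U and p >= p_L:
                    Q.setdefault(n, []).append((n, h, k))
    return Q

def pareto_filter(solutions):
    """Step 2: keep the Pareto-optimal designs, where cost is minimized
    and ARL0 and power are maximized.  Each item is (label, cost, arl0, p)."""
    def dominates(a, b):
        no_worse = a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]
        better = a[1] < b[1] or a[2] > b[2] or a[3] > b[3]
        return no_worse and better
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

With these bounds the leaching step leaves no feasible design for n ≤ 20, matching the observation made in Section 3, and the filter keeps s1 and s3 while discarding s2.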

3. Illustration

In this section, the proposed model is applied to an industrial case borrowed, with modifications, from Pearn and Chen (1997), assuming that the process data follow a normal distribution and that an assignable cause is sought when a point falls outside the control limits.

3.1. An industrial case

The electronics company in this case, a manufacturer and supplier of aluminum electrolytic capacitors located in Taiwan, supplies various kinds of capacitors, including non-polarized and bi-polarized capacitors. Non-polarized capacitors are designed for use in high-pitch, median-pitch, and low-pitch crossover networks in high-fidelity audio speaker systems. The target capacitance for a particular model of aluminum non-polarized capacitors was set to 300 μF. Samples of finished capacitors were drawn, measured, and monitored with an X control chart to keep the capacitance level near the target value. From previous runs, process shifts occur at random with a frequency of about one every 4 h of operation (i.e. λ = 0.25). On the basis of QC technicians' salaries and the cost of test equipment, the fixed cost per sample taken is estimated at $1.00 (i.e. a1 = 1). The variable cost of sampling is estimated at $0.1 per capacitor (i.e. a2 = 0.1), and the time spent measuring and recording the capacitance at 0.01 h (i.e. g = 0.01). When the process goes out of control, the magnitude of the shift is approximately one standard deviation (i.e. δ = 1.0). On average, the time required to investigate an out-of-control signal is 2 h (i.e. D = 2). The cost of searching for an assignable cause is $50.0, while the measurable portion of the cost of investigating a false alarm is $50 (i.e. a3 = 50.0 and a4 = 50.0). The process is assumed to continue producing parts during the investigation and elimination of out-of-control signals, and the measurable penalty cost during this period is about $200 per hour (i.e.
a5 = 200). In keeping with the company vision of being an excellent manufacturer and supplier, the QC manager was asked to consider both operating cost and product quality simultaneously when designing the control charts. With the proposed model and solution procedure, the optimal values of n, h, and k are found by evaluating a wide range of possible solutions through the following steps. In Step 1, the wide range of possible solutions is leached by the constraints αU = 0.005 and pL = 0.95, and the remainders are collected into sets by their sample size. In Step 2, the solution with Pareto optimality in


Table 2
The set W in the presented illustration

n    h    k    Cost    Power    ARL0
21   0.4  2.9  95.236  0.95377  267.98 *D
22   0.4  2.9  95.585  0.96331  267.98 *
22   0.4  3.0  95.576  0.95453  370.40
23   0.4  2.9  95.948  0.97101  267.98 *
23   0.4  3.0  95.927  0.96374  370.40
23   0.4  3.1  95.936  0.95504  516.74
24   0.4  2.9  96.323  0.97719  267.98
24   0.4  3.0  96.291  0.97122  370.40
24   0.4  3.1  96.289  0.96399  516.74
24   0.4  3.2  96.312  0.95534  727.66
25   0.4  2.9  96.706  0.98214  267.98 *
25   0.4  3.0  96.667  0.97725  370.40
25   0.4  3.1  96.654  0.97128  516.74
25   0.4  3.2  96.666  0.96407  727.66
25   0.4  3.3  96.700  0.95543  1034.30
26   0.4  2.9  97.096  0.98606  267.98
26   0.4  3.0  97.050  0.98209  370.40
26   0.4  3.1  97.030  0.97720  516.74
26   0.4  3.2  97.033  0.97122  727.66
26   0.4  3.3  97.055  0.96399  1034.30
26   0.4  3.4  97.097  0.95534  1484.00
27   0.4  3.2  97.409  0.97704  727.66
27   0.4  3.4  97.453  0.96376  1484.00
27   0.4  3.5  97.501  0.95507  2149.30
27   0.5  2.9  97.442  0.98917  267.98
27   0.5  3.0  97.412  0.98596  370.40
27   0.5  3.1  97.403  0.98197  516.74
27   0.5  3.3  97.442  0.97103  1034.30
28   0.5  2.9  97.784  0.99161  267.98
28   0.5  3.0  97.748  0.98903  370.40
28   0.5  3.1  97.733  0.98579  516.74
28   0.5  3.2  97.735  0.98176  727.66
28   0.5  3.3  97.755  0.97679  1034.30
28   0.5  3.4  97.790  0.97072  1484.00
28   0.5  3.5  97.842  0.96339  2149.30
28   0.5  3.6  97.910  0.95463  3142.50
29   0.5  2.9  98.129  0.99353  267.98
29   0.5  3.0  98.089  0.99146  370.40
29   0.5  3.1  98.068  0.98885  516.74
29   0.5  3.2  98.065  0.98556  727.66
29   0.5  3.3  98.076  0.98147  1034.30
29   0.5  3.4  98.102  0.97644  1484.00
29   0.5  3.5  98.143  0.97030  2149.30
29   0.5  3.6  98.198  0.96288  3142.50
29   0.5  3.7  98.270  0.95402  4638.20
30   0.5  2.9  98.477  0.99502  267.98
30   0.5  3.0  98.434  0.99338  370.40
30   0.5  3.1  98.409  0.99128  516.74
30   0.5  3.2  98.400  0.98861  727.66
30   0.5  3.3  98.405  0.98527  1034.30
30   0.5  3.4  98.423  0.98111  1484.00
30   0.5  3.5  98.455  0.97599  2149.30
30   0.5  3.6  98.499  0.96976  3142.50
30   0.5  3.7  98.558  0.96223  4638.20
30   0.5  3.8  98.632  0.95325  6911.00 *

* denotes a parameter combination with relative efficiency score of 1. D denotes the parameter combination with minimum cost.

each set is selected as a candidate for the optimal values of n, h, and k (see Table 2). Finally, in Step 3, all the elements chosen in Step 2 are merged into a set W, and the solution (n*, h*, k*) with the largest relative efficiency score, which can be calculated with any DEA software, is selected as the optimum for the parameters n, h, and k.

From Table 2, it is notable that no feasible solution can be obtained for n = 1, 2, ..., 20. In addition, the following combinations of the design parameters n, h, and k:

(n, h, k) = (21, 0.4, 2.9) with cost = 95.236, power = 0.95377, and ARL0 = 267.98;
(n, h, k) = (22, 0.4, 2.9) with cost = 95.585, power = 0.96331, and ARL0 = 267.98;
(n, h, k) = (23, 0.4, 2.9) with cost = 95.948, power = 0.97101, and ARL0 = 267.98;
(n, h, k) = (25, 0.4, 2.9) with cost = 96.706, power = 0.98214, and ARL0 = 267.98;
(n, h, k) = (30, 0.5, 3.8) with cost = 98.632, power = 0.95325, and ARL0 = 6911.00;

receive a relative efficiency score of 1, are consequently preferred, and are offered to the QC manager for reference. He or she may choose the first or second parameter combination if low expenditure is the most important criterion; in fact, the first combination is also an optimal solution to the traditional economic-statistical model. Similarly, if the long-run quality of the firm's product matters more than minimal expenditure, the third or fourth (or even the fifth) parameter combination, whose average cost is slightly higher than that of the first, will be selected.

3.2. Sensitivity analysis

Continuing the example above, we study the effects of the model parameters and statistical constraints on the solution to the illustration. Table 3 shows the effects of the model parameters on the solution of the X control chart design. Increasing the fixed cost of sampling (a1) or the variable cost of sampling (a2) increases the time interval between consecutive samples and the average cost. The cost of searching for an assignable cause (a3) and the cost of investigating a false alarm (a4) are both relatively robust with respect to the optimal design. A large hourly penalty cost during the investigation and elimination of out-of-control signals (a5) leads to more frequent


Table 3
Effects of model parameters on the solution to the illustration

n    h    k    Cost     Power    ARL0

a1 = 0.1
21   0.3  2.9   92.952  0.95377  267.98
22   0.4  2.9   93.335  0.96331  267.98
23   0.4  2.9   93.698  0.97101  267.98
30   0.4  3.8   96.489  0.95325  6911.00
a1 = 1.0
21   0.4  2.9   95.236  0.95377  267.98
22   0.4  2.9   95.585  0.96331  267.98
23   0.4  2.9   95.948  0.97101  267.98
25   0.4  2.9   96.706  0.98214  267.98
30   0.5  3.8   98.632  0.95325  6911.00
a1 = 10
21   0.8  2.9  110.12   0.95377  267.98
22   0.8  2.9  110.27   0.96331  267.98
23   0.8  2.9  110.44   0.97101  267.98
24   0.8  2.9  110.64   0.97719  267.98
30   0.8  3.8  112.45   0.95325  6911.00

a2 = 0.01
21   0.3  2.9   89.652  0.95377  267.98
22   0.3  2.9   89.806  0.96331  267.98
23   0.3  2.9   89.971  0.97101  267.98
24   0.3  2.9   90.144  0.97719  267.98
30   0.3  2.9   91.283  0.99502  267.98
a2 = 0.1
21   0.4  2.9   95.236  0.95377  267.98
22   0.4  2.9   95.585  0.96331  267.98
23   0.4  2.9   95.948  0.97101  267.98
25   0.4  2.9   96.706  0.98214  267.98
30   0.5  3.8   98.632  0.95325  6911.00
a2 = 1
21   1.1  2.9  120.47   0.95377  267.98
23   1.2  3.1  122.46   0.95504  516.74

a3 = 25
21   0.4  2.9   91.477  0.95377  267.98
22   0.4  2.9   91.829  0.96331  267.98
23   0.4  2.9   92.196  0.97101  267.98
24   0.4  2.9   92.574  0.97719  267.98
30   0.5  3.7   94.910  0.96223  4638.30
30   0.5  3.8   94.987  0.95325  6911.00
a3 = 50
21   0.4  2.9   95.236  0.95377  267.98
22   0.4  2.9   95.585  0.96331  267.98
23   0.4  2.9   95.948  0.97101  267.98
25   0.4  2.9   96.706  0.98214  267.98
30   0.5  3.8   98.632  0.95325  6911.00
a3 = 100
21   0.4  2.9  102.75   0.95377  267.98
22   0.4  2.9  103.10   0.96331  267.98
30   0.5  3.8  105.92   0.95325  6911.00

a4 = 25
21   0.4  2.9   95.103  0.95377  267.98
22   0.4  2.9   95.452  0.96331  267.98
23   0.4  2.9   95.815  0.97101  267.98
30   0.5  3.8   98.629  0.95325  6911.00
a4 = 50
21   0.4  2.9   95.236  0.95377  267.98
22   0.4  2.9   95.585  0.96331  267.98
23   0.4  2.9   95.948  0.97101  267.98
25   0.4  2.9   96.706  0.98214  267.98
30   0.5  3.8   98.632  0.95325  6911.00
a4 = 100
21   0.4  2.9   95.503  0.95377  267.98
22   0.4  2.9   95.851  0.96331  267.98
23   0.4  2.9   96.214  0.97101  267.98
30   0.5  3.8   98.684  0.95325  6911.00

a5 = 100
21   0.6  2.9   54.509  0.95377  267.98
22   0.6  2.9   54.704  0.96331  267.98
23   0.6  2.9   54.908  0.97101  267.98
30   0.7  3.8   56.419  0.95325  6911.00
a5 = 200
21   0.4  2.9   95.236  0.95377  267.98
22   0.4  2.9   95.585  0.96331  267.98
23   0.4  2.9   95.948  0.97101  267.98
25   0.4  2.9   96.706  0.98214  267.98
30   0.5  3.8   98.632  0.95325  6911.00
a5 = 400
21   0.3  2.9  173.55   0.95377  267.98
22   0.3  2.9  174.14   0.96331  267.98
23   0.3  2.9  174.74   0.97101  267.98
30   0.3  3.8  179.43   0.95325  6911.00

λ = 0.25
21   0.4  2.9   95.236  0.95377  267.98
22   0.4  2.9   95.585  0.96331  267.98
23   0.4  2.9   95.948  0.97101  267.98
25   0.4  2.9   96.706  0.98214  267.98
30   0.5  3.8   98.632  0.95325  6911.00
λ = 0.50
21   0.4  2.9  133.020  0.95377  267.98
22   0.4  2.9  133.360  0.96331  267.98
23   0.4  2.9  133.720  0.97101  267.98
27   0.4  3.2  135.200  0.97704  727.66
30   0.5  3.8  136.470  0.95325  6911.00

δ = 0.8
33   0.5  2.9   99.889  0.95502  267.98
34   0.5  2.9  100.190  0.96120  267.98
35   0.5  2.9  100.510  0.96659  267.98
40   0.5  3.4  102.250  0.95151  1484.00
δ = 1.0
21   0.4  2.9   95.236  0.95377  267.98
22   0.4  2.9   95.585  0.96331  267.98
23   0.4  2.9   95.948  0.97101  267.98
25   0.4  2.9   96.706  0.98214  267.98
30   0.5  3.8   98.632  0.95325  6911.00
δ = 3.0
3    0.3  3.2   86.336  0.97704  727.66
3    0.3  3.3   86.330  0.97103  1034.30
4    0.3  4.0   86.721  0.97725  1578.70
5    0.3  4.0   87.129  0.99662  1578.70

g = 0.001
21   0.4  2.9   91.945  0.95377  267.98
22   0.4  2.9   92.138  0.96331  267.98
23   0.4  2.9   92.347  0.97101  267.98
30   0.4  3.8   94.096  0.95325  6911.00
g = 0.01
21   0.4  2.9   95.236  0.95377  267.98
22   0.4  2.9   95.585  0.96331  267.98
23   0.4  2.9   95.948  0.97101  267.98
25   0.4  2.9   96.706  0.98214  267.98
30   0.5  3.8   98.632  0.95325  6911.00
g = 0.1
21   0.5  2.9  119.73   0.95377  267.98
22   0.5  2.9  120.86   0.96331  267.98
30   0.6  3.8  129.17   0.95325  6911.00

D = 0.2
21   0.3  2.9   52.480  0.95377  267.98
22   0.3  2.9   53.043  0.96331  267.98
23   0.3  2.9   52.998  0.97101  267.98
30   0.3  3.8   57.952  0.95325  6911.00
D = 2
21   0.4  2.9   95.236  0.95377  267.98
22   0.4  2.9   95.585  0.96331  267.98
23   0.4  2.9   95.948  0.97101  267.98
25   0.4  2.9   96.706  0.98214  267.98
30   0.5  3.8   98.632  0.95325  6911.00
D = 20
21   1.4  2.9  173.330  0.95377  267.98
22   1.4  2.9  173.400  0.96331  267.98
23   1.4  2.9  173.470  0.97101  267.98
30   1.6  3.8  174.030  0.95325  6911.00

sampling and a sharp increase in the average cost. When the rate λ at which the process goes to the out-of-control state increases, the average cost increases. A large process mean shift (δ) has significant effects on the design: it enhances the detection power of the control chart and the in-control average run length, and it reduces the average cost and the sample size. Finally, when the time D required to search for an assignable cause increases dramatically, the sampling interval is lengthened as well.

Table 4 shows the effects of the statistical constraints on the optimal design of the X control chart. When αU increases, the sample size decreases and the control limits become narrower, while the sampling interval is fairly robust. In addition, when pL increases, the sample size increases, but both the sampling interval and the control-limit width are robust.
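The strong effect of the shift magnitude δ on the required sample size can be checked directly from Eq. (1). A small sketch (our own helper; the (n, k) pairs are taken from the δ = 3.0 rows of Table 3):

```python
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def detection_power(n, k, delta):
    """Eq. (1): one-sample detection power for a shift of delta sigma."""
    return (normal_cdf(-k - delta * sqrt(n))
            + 1.0 - normal_cdf(k - delta * sqrt(n)))

# A three-sigma shift is caught almost surely even with tiny samples,
# whereas a one-sigma shift needs a sample size in the twenties
# (compare the delta = 3.0 and delta = 1.0 blocks of Table 3).
```

For instance, detection_power(5, 4.0, 3.0) ≈ 0.997 and detection_power(3, 3.2, 3.0) ≈ 0.977, while at δ = 1.0 a power of 0.95 requires n ≥ 21 even at k = 2.9.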

4. Conclusions

In this paper, a model for the design of an X̄ control chart from a multiple-criteria point of view has been presented. With this model, a set of design parameters (n, h, k) for the X̄ chart is chosen


Table 4. Effects of the statistical constraints on the solution to the illustration

Parameter  Value    n   h    k    Cost    Power    ARL0
αU         0.0005   27  0.4  3.5  97.501  0.95507  2149.3
                    28  0.5  3.5  97.842  0.96339  2149.3
                    29  0.5  3.5  98.143  0.97030  2149.3
                    30  0.5  3.5  98.455  0.97599  2149.3
                    30  0.5  3.8  98.632  0.95325  6911.00
αU         0.005    21  0.4  2.9  95.236  0.95377   267.98
                    22  0.4  2.9  95.585  0.96331   267.98
                    23  0.4  2.9  95.948  0.97101   267.98
                    25  0.4  2.9  96.706  0.98214   267.98
                    30  0.5  3.8  98.632  0.95325  6911.00
αU         0.05     18  0.4  2.5  94.559  0.95930    80.52
                    19  0.4  2.5  96.848  0.96848    80.52
pL         0.90     18  0.4  2.9  91.031  0.91031   267.98
                    19  0.4  2.9  92.770  0.92770   267.98
                    20  0.4  2.9  94.204  0.94204   267.98
                    21  0.4  2.9  95.236  0.95377   267.98
                    22  0.4  2.9  95.585  0.96331   267.98
                    22  0.4  3.4  95.846  0.90155  1484.0
pL         0.95     21  0.4  2.9  95.236  0.95377   267.98
                    22  0.4  2.9  95.585  0.96331   267.98
                    23  0.4  2.9  95.948  0.97101   267.98
                    25  0.4  2.9  96.706  0.98214   267.98
                    30  0.5  3.8  98.632  0.95325  6911.00
pL         0.99     28  0.4  2.9  97.892  0.99161   267.98
                    29  0.5  2.9  98.129  0.99353   267.98
                    32  0.5  3.3  99.080  0.99078  1034.3

based on data envelopment analysis; the approach provides the QC manager with a variety of choices for meeting long-run product-quality requirements and cost objectives concurrently. An industrial case illustrates the solution procedure. The sensitivity analysis indicates the following:

(1) The sample size needs to be adjusted according to the magnitude of the process mean shift.
(2) The sampling interval needs to be adjusted according to the time required to search for an assignable cause.
(3) The design parameters of the X̄ chart are moderately robust when the other model parameters are not too high.
(4) As the upper bound αU on the type I error probability increases, the sample size decreases and the control limits become narrower.
(5) As the lower bound pL on the detection power increases, the sample size increases.
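The selection step above treats each candidate (n, h, k) design as a decision-making unit with hourly cost as an input and detection power and ARL0 as outputs. As an illustrative simplification (a plain Pareto-dominance screen, not the paper's DEA formulation), one can already separate efficient from inefficient designs over those three attributes. The first three candidates below are rows from Table 4; the fourth is a hypothetical dominated design added for the example:

```python
from typing import NamedTuple, List

class Design(NamedTuple):
    n: int        # sample size
    h: float      # sampling interval (hours)
    k: float      # control-limit width (sigma units)
    cost: float   # hourly expected cost (input: lower is better)
    power: float  # detection power (output: higher is better)
    arl0: float   # in-control average run length (output: higher is better)

def dominates(a: "Design", b: "Design") -> bool:
    """True if a is at least as good as b on all three criteria
    and strictly better on at least one."""
    at_least = a.cost <= b.cost and a.power >= b.power and a.arl0 >= b.arl0
    strictly = a.cost < b.cost or a.power > b.power or a.arl0 > b.arl0
    return at_least and strictly

def efficient(designs: List["Design"]) -> List["Design"]:
    """Keep only the non-dominated (Pareto-efficient) designs."""
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

candidates = [
    Design(21, 0.4, 2.9, 95.236, 0.95377, 267.98),  # Table 4, pL = 0.95 rows
    Design(22, 0.4, 2.9, 95.585, 0.96331, 267.98),
    Design(30, 0.5, 3.8, 98.632, 0.95325, 6911.00),
    Design(21, 0.4, 2.9, 99.000, 0.95000, 267.98),  # hypothetical dominated design
]

front = efficient(candidates)
print(len(front))  # 3: the Table 4 rows survive; the hypothetical design is dominated
```

A full DEA treatment would instead solve a linear program per design to obtain an efficiency score, which also ranks designs that a pure dominance screen leaves incomparable.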


References

Anderson, D. R., Sweeney, D. J., & Williams, T. A. (1999). Statistics for business and economics (7th ed.). South-Western.
Charnes, A., Cooper, W. W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 429–444.
Charnes, A., Cooper, W. W., & Seiford, L. M. (1997). Data envelopment analysis: Theory, methodology, and application (3rd ed.). Dordrecht: Kluwer.
Duncan, A. J. (1956). The economic design of X̄ charts used to maintain current control of a process. Journal of the American Statistical Association, 51, 228–242.
Dyson, R. G., Thanassoulis, E., & Boussofiane, A. (1990). Data envelopment analysis. In L. C. Hendry & R. W. Eglese (Eds.), Tutorial papers in operational research. United Kingdom: Operational Research Society.
Pearn, W. L., & Chen, K. S. (1997). Capability indices for non-normal distributions with an application in electrolytic capacitor manufacturing. Microelectronics Reliability, 37(12), 1853–1858.
Saniga, E. M. (1989). Economic statistical control-chart designs with an application to X̄ and R charts. Technometrics, 31, 313–320.
Woodall, W. H. (1986). Weaknesses of the economic design of control charts. Technometrics, 28(4), 408–409.