
Engineering Costs and Production Economics, 18 (1989) 1-10

Elsevier Science Publishers B.V., Amsterdam - Printed in The Netherlands

REDUCING THE COST OF QUALITY THROUGH TEST DATA MANAGEMENT

Paul N. Manikas
GenRad, Inc., Production Test Division, Concord, MA 01742 (U.S.A.)

and Stephen G. Eichenlaub
Graduate School of Business Administration, Harvard University, Boston, MA (U.S.A.)

ABSTRACT

One of the best ways to improve the productivity of a manufacturing operation is through a systematic approach to quality management. This approach emphasizes fault prevention rather than fault detection. By shifting to a prevention-oriented view of quality management, the costs associated with providing a quality product are reduced. This paper evaluates the factors that influence the cost of quality, identifies the major areas that provide leverage for improvement, and illustrates how better management of test data translates into measurable improvement in process quality, resulting in lower cost-of-goods-sold.

INTRODUCTION

Competitive pressures have forced electronics manufacturers to look for ways to improve their productivity in order to maintain or increase profit margins. The challenge facing managers is the selection of an appropriate course of action from a vast array of proposed solutions. To make an intelligent evaluation of the merits of potential productivity improvement projects, a good starting point is defining the objective in terms of the specific results desired. In other words, just what does "increased productivity" mean?

Increased productivity defined

Productivity in pure economic terms is a measurement of the relationship of input, such as capital, raw materials and labor, to output, which is the goods and services produced. At the most fundamental level, productivity is the ratio of output to input:

Productivity = output / input

This basic equation is the backbone of all productivity measurement and principles, yet it fails to tell the whole productivity story. Measuring productivity is more than just counting the total output and dividing it by the total cost of the input. What should also be considered is the impact of product quality on marketability, customer satisfaction, warranty costs, and inventory levels. Product quality improvements can lead to increased profit margins. Output should therefore be considered in terms of quantity plus quality. As a result, productivity is increased when product quality improves while cost per unit output decreases.

QUALITY COSTS LESS

There are as many definitions of quality as there are people offering them. Quite simply, a "quality product" is one that consistently meets the customer's expectations. To ensure delivery of quality products, manufacturers must build quality into their products and services. In other words, it takes a "quality process" to produce a "quality product". The demand for quality amounts to a shift from the detect-and-fix approach toward a prevention-oriented view of quality management. From a production perspective, this means a company-wide commitment to eliminate errors at every stage of the product development process: engineering, process design, and manufacturing. It also means working closely with suppliers to eliminate defects from all incoming parts, subassemblies, and materials.

Eliminating defects in the process will eliminate defects in the product and will thus reduce the number of products requiring repair. The rework and retest costs associated with repair are extra costs which reduce profit margins, and they are simply a direct result of not building the product right the first time. According to Juran, "The costs resulting from defects are a gold mine from which profitable digging could be done". A reduction of defects resulting from improvements in process quality yields substantial cost savings. Quality isn't a cost sink; it's a profit source. Therefore, quality improvement of the manufacturing process should be considered the first step toward increased productivity, a step which will yield a higher quality product at a lower unit cost.

Measuring quality

Quality improvement is measured by the Cost of Quality (COQ). Quality costs include all expenses incurred to assure and assess conformance with design specifications and to pay the consequences of inferior quality. The COQ concept provides a tool to estimate the cost reduction from a program of productivity improvement and to quantify the results of process improvement. To apply this concept in practice, it is necessary to understand the fundamentals of the total cost of quality concept.

The total cost of quality concept

The total COQ is the sum of three principal cost elements: the cost to appraise quality, the cost to prevent defects, and the cost due to the existence of failures. Appraisal costs are expenses incurred to assure the product conforms to design specifications and is fit for use. They can result from incoming inspection, product inspection and test, and maintenance and operation of test equipment. Prevention costs involve expenses incurred to eliminate or reduce failure costs. They include costs to train production personnel how to avoid the production of defective products. Another type of prevention cost includes the expenses associated with activities to collect, analyze, and report quality data and to correct process malfunctions. Failure costs result from defects which are found prior to (internal costs) and after (external costs) shipment to the customer. They include the expenses to rework and retest defective units, to scrap subquality material, to honor product warranties, and to pay the consequences of dissatisfied customers. In most companies, the total of the quality costs is a very large sum of money, sometimes 20% of the sales revenue.

The graph of Fig. 1 provides some useful insight into the principal quality costs. The Total Cost of Quality (TCOQ) has the following characteristics:
(1) When little is being spent to prevent and detect defects, many defective products are produced. The result is a high failure cost and, hence, a high total cost of quality.
(2) To improve conformance, appraisal and prevention measures are initiated and increased. Failure costs decline, and more importantly, the total cost of quality decreases. Moving from left to right on the 'Quality of Conformance' axis, even a small increment in the prevention costs can significantly reduce the total cost of quality.
(3) Due to the variability inherent within every production process, there is a point of diminishing returns where substantial costs are required to identify or prevent the few remaining defects. Although the failure costs continue to decrease, the total cost of quality increases.
(4) The minimum point of the TCOQ curve represents the optimum cost of quality (OCOQ). All points on the curve to the left of the optimum indicate a potential application for a defect prevention project. All points to the right are characterized by excessive or undue costs of control. In this instance, the improvement project should study all costs of prevention and appraisal for possible reduction or elimination of any unnecessary costs.
(5) When TCOQ > OCOQ, the operating costs are much higher than they could be, which means that the profit margin on each unit of the product is much lower than it should be.

At an intuitive level, reaching the optimum cost of quality is achievable by minimizing the failure costs while optimizing the cost of control. In striving to reduce failure costs, care must be taken not to increase total quality costs. This can occur when appraisal and prevention costs are increased disproportionately to reduce failure costs. Reduction of quality costs is not an end in itself; it is a means to the end of improving the overall company profitability.

Fig. 1. Optimum cost-of-quality model. (Axes: quality of conformance, from 100% defective to 100% good, versus cost of failure and cost of control.)
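To make the shape of this tradeoff concrete, the short Python sketch below models the Fig. 1 curves numerically. Both cost functions and all constants are invented for illustration (they are not from the paper); they are chosen only so that failure costs fall, and control costs rise, as conformance approaches 100%.

# Illustrative sketch of the optimum cost-of-quality model of Fig. 1.
# The cost curves are hypothetical, chosen only to show the shape:
# failure costs fall and control (appraisal + prevention) costs rise
# as the quality of conformance q approaches 1.

def failure_cost(q):
    return 100.0 * (1.0 - q)       # declines as conformance improves

def control_cost(q):
    return 5.0 * q / (1.0 - q)     # rises steeply near 100% conformance

def total_coq(q):
    return failure_cost(q) + control_cost(q)

# Scan the conformance axis for the minimum TCOQ, i.e. the OCOQ.
qs = [i / 1000.0 for i in range(1, 1000)]
q_opt = min(qs, key=total_coq)
print(f"optimum conformance ~ {q_opt:.3f}, TCOQ ~ {total_coq(q_opt):.1f}")

The minimum falls short of 100% conformance, reproducing point (3) above: the last few defects are disproportionately expensive to prevent or detect.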

QUALITY IMPROVEMENT THROUGH TEST DATA MANAGEMENT

Lasting improvement comes only from a program of action that has a clear-cut goal. The primary objective of a productivity improvement project is to reduce costs systematically, while improving quality over the life cycle of the product through process refinement. Automation per se is not the solution. If you do not eliminate defects in the process, you will just end up with a pile of defective products. What is required is a program that collects relevant data and performs analysis in such a way as to reveal the true causes of the defects. Armed with these structured data, managers can fine-tune the process to reduce or eliminate the causes.

Leveraging test information

Within the electronic manufacturing environment, information from the test and repair process serves as the basis for action to improve product quality and overall productivity. Most manufacturers, recognizing the importance of this information, have implemented manual data collection systems. Action is seldom taken on these data, however, because of the drawbacks inherent in this method of collection:
- vulnerability to human error
- inability to handle large amounts of data in a timely manner
- reliance upon each operator for data consistency
- incomplete recording of the nature of the data

Full utilization of this information has become feasible with recent advances in both computer technology and high-speed communications networks. Figure 2 depicts a state-of-the-art Test Data Management (TDM) system where test and repair data are automatically collected from each test stage via direct connections to a local area network or by entering the data into an intelligent terminal. Integration of this information at a centralized data base provides real-time visibility into the condition of the manufacturing process. An automated data collection and management system allows managers to make decisions faster and with more precision, rather than by the seat of the pants.

The principal goal of the TDM approach is fault prevention. Many defects can be prevented from recurring because vital information on the nature and causes of defects can be fed back to earlier stages of production or design, or to component suppliers, and can be used to systematically adjust the process upstream. This more extensive utilization of test information leads to improvements in both quality and productivity.

Fig. 2. Block diagram of a state-of-the-art test data management system.

The role of failure analysis

To apply this information as a lever to reduce the number of defects, it is necessary to know not only the kinds of defects and their probability of occurrence, but also the apparent causes of those defects. Since every defect has multiple possible causes, it is of no use just to list the total number of defects. Failure analysis must play an active role in achieving process improvements, where failure analysis is the process of determining the origin or cause of a test failure observed on the device under test. The basic requirement is to find the number of defects caused by each source so that you can take appropriate action. In all cases, managers have to produce results with limited manpower, time, and capacity; thus, it is necessary to structure the analysis in such a fashion that the areas worthy of attention are clearly evident.

Most industrial processes produce items whose fault characteristics fall into a distribution which can be described by the Pareto principle (also known as the 80/20 rule): 80% of the defects are traceable to 20% of the possible failure causes. The analysis should point out those vital few problem areas which contain the bulk of the opportunity for improvement in process quality. A recommended approach for this type of analysis is to group the major possible causal factors of manufacturing process defects into such categories as bareboard, assembly, component, and solder defects. These defect categories can then be divided by their nature into defective items. For example, the assembly defect category could include the following items: reversed component, missing component, wrong component, bent lead, and incorrect wiring. By analyzing the data in this fashion, the resulting defect summary report pinpoints the manufacturing area where defects exist, followed by a detailed defect analysis to determine the failure mechanisms which account for most of the defects. Correcting these process problems will result in improved production yields and a reduction in bad boards that must be repaired. Failure analysis serves an important role in a TDM system by shifting emphasis from defect detection to defect prevention, with the ultimate goal to improve product quality and overall productivity.
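As a minimal sketch of the defect summary report described above, the Python fragment below groups repair records first by category and then by failure mechanism. The records and category names are invented examples of what a TDM system might collect at its repair stations; only the grouping idea comes from the paper.

# Pareto-style defect summary; the repair records are invented examples.
from collections import Counter

repairs = [
    ("assembly", "missing component"), ("solder", "bridge"),
    ("assembly", "reversed component"), ("solder", "bridge"),
    ("assembly", "missing component"), ("component", "out of tolerance"),
    ("solder", "bridge"), ("assembly", "missing component"),
    ("bareboard", "open trace"), ("solder", "insufficient solder"),
]

# Defect summary: which manufacturing area do the defects come from?
by_category = Counter(category for category, _ in repairs)

# Detailed defect analysis: which failure mechanisms dominate?
by_cause = Counter(repairs).most_common()

print("defects by category:", dict(by_category))
for (category, cause), count in by_cause:
    print(f"{count:2d}  {category:10s} {cause}")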

MEASURING TDM EFFECTIVENESS

The benefits from a TDM system can be organized into two categories, "hard" and "soft". The hard benefits are those that produce measurable savings from established baseline data or high confidence estimates. The soft benefits are those that are of a qualitative nature, such as better management control and improved interdepartmental communications. To measure the effectiveness of the TDM strategy as an approach to improved productivity, it is necessary to focus on the hard savings. The Cost of Quality concept can be applied here as a measurement tool to estimate the cost savings. By applying straightforward analytical techniques, the variables that impact the COQ were studied to determine which can be improved by TDM. We identified four key areas that provide a good, yet conservative estimate of the realizable cost savings from an implementation of the TDM strategy. These key cost areas are:
1. Analysis and repair costs
2. Deferred capacity expansion
3. Work-in-process inventory carrying costs
4. Automatic data collection
Let us examine each area in detail to identify where the cost savings are derived.

Analysis and repair costs

Major cost savings can be realized by reducing the labor associated with analyzing and repairing defective boards. The diagram in Fig. 3 depicts a typical test and repair loop and defines the parameters which constitute the analysis/repair cost. The mathematical expression for the analysis-and-repair labor cost per board is:

A/R cost per board = (Faults per board) × (Analysis + Repair time per fault) × (Repair loops) × (Labor rate) = F(A+R)LB

Fig. 3. Typical test and repair loop.

The labor rate applied in this formula is an average value of the burdened hourly labor rates for analysis and repair, to simplify the evaluation. This cost area can be a substantial fraction of the total cost of quality because it is a recurring cost for each defective board coming out of manufacturing. A TDM system will generate cost savings by reducing each of four variables:
1. Faults per board (F): The cost of analysis and repair is tremendously sensitive to faults per board for two reasons. First, each fault on a board typically requires a separate analysis and repair action. Second, as F rises, good board yield falls and more boards will fail, requiring rework and retest (see the Appendix). Process improvements aimed at eliminating the major causes of board defects will greatly reduce this variable.
2. Analysis time (A): A TDM system will generate an empirical database that relates failure messages to the defect causes. Providing this information to the analysis operator aids the selection of the appropriate defect cause, thus reducing analysis time.
3. Repair time (R): Automatic correlation of each defect cause to the desired repair action eliminates any second-guessing by the repair operator.
4. Repair loops (L): The number of times that a faulty board circulates around the repair loop is largely a function of the accuracy and resolution of the diagnostic message. Misinterpreting these messages can cause 5-15% more boards to pass around the loop. Defect summary and repair performance reports from a TDM system will identify board types with unusually high repair loop counts and will indicate inaccuracies in the test program.

To estimate the cost savings per board, let α, β, and ψ equal the fractional improvements in faults per board, analysis and repair time, and repair loops, respectively. As a result, the analysis and repair labor cost per board with a TDM system is:

A/R cost per board (with TDM) = F(1−α)(A+R)(1−β)L(1−ψ)B

To simplify this equation conservatively, let x equal the average value of α, β, and ψ. The formula now becomes:

A/R cost per board (with TDM) = F(A+R)LB(1−x)^3 = F(A+R)LB(x_c)^3, where x_c = 1−x

Therefore, we can calculate the cost savings per board:

A/R cost savings per board = [F(A+R)LB] − [F(A+R)LB(x_c)^3] = F(A+R)LB(1 − x_c^3) = (current A/R cost per board)(1 − x_c^3)

Fig. 4. A test data management system reduces repair labor costs.

Figure 4 shows how a percentage improvement in the variables that determine the analysis/repair labor cost per board translates into cost savings. For example, if the current cost of analysis and repair is $10/board, an 8% improvement will yield a cost reduction of $2.20 per board. An 8% improvement via TDM has proven to be quite reasonable. The total cost savings per year is then calculated by multiplying this figure by the annual board volume. In many cases, this dollar savings alone is enough to have the TDM system pay for itself in less than one year.
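A minimal sketch of this calculation, using the paper's own worked figures ($10/board current cost, an 8% improvement); the function and parameter names are ours:

# Analysis/repair savings per board, per the formulas above.

def ar_cost_per_board(F, A, R, L, B):
    """Base analysis/repair labor cost per board: F(A+R)LB."""
    return F * (A + R) * L * B

def ar_savings_per_board(current_cost, x):
    """Savings when F, (A+R), and L each improve by fraction x."""
    xc = 1.0 - x                   # x_c in the paper's notation
    return current_cost * (1.0 - xc ** 3)

print(f"${ar_savings_per_board(10.00, 0.08):.2f} per board")
# -> $2.21; the paper rounds this to $2.20.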

Deferred capacity expansion

At each test stage, tester capacity is available to test production boards plus repaired boards. Reducing the number of boards requiring retest will expand tester capacity to accommodate growth in the production board volume. This means that the purchase of additional testers can be deferred, freeing capital for alternative uses. In addition, reducing the number of boards requiring rework and retest frees up manufacturing floor space for more productive uses. The number of retests is a function of the number of faulty boards multiplied by the number of times each faulty board circulates around the repair loop. A statistical relationship exists between the average faults per board and the good board yield: as the faults per board rises, good board yield falls and more boards will fail, requiring retest and reducing throughput (see the Appendix). Reducing the number of faults per board and the number of repair loops will have a positive impact on the capacity of the test system.

The cost savings equal (cost of additional tester) × (cost of capital) × (length of deferral). The first two terms are straightforward. The third term can be estimated using the compound interest formula:

FV = PV(1 + i)^n

where FV = maximum capacity, PV = current throughput, i = annual board volume growth rate, and n = years to reach capacity. Solving for n:

n = log(FV/PV) / log(1 + i)

Reducing the faults per board and repair loop parameters decreases the total board test requirements by a factor ρ:

PV (with TDM) = ρ PV (without TDM)

where 0 ≤ ρ ≤ 1. The increase in tester capacity, Δn, can be calculated:

Δn = n (with TDM) − n (without TDM)
   = log(FV/ρPV)/log(1 + i) − log(FV/PV)/log(1 + i)
   = [log(PV) − log(ρPV)]/log(1 + i)
   = log(1/ρ)/log(1 + i)

Figure 5 shows how the percentage decrease in the total test time required for production and repaired boards translates into capacity increase at various annual board volume growth rates.

Fig. 5. Decreased test time from the implementation of a test data management system defers the purchase of additional testers.
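A short sketch of this estimate, assuming illustrative values: a 10% reduction in total test requirements (ρ = 0.90), 25% annual volume growth, a $250K tester, and an 18% cost of capital. None of these figures come from the paper.

# Deferral of a tester purchase, per the formulas above.
import math

def years_of_deferral(rho, growth_rate):
    """Extra years before capacity is reached: log(1/rho)/log(1+i)."""
    return math.log(1.0 / rho) / math.log(1.0 + growth_rate)

dn = years_of_deferral(rho=0.90, growth_rate=0.25)
tester_cost = 250_000          # hypothetical cost of an additional tester
cost_of_capital = 0.18         # hypothetical annual cost of capital
savings = tester_cost * cost_of_capital * dn
print(f"deferral = {dn:.2f} years, savings ~ ${savings:,.0f}")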

Work-in-process inventory

Ideally, a company would prefer not to hold any inventories, since holding them means tying up capital in goods that cannot improve the company's earnings, as a new piece of machinery can. By financing the inventories, the company foregoes the opportunity to earn a better return by using its cash in some other, income-generating way. These opportunity costs are typically a large portion of inventory carrying costs. To control work-in-process inventory levels, the manager must find ways to shorten production-cycle times, or ways to break existing bottlenecks. Within the test and repair environment, a bottleneck exists at board repair.

Since analysis and repair time is a small fraction of the total time that a board spends in the repair loop, the major savings in WIP carrying costs comes from reducing the number of times that a (bad) board passes around the repair loop (as measured by ψ). The savings per bad board can be quantified as:

(reduction in number of test and repair loops) × (avg. number of boards in loop) × (avg. time in loop) × (carrying cost in %) × (value of a board)

To convert this figure into savings per board, multiply by the percentage of bad boards (1 − Yield). The savings can vary over a wide range, from less than ten cents to over two dollars per board. The example shown in Fig. 6 demonstrates typical savings.

Fig. 6. Example of work-in-process cost savings from TDM implementation.
LET: reduction in number of repair loops = 5%; number of test stages = 3; avg. number of boards in each repair loop = 200; avg. time in repair loop = 1 day; carrying cost = 2%/month = 7 × 10^-4/day; average value of each board = $100; average first-pass board yield = 40%.
Savings per board = (0.05)(3)(200)(1)(7 × 10^-4)($100)(1 − 0.4) = $1.26
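The Fig. 6 arithmetic is straightforward to reproduce; the sketch below restates the model with the same inputs (the function and parameter names are ours, not the paper's):

# Work-in-process carrying-cost savings, using the Fig. 6 inputs.

def wip_savings_per_board(loop_reduction, n_loops, boards_in_loop,
                          days_in_loop, carrying_cost_per_day,
                          board_value, first_pass_yield):
    """Savings per board from fewer trips around the repair loop."""
    per_bad_board = (loop_reduction * n_loops * boards_in_loop *
                     days_in_loop * carrying_cost_per_day * board_value)
    return per_bad_board * (1.0 - first_pass_yield)  # spread over all boards

savings = wip_savings_per_board(
    loop_reduction=0.05,           # 5% fewer repair loops
    n_loops=3,                     # number of test stages
    boards_in_loop=200,            # avg. boards in each repair loop
    days_in_loop=1,                # avg. time in loop
    carrying_cost_per_day=7e-4,    # 2%/month ~ 7e-4/day
    board_value=100.0,             # $ per board
    first_pass_yield=0.40,
)
print(f"WIP savings ~ ${savings:.2f} per board")  # -> $1.26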

Automatic data collection

The TDM approach provides savings by producing quality reports automatically. These reports are available in real-time for use by management as well as floor supervisors. The cost savings can be quantified as the elimination of the labor-hours previously used to manually collect data, enter data into the plant computer, and compile reports. In addition, there is often a reduced load on the plant's existing computer since a dedicated computer maintains TDM data. This frees up plant computer capacity for other uses, deferring computer expansion requirements, thus saving money. For example, if a TDM system allows the equivalent of two full-time people to perform other tasks, then the annual savings might be 2 × ($40K burdened yearly labor cost per person) = $80K/year. An additional one-time savings would also occur if the TDM system lightened the load on the plant computer to the point where a $100K memory expansion project could be deferred for one year. The savings in this area might be ($100K) × (18%/year cost of capital) × (1 year) = $18K.

CONCLUSIONS

In today's profit-oriented climate, the companies that can produce and deliver the highest quality goods and services at the lowest cost are the most successful. For most electronic manufacturers, the Test Data Management approach is more effective for improving product quality and overall productivity than token robots or other isolated quick fixes. The reason: TDM will enable manufacturers to create higher quality products and get them to market faster, with higher profit margins. TDM accomplishes this by fully utilizing the information generated during the test and repair of faulty boards. With automated data collection, fault analysis highlights those vital few process problems which promise the greatest return on efforts to improve productivity. Leveraging test information to fine-tune process quality results in a substantial reduction of the cost associated with building a high quality product that consistently meets your customers' expectations. The TDM approach to productivity improvement has many attractive features, and no doubt the future holds even more promise for the expanded capabilities of TDM systems.

FURTHER READING

Davis, B., 1982. The Economics of Automatic Testing. McGraw-Hill.
Ishikawa, K., 1982. Guide to Quality Control. Asian Productivity Organization.
Juran, J.M. and Bingham, R.S., 1974. Quality Control Handbook. McGraw-Hill, 3rd edn.
MacAloney, B. and Littlejohn, P., 1982. Manufacturing productivity: automated vs. manual test data management systems. In: 1982 International Test Conference Digest of Papers, Philadelphia, PA, November 15-18.
Parsons, R., 1974. Statistical Analysis: A Decision-Making Approach. Harper and Row.
Solecky, P. and Itsu, F., 1982. Board diagnosis: A current assessment and direction for future improvement. In: 1982 International Test Conference Digest of Papers, Philadelphia, PA, November 15-18.

APPENDIX

Derivation of the yield formula

In order to arrive at the relationship between faults per board and the yield of good boards, imagine that D faults are randomly and uniformly distributed over N boards. If we call the average number of faults per board F, then F = D/N. The probability that one of those faults will fall upon a given board is 1/N. The probability that one of those faults will not fall upon a given board is 1 − 1/N. Assuming that there are D faults on these N boards, then the probability that all D faults will not fall upon a given board is

Y = (1 − 1/N)^D

This is the probability that the board is free from defects, i.e. the yield. Substituting for D, the above equation can be written

Y = (1 − 1/N)^(NF)

It is convenient to remove the dependence on the lot size N. This can be done by going to the limiting case where N goes to infinity:

Y = lim (N→∞) (1 − 1/N)^(NF) = (1/e)^F = e^(−F)

Fig. A.1. Relationship between faults per board and the yield of good boards.

If the fraction of faults detected by a particular test stage is less than one, the yield of apparently good boards will be greater than that predicted by the above equation. This apparent yield can be calculated from the same equation by multiplying F by the fraction of faults covered, referred to as the fault coverage. Figure A.1 shows how yield varies with faults per board and with fault coverage.
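The appendix model is easy to tabulate; in the sketch below the 90% fault-coverage value is an arbitrary illustration, while the formula Y = e^(−F · coverage) is the one derived above.

# Good-board yield versus faults per board, per the appendix.
import math

def good_board_yield(faults_per_board, fault_coverage=1.0):
    """Y = exp(-F * coverage); coverage = 1 means every fault is detected."""
    return math.exp(-faults_per_board * fault_coverage)

for F in (0.5, 1.0, 2.0):
    true_yield = good_board_yield(F)            # perfect fault coverage
    apparent = good_board_yield(F, 0.9)         # 90% fault coverage
    print(f"F={F:.1f}: yield={true_yield:.2f}, apparent={apparent:.2f}")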