Analyzing variability in continuous processes

European Journal of Operational Research 156 (2004) 312–325 www.elsevier.com/locate/dsw

Production, Manufacturing and Logistics

Kumar Rajaram a,*, Andreas Robotis b

a The Anderson School, University of California Los Angeles, Box 951481, Los Angeles, CA 90095-1481, USA
b INSEAD, Boulevard de Constance, 77305 Fontainebleau Cedex, France

Received 16 May 2002; accepted 20 December 2002

* Corresponding author. Tel.: +1-310-825-4154; fax: +1-310-206-3337. E-mail addresses: [email protected] (K. Rajaram), [email protected] (A. Robotis).

Abstract

We analyze the impact of variability on a continuous flow production process. To perform this analysis, we consider an n-stage, serial continuous process in which variability is introduced at each stage. We develop a continuous time model to capture the propagation of variability through the system and use this model to calculate the mean and the variance of the distribution of the output from this process. These results are then used to determine the optimal decisions for variability reduction when designing and operating these processes.
© 2003 Elsevier B.V. All rights reserved.

Keywords: Continuous processes; Production; Variability; Process design; Operational improvement

1. Introduction

Continuous flow production processes are prevalent in several major manufacturing industries. For instance, a majority of processes in the intermediate food processing, pharmaceutical, paper, chemical and petrochemical industries fall into this category. These processes are constructed to reliably produce large volumes of well-established products using conventional and tested technology. Since building continuous processes requires major capital investments, it is crucial that they
consistently produce high volumes of output at the correct quality level. Output variability at these processes has a significant impact on the process economics defined by operational costs, such as raw material, labor, production, energy and holding costs, and productivity measures such as yield. In addition, output variability affects market effectiveness (measured by product quality, delivery time to customers and breadth of the product portfolio), environmental factors and, finally, the development of process knowledge, which is crucial to innovation and to improvement at these processes.

As a direct result, there have been systematic and broad-ranging initiatives to reduce output variability in these processes. Broadly speaking, these initiatives can be classified as technological or operational. Technological initiatives include new process technology and automation to control these systems. Operational initiatives include
developing an effective interface between the operators and the process control system (Rajaram and Jaikumar, 2000, 2002a), running these processes continuously and shutting down only a few times a year for scheduled maintenance, producing basic grade products in long campaigns to minimize product switchovers (Rajaram and Karmarkar, 2002b), meeting customer demand by blending these basic grade products (Karmarkar and Rajaram, 2001) and, finally, using techniques of statistical process control to monitor and remove root causes of process variability (Carr, 1999).

To understand the impact of these initiatives, the first author conducted a focused study from 1993 to 2000 at over 30 continuous process operations in the petrochemical refining, food processing and pharmaceutical industries at plants located in Europe, North America and Asia. We found that, despite significant capital and organizational investments at these processes, there was significant output variability. Corrected for demand seasonality and production sequences, we found that the coefficient of variation of daily output ranged from 10% to 50%, averaging around 25%. The reasons for this output variability ranged from technological factors, including the choice of process technology and process automation systems, to operational factors such as operational procedures, operator reaction to output variability at individual stages, procedures for handling disruptions and managerial attitudes to problem solving and process development. We found that processes with more advanced process control systems that provided more flexibility to change parameters displayed a greater degree of output variability. In addition, output variability was not generated from obvious sources such as technological or control failures and breakdowns. Rather, we found that small levels of variability at each stage due to operational procedures seemed to result in amplified output variability.

These observations motivated us to analyze the impact of variability in continuous processes. Our focus is to analyze how variability at individual stages is propagated in the process and how it impacts the variability of the output. To perform this analysis, we use variance as a measure of
variability and model the propagation of variability using a continuous time model. This model is used to calculate the mean and variance of the distribution of the output from this process. These results are then used to determine the optimal decisions for variability reduction when designing and operating these processes.

There have been several streams of research that have analyzed the impact of variability on an operating system. A comprehensive summary is provided by Hopp and Spearman (1996). However, much of this analysis uses queuing models and approximations designed for discrete processes, with finite and random inter-arrival and service times. The performance measure is the variability in departure or completion time of each individual product. The propagation of variability through a system has been studied in the more general context of industrial dynamics by Forrester (1961) and more recently in its effect on supply chain management by Lee et al. (1997). However, we have not found any literature that addresses this problem in the context of continuous process manufacturing. In addition, the queuing model-based analysis is not directly applicable as it is not an adequate physical representation of these processes, which have continuous streams of product that are not naturally divisible into individual units, very small and constant inter-arrival times, and variability at individual stages that is reflected in output rates rather than service times. Further, in continuous processes, the more natural performance measure is the level of variability in the product flow rate at the output of these processes. Consequently, to capture the unique characteristics of this problem, we develop a continuous time model of this process.

This paper is organized as follows. In the next section, we present the model for variability propagation and use it to calculate the mean and variance of the output distribution. We use these results to quantify the cost impact of variability in these processes. In Section 3, we use the results to understand the design and operational implications for variability reduction. In Section 4, we present an example to illustrate these ideas. In the concluding section, we summarize the main lessons
of our work and present directions for future research.

2. Modeling the propagation of variability

To model the propagation of variability, we consider an n-stage, serial continuous flow production process, in which stages are indexed by $i \in I = \{1, \ldots, n\}$. Each stage has its own characteristics that transform the input into an output for the downstream stage. The input to the process is a continuous stream of raw material to the first stage. We assume that this input is a stationary stochastic process with flow rate distribution $u$ with mean $E_u$. The variability of the flow rate distribution and the operation of the first stage induce variability at this stage that we refer to as noise. We represent this noise as a stationary distribution with mean 0 and variance $V_1$. In addition, let $V_i$ be the noise generated due to the operation of the $i$th stage of this process, where $i = 2, \ldots, n$. The noise at each of the other stages is also represented as a stationary distribution with mean 0 and variance $V_i$. We assume that noise across stages is not correlated since different stages are often controlled by independent control systems and procedures. The output of this process corresponds to the output from stage $n$. We represent the output flow rate distribution by the random variable $y$ with mean $E_0$ and variance $V_0$. These notations are shown in Fig. 1.

Fig. 1. A multistage process with output variability. (Input $u$ enters stage 1, with characteristic constant $K_1$ and noise variance $V_1$; material flows through stages $2, \ldots, n$, each with constant $K_i$ and noise variance $V_i$, producing output $y$.)

At each stage, we shall assume that this process follows linear first order dynamics, implying that there is only one stream of raw material entering the process and one stream of product exiting the process at the last stage. In addition, this implies that there can be no catastrophic changes in steady state and that, once the process is started up and after reaching steady state, the output is the same regardless of the initial starting conditions. These assumptions are consistent with the design and steady state operating conditions of the industrial continuous processes we consider in this paper.

The dynamics at the $i$th stage of a first order linear system, with state $x_i$, input $u_i$ and output $y_i$, can be described by the equation
$$\dot{x}_i(t) = -a_i x_i(t) + b_i u_i(t), \qquad y_i(t) = x_i(t), \tag{1}$$
where $\dot{x}_i(t) = \mathrm{d}x_i(t)/\mathrm{d}t$, while $a_i$ and $b_i$ are constants. If $F(t)$ is a piecewise continuous function and $|F(t)| < M e^{\alpha t}$, where $t \ge T$ and $M$, $\alpha$ and $T$ are positive constants, then the Laplace transform of $F(t)$ is defined as
$$\mathcal{L}\{F(t)\} = f(s) = \int_{0}^{\infty} e^{-st} F(t)\, \mathrm{d}t.$$

By taking the Laplace transform of (1), we can transform the dynamics from the time domain to the frequency domain. This transformation is convenient for our analytical exposition. The dynamics of each stage can then be represented by the transfer function
$$H_i(s) = \frac{b_i}{s + a_i}, \qquad y_i(s) = H_i(s) u_i. \tag{2}$$

It is well known (Ogata, 1996) that the output of this system at any instant of time is
$$y_i(t) = y_i(t_0) e^{-a_i (t - t_0)} + e^{-a_i (t - t_0)} \int_{t_0}^{t} b_i u_i\, e^{a_i (\tau - t_0)}\, \mathrm{d}\tau. \tag{3}$$
Expanding the second term in Eq. (3) we get
$$y_i(t) = y_i(t_0) e^{-a_i (t - t_0)} + \frac{b_i u_i}{a_i} - \frac{b_i u_i}{a_i} e^{-a_i (t - t_0)}. \tag{4}$$

Since we are concerned about the steady state performance of these processes after start up, at which point $t \gg t_0$, the term $e^{-a_i (t - t_0)}$ goes to zero exponentially. Thus, we can neglect this term in (4) and the output at the $i$th stage of this system is
$$y_i(t) = \frac{b_i u_i}{a_i}. \tag{5}$$
Since from (2), $y_i(s) = H_i(s) u_i$, we get
$$H_i(s) = \frac{b_i}{a_i} = K_i \quad \forall i, \tag{6}$$
where $K_i \ge 0$ is a constant independent of $s$.
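To make the steady state relation in Eqs. (5) and (6) concrete, the following sketch (not part of the original paper) discretizes the stage dynamics of Eq. (1) with a simple Euler scheme and checks that a single stage settles to $(b_i/a_i)\,u_i = K_i u_i$. The constants a, b, u, the step size and the horizon are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: Euler discretization of the first order stage dynamics
# x_dot = -a*x + b*u with a constant input, illustrating that the output
# settles to the steady state value (b/a)*u = K*u of Eqs. (5)-(6).

def simulate_stage(a, b, u, dt=0.01, t_end=10.0, x0=0.0):
    """Simulate y(t) = x(t) for a single first order stage with constant input u."""
    n_steps = int(t_end / dt)
    x = x0
    for _ in range(n_steps):
        x += dt * (-a * x + b * u)   # forward Euler step of Eq. (1)
    return x

a, b, u = 2.0, 1.5, 20.0             # illustrative, assumed values
y_steady = simulate_stage(a, b, u)
K = b / a                            # characteristic constant of the stage, Eq. (6)
print(f"simulated steady-state output: {y_steady:.3f}")
print(f"K * u = {K * u:.3f}")        # both should be close to 15.0
```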


We define $K_i$ as the characteristic constant of stage $i$ and $K = \prod_{i=1}^{n} K_i$ as the characteristic constant of the process. The characteristic constant typically depends upon the physical nature of the stage or process. For instance, for stages corresponding to unit processes such as evaporation or filtration, for which there is mass reduction, $K_i$ is less than one. On the other hand, for stages that involve mass gain due to reactions, such as those found in catalytic crackers or reactors, $K_i$ is greater than one. Finally, for processes such as refining and distillation, typically $K$ is less than one, while for processes such as extraction and mixing, $K$ is greater than one. More details on these types of processes can be found in Perry et al. (1984) and Smith et al. (1993).

The mean and variance of the output distribution for a process whose dynamics are represented by (1) are given by the following propositions.

Proposition 2.1. The mean of the output distribution $y$ is given by
$$E_0 = \prod_{i=1}^{n} K_i\, E_u. \tag{7}$$

Proof. All proofs are provided in Appendix A. □

Proposition 2.2. The variance of the output distribution $y$ is given by
$$V_0 = \sum_{i=1}^{n} V_i \prod_{p=i}^{n} K_p^2. \tag{8}$$
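As an illustration of Propositions 2.1 and 2.2, the following sketch (not from the paper) evaluates the closed-form output mean and variance for the three-stage numbers used later in the Section 4 example ($E_u = 20$, $K = (1/3, 3/4, 4)$, $V = (2, 3, 4)$).

```python
import numpy as np

# Sketch of Propositions 2.1 and 2.2: closed-form mean and variance of the
# process output for a serial line with characteristic constants K_i and
# stage noise variances V_i.

def output_mean(K, E_u):
    """Proposition 2.1: E_0 = prod_i K_i * E_u."""
    return np.prod(K) * E_u

def output_variance(K, V):
    """Proposition 2.2: V_0 = sum_i V_i * prod_{p=i..n} K_p^2."""
    K = np.asarray(K, dtype=float)
    return sum(V[i] * np.prod(K[i:] ** 2) for i in range(len(K)))

K = [1.0 / 3.0, 3.0 / 4.0, 4.0]
V = [2.0, 3.0, 4.0]
print(output_mean(K, 20.0))      # 20.0, since prod(K) = 1
print(output_variance(K, V))     # 93.0, matching Table 1 for sequence (1,2,3)
```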

2.1. Cost implications of output variability

To understand the cost implications of variability, we consider a continuous process that is configured to produce an output of $y^*$ units/time. However, due to the variability in the process, the actual output/time is a distribution $y$ with density and distribution functions $f(y)$ and $F(y)$ respectively. Deviations of the actual output from the configured output $y^*$ lead to inefficiencies, which in turn lead to higher operational and environmental costs. To capture these costs, we let $O$ represent the unit cost of producing in excess of $y^*$ and $U$ represent the unit cost of producing below $y^*$. In addition, we assume that this process is required to meet a demand level of $D$ units/time. The unit cost of not meeting demand is represented by a backorder cost $b$, while the cost of producing more than demand results in a unit holding cost $h$. We assume that $O, U, b, h \ge 0$. The expected cost of deviating from the configured output level and demand is represented by
$$C(y^*, D) = E_y[C(y^*, D, y)], \tag{9}$$
where $C(y^*, D, y) = O(y - y^*)^+ + U(y^* - y)^+ + b(D - y)^+ + h(y - D)^+$, and $(a)^+ = \max(0, a)$. Expanding the terms in (9) we get
$$C(y^*, D) = (O + U)\, y^* F(y^*) - O y^* - U E_0 + (O + U) \int_{y^*}^{\infty} y f(y)\, \mathrm{d}y + (b + h)\, D F(D) - h D - b E_0 + (b + h) \int_{D}^{\infty} y f(y)\, \mathrm{d}y. \tag{10}$$

A commonly used distribution for noise is the Gaussian or normal noise distribution. Under the assumption of Gaussian noise, it is well known that if the input to a linear system is a stationary Gaussian process, then the output is also a stationary Gaussian process (Leon-Garcia, 1994). Thus, if we assume that the noise at each stage is normally distributed with mean 0 and variance $V_i = \sigma_i^2$, then the output distribution of the process is also normally distributed with mean $\mu_0 = E_0 = \prod_{i=1}^{n} K_i E_u$ and variance $\sigma_0^2 = \sum_{i=1}^{n} \sigma_i^2 \prod_{p=i}^{n} K_p^2$. Under this distribution,
$$\int_{y^*}^{\infty} y f(y)\, \mathrm{d}y = \sigma_0\, \phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right) + \mu_0\left[1 - \Phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right)\right], \qquad \int_{D}^{\infty} y f(y)\, \mathrm{d}y = \sigma_0\, \phi\!\left(\frac{D - \mu_0}{\sigma_0}\right) + \mu_0\left[1 - \Phi\!\left(\frac{D - \mu_0}{\sigma_0}\right)\right], \tag{11}$$
where $\phi(k) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}k^2}$ is the standard normal density and $\Phi(k) = \int_{-\infty}^{k} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}z^2}\, \mathrm{d}z$ is the standard normal distribution function.

Let $W(y^*, D) = C(y^*, D)$ under the Gaussian noise distribution. Then, using (10) and (11) we get
$$W(y^*, D) = (O + U)\, y^* F(y^*) + O(\mu_0 - y^*) + (U + O)\left[\sigma_0\, \phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right) - \mu_0\, \Phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right)\right] + (b + h)\, D F(D) + h(\mu_0 - D) + (b + h)\left[\sigma_0\, \phi\!\left(\frac{D - \mu_0}{\sigma_0}\right) - \mu_0\, \Phi\!\left(\frac{D - \mu_0}{\sigma_0}\right)\right]. \tag{12}$$

Proposition 2.3. $W(y^*, D)$ is increasing in $\sigma_0$.

Proposition 2.3 illustrates the importance of variability reduction in reducing expected backorder and holding costs along with expected operational and environmental costs of this process. In developing $W(y^*, D)$, we have assumed that $y^* \ne D$, since, to achieve process efficiencies, products are often produced in continuous flow processes at constant levels and inventoried to meet cyclical changes in demand. However, it is plausible that $y^* = D$ in certain settings. In these situations, (12) simplifies to
$$W(y^*) = (O + U + b + h)\, y^* F(y^*) + (O + h)(\mu_0 - y^*) + (U + O + b + h)\left[\sigma_0\, \phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right) - \mu_0\, \Phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right)\right]. \tag{13}$$
Using the proof of Proposition 2.3, it is easy to verify that $W(y^*)$ is also increasing in $\sigma_0$.
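A small numerical sketch of (12), again not from the paper, is given below; it evaluates $W(y^*, D)$ with scipy's normal distribution and illustrates Proposition 2.3 by showing the cost rising with $\sigma_0$. All cost parameters and the operating point are assumed values chosen only for illustration.

```python
from scipy.stats import norm

# Sketch of Eq. (12): expected deviation cost W(y_star, D) when the output is
# Gaussian with mean mu0 and standard deviation sigma0.

def expected_cost(y_star, D, mu0, sigma0, O, U, b, h):
    z_y = (y_star - mu0) / sigma0
    z_D = (D - mu0) / sigma0
    # partial expectation pieces sigma0*phi(z) - mu0*Phi(z), cf. Eq. (11)
    tail_y = sigma0 * norm.pdf(z_y) - mu0 * norm.cdf(z_y)
    tail_D = sigma0 * norm.pdf(z_D) - mu0 * norm.cdf(z_D)
    return ((O + U) * y_star * norm.cdf(z_y) + O * (mu0 - y_star) + (U + O) * tail_y
            + (b + h) * D * norm.cdf(z_D) + h * (mu0 - D) + (b + h) * tail_D)

# Proposition 2.3 in action: the cost grows as the output variability sigma0 grows.
for sigma0 in (1.0, 2.0, 3.0):
    print(sigma0, expected_cost(y_star=20.0, D=22.0, mu0=20.0, sigma0=sigma0,
                                O=1.0, U=1.5, b=2.0, h=0.5))
```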

Next, we analyze the design and operational procedures commonly used to achieve variability reduction.

3. Design and operational implications for variance reduction

In this section, we consider the various design and operational initiatives often employed to reduce variability in a continuous process. Broadly speaking, such initiatives can be classified as design-based techniques, including choosing the sequencing of stages and the choice of gains at each stage, or operation-based techniques, which aim at variance reduction while operating the process. Our aim is to determine the optimal decisions to reduce output variability under each of these initiatives.

We first consider the problem of designing the best sequence of stages when the characteristic constants and operational variability at each stage are given. Sequencing of stages is important in the design of continuous flow processes, as the sequence of the characteristic constants itself can contribute to variance amplification of the output. Moreover, continuous processes produce only a few high volume products that go through all the stages in the same sequence, and thus optimal sequencing of stages could impact the entire production cycle. This is in contrast to discrete environments such as job shop processes, where the objective is to optimize a measure of time rather than output. In addition, there is a large variety of products produced and all products do not have to go through all the stages in the same sequence. Consequently, much of the research in this area has focused on determining the best production sequence of products (and not process stages) in order to optimize a time measure such as mean flow time, mean waiting time or mean lateness (French, 1982). Thus, to the best of our knowledge, the problem of analytically determining the optimal sequence of stages to minimize output variability has not been addressed in the literature. To address this problem, let $(i)$ represent the position of the stage in the process. Proposition 3.1 describes an important property of the optimal variance minimizing sequence.

Proposition 3.1. The output variance for the process is minimized for the sequence for which any two stages located at positions $(i)$ and $(i+j)$ satisfy the following:
$$(V_{(i)} - V_{(i+j)}) \prod_{p=i}^{i+j} K_{(p)}^2 < V_{(i)} K_{(i)}^2 - V_{(i+j)} K_{(i+j)}^2, \quad \forall i = 1 \text{ to } n,\; \forall j = i + 1 \text{ to } n.$$
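The following sketch (my own illustration, not the authors' code) implements this pairwise condition, interpreting the quantifiers as ranging over valid positions $i + j \le n$, and checks it for two orderings of the three-stage example from Section 4.

```python
import numpy as np

# Sketch of the pairwise condition in Proposition 3.1, applied to stages
# currently placed at positions i and i+j of a candidate sequence.
# K_seq and V_seq hold characteristic constants and noise variances in
# position order (0-based indices here).

def precedes_ok(K_seq, V_seq, i, j):
    """True if the stage at position i may precede the stage at position i+j."""
    K_seq = np.asarray(K_seq, dtype=float)
    prod_sq = np.prod(K_seq[i:i + j + 1] ** 2)          # prod_{p=i..i+j} K_(p)^2
    lhs = (V_seq[i] - V_seq[i + j]) * prod_sq
    rhs = V_seq[i] * K_seq[i] ** 2 - V_seq[i + j] * K_seq[i + j] ** 2
    return lhs < rhs

def satisfies_prop_31(K_seq, V_seq):
    """Check the condition for every pair of positions in the sequence."""
    n = len(K_seq)
    return all(precedes_ok(K_seq, V_seq, i, j)
               for i in range(n) for j in range(1, n - i))

# Section 4 example: sequence (3,2,1) satisfies the condition, (1,2,3) does not.
print(satisfies_prop_31([4, 3/4, 1/3], [4, 3, 2]))   # True
print(satisfies_prop_31([1/3, 3/4, 4], [2, 3, 4]))   # False
```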

Proposition 3.1 suggests that a stage at position $(i)$ should precede a stage at position $(i+j)$ if the scaled difference of variance for stages $(i)$ and $(i+j)$ is smaller than the difference in the scaled variance between these stages. Here, the scaling factor for the difference in variance is the product of the squared characteristic constants for stages $(i)$, $(i+j)$ and all stages between them, while the scaling factor for the variance at any given stage is its squared characteristic constant. If this condition is not satisfied, then the stage at $(i+j)$ should precede the stage at $(i)$. Thus, given a set of process stages and technological constraints that preclude certain sequences, we can use this property to develop the best sequence for minimizing output variance.

In practice, due to tight technological constraints, it often may not be possible to implement the optimal variance minimizing sequence. Hence, we consider the case in which we are required to determine the optimal characteristic constant at each stage given the sequence of stages. This problem can be addressed by solving the following non-linear optimization problem (P):
$$(\mathrm{P})\quad Z = \min_{(K_1, K_2, \ldots, K_n)} \sum_{i=1}^{n} V_i \prod_{p=i}^{n} K_p^2$$
subject to:
$$\prod_{i=1}^{n} K_i = K, \tag{14}$$
$$a_i \le K_i \le b_i, \quad \forall i = 1 \text{ to } n. \tag{15}$$

The objective of problem (P) is to minimize the output variance by choosing the appropriate characteristic constant at each stage. Constraint (14) ensures that the characteristic constants across all the stages are chosen to meet the required characteristic constant of the process. Constraint (15) ensures that the characteristic constant of each stage is within its design parameters $a_i, b_i > 0$. Problem (P) falls under a specialized class of non-linear optimization problems known as geometric programs (Duffin et al., 1967). To solve this geometric program, we let $K_i = e^{Y_i}$ and take the logarithm of the objective and constraints to get the Transformed Problem (TP):
$$(\mathrm{TP})\quad Z^T = \min_{(Y_1, Y_2, \ldots, Y_n)} \mathrm{Log}\left\{\sum_{i=1}^{n} V_i\, e^{2\sum_{p=i}^{n} Y_p}\right\}$$
subject to:
$$\sum_{i=1}^{n} Y_i = \mathrm{Log}(K), \tag{16}$$
$$\mathrm{Log}(a_i) \le Y_i \le \mathrm{Log}(b_i), \quad \forall i = 1 \text{ to } n. \tag{17}$$
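As a sketch of how (TP) might be solved outside Matlab, the version below uses scipy's SLSQP solver; the stage variances, bounds and target process constant are illustrative assumed values, not a reproduction of the paper's Case 2 setup.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of solving (TP) numerically.  V holds the stage noise variances in
# sequence order, K_target the required process constant, and (lo, hi) the
# bounds a_i, b_i on each characteristic constant.

def solve_tp(V, K_target, lo, hi):
    n = len(V)
    V = np.asarray(V, dtype=float)

    def objective(Y):
        # Log sum_i V_i * exp(2 * sum_{p=i..n} Y_p), the objective of (TP)
        tail_sums = np.cumsum(Y[::-1])[::-1]          # sum_{p=i..n} Y_p for each i
        return np.log(np.sum(V * np.exp(2.0 * tail_sums)))

    constraints = [{"type": "eq", "fun": lambda Y: np.sum(Y) - np.log(K_target)}]
    bounds = [(np.log(lo[i]), np.log(hi[i])) for i in range(n)]
    Y0 = np.full(n, np.log(K_target) / n)             # feasible starting point
    res = minimize(objective, Y0, method="SLSQP", bounds=bounds,
                   constraints=constraints)
    return np.exp(res.x)                              # optimal K_i* = exp(Y_i*)

# Illustrative numbers (assumed): three stages, bounds 0.01 <= K_i <= 4, K = 1.
print(solve_tp(V=[4.0, 3.0, 2.0], K_target=1.0, lo=[0.01] * 3, hi=[4.0] * 3))
# roughly [4, 4, 0.0625] for these inputs
```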

Proposition 3.2. (TP) is a convex optimization problem.

In light of Proposition 3.2, we can solve (TP) using standard techniques of convex optimization embedded in commercial software such as Matlab (Mathworks, Inc., 1998). The solution to this problem $(Y_1^*, Y_2^*, \ldots, Y_n^*)$ can then be used to calculate the optimal characteristic constant at stage $i$ as $K_i^* = e^{Y_i^*}$. In the next section, we demonstrate the calculation of the optimal characteristic constants for the illustrative example.

Observe from (6) that a required value of $K_i$ can be achieved by the appropriate choice of constants $b_i$ and $a_i$. Constant $b_i$ is determined by the design parameters of the stage such as its size, processing rate and time to reach steady state. Constant $a_i$ is influenced by the control mechanism employed at the stage and aspects such as the location and size of the buffer associated with the stage (Ogata, 1996). The choice of the constant used to change $K_i$, if necessary, depends upon the economics and nature of the stage. For instance, if capacity expansion is expensive, then $a_i$ is used to effect the change in the characteristic constant. On the other hand, if changes in $a_i$ could lead to further variability in operational parameters such as product quality and yield, $b_i$ is used to change $K_i$.

Again, it may not be practically possible to change the characteristic constants of the process stages even when the sequence of these stages in
the process is specified. To address this case, we consider the problem of determining the best stage for variance reduction, given the sequence and characteristic constant of each stage in the process. This stage is identified by the following proposition.

Proposition 3.3. The following conditions establish the best stage for variance reduction:
1. When $K_i > 1$ $\forall i$, the highest impact of variance reduction is achieved at the first stage. The best sequence of variance reduction is from the first to last stage.
2. When $K_i \le 1$ $\forall i$, the highest impact of variance reduction is achieved at the last stage. The best sequence of variance reduction is from the last to first stage.
3. For a general process in which $K_i \ge 0$, the highest impact of variance reduction occurs at the stage $j$ for which $D_j = \max_{i=1 \ldots n} \prod_{p=i}^{n} K_p^2$.

Proposition 3.3 suggests that when all stages in the process have a characteristic constant greater than 1, the greatest impact of variance reduction is achieved at the first stage in the process. This is intuitive, since from Proposition 2.2 it can be observed that under these conditions, variability in the first stage is enlarged by all other stages of the process. Conversely, when all of the characteristic constants are less than 1, since variability is naturally dampened in the process, the greatest reduction can be achieved in the final stage, where the variability is dampened by only that stage. Finally, when we have a mixed process with different constants, the best stage for variance reduction can occur at any stage in the process. This stage is identified by the third part of this proposition. Identification of the best stage for variability reduction is critical since this could help focus and increase the economic benefits of methods such as statistical process control (Wheeler and Chambers, 1992), design of experiments (Montgomery, 2001) and robust process control (Rajaram et al., 1999) that are used in practice to reduce variability in these processes.
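A minimal sketch of part 3 of Proposition 3.3 (not from the paper): compute $D_j = \prod_{p=j}^{n} K_p^2$ for every stage and pick the largest. The three-stage constants are those of the Section 4 example.

```python
import numpy as np

# Sketch of Proposition 3.3, part 3: the stage whose downstream squared-gain
# product D_j = prod_{p=j..n} K_p^2 is largest gives the biggest payoff per
# unit of variance reduction.

def best_stage_for_variance_reduction(K):
    """Return the (1-based) stage index maximizing prod_{p=j..n} K_p^2."""
    K = np.asarray(K, dtype=float)
    D = [np.prod(K[j:] ** 2) for j in range(len(K))]
    return int(np.argmax(D)) + 1, D

# Mixed process from the Section 4 example: K = (1/3, 3/4, 4).
stage, D = best_stage_for_variance_reduction([1/3, 3/4, 4])
print(stage, D)   # stage 3; D approximately [1.0, 9.0, 16.0]
```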

It is important to note that Propositions 3.1-3.3 are derived for steady state operating conditions for these processes represented by the linear first order dynamics. However, it could be instructive to develop the results corresponding to these propositions during transient or non-linear process states. This could provide insights into eliminating or reducing the propagation of variability during transient states in the process such as start ups, product switchovers or regeneration of individual stages. However, developing closed form expressions for these cases may not be analytically tractable and, thus, one may have to resort to numerical methods such as simulation in these situations. We will further examine the transient case in future research.

4. Illustrative example

In this section, we present an example to illustrate Propositions 2.1 and 2.2 on output mean and variability respectively, and to illustrate Propositions 3.1-3.3 on the design and operational implications of variability reduction. We also use simulation to demonstrate the validity of using these propositions on a serial continuous process with first order dynamics. In this example, we consider a three stage system for which $E_u = 20$, $K_1 = 1/3$, $K_2 = 3/4$, $K_3 = 4$ and assume Gaussian noise at each stage with mean 0 and $V_1 = 2$, $V_2 = 3$, $V_3 = 4$. We consider the following cases.

Case 1. Optimal sequencing of stages. Table 1 lists the value of the mean and variance at the output of this three-stage process when we use Propositions 2.1 and 2.2 and the simulation for various sequences of the process steps.

Table 1
Theoretical and estimated mean and variance for Case 1

Sequence | Theoretical mean | Estimated mean | Theoretical variance | Estimated variance
(1,2,3)  | 20 | 19.9 | 93   | 92.5
(1,3,2)  | 20 | 20   | 39.7 | 39.6
(2,1,3)  | 20 | 19.8 | 70.6 | 70.5
(2,3,1)  | 20 | 20   | 10.3 | 10.2
(3,1,2)  | 20 | 20   | 5.8  | 5.7
(3,2,1)  | 20 | 20   | 4.4  | 4.4
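The theoretical columns of Table 1 follow directly from Propositions 2.1 and 2.2; the sketch below (not the authors' simulation code) enumerates all six stage orderings and prints the resulting mean and variance.

```python
from itertools import permutations
import numpy as np

# Sketch reproducing the "theoretical" columns of Table 1: for each ordering
# of the three stages, compute the output mean and variance from
# Propositions 2.1 and 2.2.  Stage data are from the example (E_u = 20).

stage_K = {1: 1/3, 2: 3/4, 3: 4.0}
stage_V = {1: 2.0, 2: 3.0, 3: 4.0}
E_u = 20.0

for seq in permutations([1, 2, 3]):
    K = np.array([stage_K[s] for s in seq])
    V = np.array([stage_V[s] for s in seq])
    mean = np.prod(K) * E_u                                   # Proposition 2.1
    var = sum(V[i] * np.prod(K[i:] ** 2) for i in range(3))   # Proposition 2.2
    print(seq, round(mean, 1), round(var, 1))
# e.g. (1, 2, 3) -> variance 93.0 and (3, 2, 1) -> variance 4.4, as in Table 1.
```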

This table suggests that the theoretical values of the mean and variance derived using Propositions 2.1 and 2.2 respectively and the values estimated from the simulation are very close, and that the lowest output variance sequence corresponds to $K_{(1)} = K_3 = 4$, $K_{(2)} = K_2 = 3/4$, $K_{(3)} = K_1 = 1/3$ and $V_{(1)} = V_3 = 4$, $V_{(2)} = V_2 = 3$, $V_{(3)} = V_1 = 2$, so that Proposition 3.1 is satisfied for all stages. Fig. 2 represents the dynamic output corresponding to this sequence and the sequence $K_{(1)} = K_1 = 1/3$, $K_{(2)} = K_2 = 3/4$, $K_{(3)} = K_3 = 4$ and $V_{(1)} = V_1 = 2$, $V_{(2)} = V_2 = 3$, $V_{(3)} = V_3 = 4$, which has a dynamic output with the greatest variance since Proposition 3.1 is violated at all the stages. This shows that a dramatic reduction of output variability can be achieved by designing the process with the optimal sequence. For instance, a 95% reduction in output variability can be achieved when the sequences in stages are changed from the worst to the best configuration in our example.

Fig. 2. Process output for Case 2.

Case 2. Optimal design of characteristic constants. When $V_1 = 2$, $V_2 = 3$, $V_3 = 4$, $a_i > 0$, $b_i = 4$ $\forall i$ and $K = 1$, we find the optimal system characteristic constants $\{K_1^*, K_2^*, K_3^*\}$ by solving problem (TP) using Matlab and then setting $K_i^* = e^{Y_i^*}$. This procedure results in $K_1^* = 4$, $K_2^* = 4$, $K_3^* = 0.0625$. The output variance corresponding to this optimal solution is $V_0 = 4.19$. As discussed in Case 1 and shown in Table 1, the output variance under the characteristic constants in the example (i.e., $K_1 = 1/3$, $K_2 = 3/4$, $K_3 = 4$) was $V_0 = 4.41$. Thus, this represents a 4.86% reduction in output variability from the base case, when we use optimally designed characteristic constants. The dynamic output corresponding to the base case and the optimally designed characteristic constants are shown in Fig. 3.

Fig. 3. Process output for Case 2, a = 4.

To understand the importance of the sequence of the stages in the process, we also consider the sequence for which the greatest variance is generated with the optimal characteristic constants. By Proposition 3.1, this highest variance sequence occurs at $K_{(1)} = K_1 = 0.0625$, $K_{(2)} = K_2 = 4$ and $K_{(3)} = K_3 = 4$. The dynamic output corresponding to this sequence, also shown in Fig. 3, shows that the output variance of the process is greatly amplified under this sequence. Thus, in addition to determining optimal characteristic constants, it is crucial to sequence these stages in the optimal manner.

Finally, we also computed the optimal characteristic constants for various values of $b_i$, the upper bound on the characteristic constant at each stage. To simplify the exposition, we assumed that $b_i = b$ and performed this analysis for several values of $b$. The percentage variance reduction from the base case for the various values of $b$ is summarized in Table 2.

Table 2
Percentage variance reduction for Case 2

Upper bound on characteristic constant at each stage: b | $K_i^*$ | Variance reduction with optimal sequence versus design sequence (%)
4    | 4, 4, 0.0625           | 4.86
5    | 5, 5, 0.04             | 6.5
6    | 6, 6, 0.027778         | 7.37
7    | 7, 7, 0.020408         | 7.88
8    | 8, 8, 0.015625         | 8.22
9    | 9, 9, 0.012346         | 8.49
10   | 10, 10, 0.01           | 8.61
None | 194.5, 0.079, 0.065081 | 9.1

This analysis shows that as the constraints for the design of the optimal characteristic constants at each stage become more relaxed, there is greater potential to reduce output variability. However, this potential is limited, as the rate of variability reduction rapidly flattens out as we increase the upper bound on the characteristic constant at each stage.

Case 3. Variance reduction at individual stages. In this case, we let the initial values for $V_i$ be $V_1 = 2$, $V_2 = 3$, $V_3 = 4$. We then systematically reduce $V_i$ by one unit, starting first at stage 1, then at stage 2 and finally at stage 3, and we find the percentage of variance reduction achieved compared to the results of Case 1. Table 3 lists the percentage of variance reduction at the output of this three-stage process when we use Proposition 2.2 and the simulation for various sequences of the process steps.

Table 3
Percentage variance reduction at output per unit variance reduction at a given stage

Sequence | First stage (Theoretical / Estimated) | Second stage (Theoretical / Estimated) | Third stage (Theoretical / Estimated)
(1,2,3)  | 1.0889 / 1.2180   | 9.4802 / 9.3951   | 17.3453 / 17.3470
(1,3,2)  | 3.2940 / 3.3750   | 28.6776 / 28.6349 | 1.8446 / 1.8381
(2,1,3)  | 1.4111 / 1.5705   | 2.4265 / 2.3669   | 22.4747 / 22.4501
(2,3,1)  | 22.6192 / 22.7664 | 1.3668 / 1.3199   | 12.6652 / 12.6733
(3,1,2)  | 13.1550 / 13.2634 | 22.6221 / 22.5331 | 1.4556 / 1.4779
(3,2,1)  | 38.1113 / 38.0505 | 2.3032 / 2.2620   | 4.2132 / 4.3184

This table suggests that the greatest percentage of variance reduction corresponds to the sequence $K_{(1)} = K_3$, $K_{(2)} = K_2$, $K_{(3)} = K_1$, and $V_{(1)} = V_1 - 1$, a result consistent with Proposition 3.3. Fig. 4 represents the dynamic output corresponding to this sequence and the sequence $K_{(1)} = K_1$, $K_{(2)} = K_2$, $K_{(3)} = K_3$, $V_{(1)} = V_1 - 1$, which has a dynamic output with the lowest percentage of variance reduction. This shows that when the stage for decreasing variance is chosen optimally, this could lead to a reduction of around 25% in output variability from the worst case decision, even when stages are sequenced optimally.

Fig. 4. Process output for Case 3.

5. Conclusions

In this paper, we consider an n-stage, serial continuous flow production process in which variability is generated at each stage. We analyze how variability is propagated in the process and calculate its impact on the process output. To perform this analysis, we assume that the process
follows linear first order dynamics, which is represented using a continuous time model. This model is then used to calculate the mean and variability of the output at each stage and of the entire process. We also represent the cost impact of variability on this process and show that if the variability generated at each stage follows a Gaussian distribution, then expected over-production, under-production, backorder and holding costs are increasing with the level of output variability. The results of the model are used to determine the optimal decisions for variability reduction while designing and operating these processes. For instance, the lowest output variability can be achieved by sequencing stages using Proposition 3.1. Next, for a given sequence of stages and required process characteristic constant, we formulate a geometric program to determine the optimal values of the characteristic constant at each stage in order to minimize output variability, subject to upper and lower bounds on the characteristic constants at each stage. Finally, given the sequence of stages and the characteristic constant at each stage, we identify the stage at which a decrease in variance would lead to the greatest reduction in output variance.

We presented an example to illustrate the analytical results on the mean and variance of the output distribution and to calculate the optimal decisions for variability reduction when designing and operating these processes. We also conducted a simulation to validate these results for a dynamic first order system. The results of the simulation show that the mean and variance of the output distribution are almost identical to the analytical predictions, and significant variance reduction in output can be achieved if optimal decisions for design and operation are employed. For instance, in our example, a 95% reduction in output variability can be achieved when the sequences in stages are changed from the worst to the best configuration. Similarly, in this example, a 4.86% reduction in output variability can be achieved with the optimally designed characteristic constants. We also found that as the constraints for the design of the optimal characteristic constants at each stage become more relaxed, there is greater
potential to reduce output variability. However, this potential is limited, as the rate of variability reduction rapidly flattens out as we increase the upper bound on the characteristic constant at each stage. Finally, when the stage for decreasing variance is chosen optimally, this could lead to a reduction of around 25% in output variability from the worst case decision.

The work described in this paper provides several future research directions. First, it could be interesting to extend this model for variability propagation to n-stage serial processes with recycles, and to mixed processes with both serial and parallel stages. Second, this model could also be extended to include the impact of drifts in the process due to attrition of equipment and the control system. This would involve developing a model with time varying dynamics. Third, it could be useful to develop a relationship between the costs of changing the characteristic constant in terms of reassignment of buffers, reconfiguring the control system or changes in the capacity of the process. This relationship could then be used to develop an economic model of the process to determine the optimal level of variability reduction and the most cost-effective alteration in the process to achieve this level. Fourth, it could be instructive to derive results analogous to Propositions 3.1-3.3 during transient states of the process. This, in turn, could provide better insights into preventing or reducing the propagation of variability during these states. Finally, since we derive the output variance of a process at steady state and since the continuous time production model is a generalization of the discrete time production model, it is plausible that similar design and operational implications for variance reduction could also hold in a high-volume discrete manufacturing environment. Analytically verifying these implications in such environments could be a challenging, but fruitful, area of future research.

Acknowledgements

The authors thank Professors Arthur Geoffrion, Uday Karmarkar and two anonymous referees for several helpful comments. The research of the first author was partially supported by the Center for International Business Education and Research (CIBER), UCLA.

Appendix A

Proof of Proposition 2.1. For each stage $i$ the mean of the output distribution $y_i$ is given by (Leon-Garcia, 1994)
$$E[y_i(t)] = E\left[\int_{-\infty}^{\infty} h_i(s)\, x_i(t - s)\, \mathrm{d}s\right] = \int_{-\infty}^{\infty} h_i(s)\, E[x_i(t - s)]\, \mathrm{d}s.$$
From the above equation, observe that since the input $x_i$ is a stationary process, its mean $E[x_i(t - s)]$ is constant, and we can take the expectation outside the integral. Thus we have
$$E[y_i(t)] = E[x_i] \int_{-\infty}^{\infty} h_i(s)\, \mathrm{d}s. \tag{A.1}$$
Observe that the output of stage $(i - 1)$ is the input to stage $i$; therefore, we can write
$$E[y_i] = E[y_{(i-1)}] \int_{-\infty}^{\infty} h_i(s)\, \mathrm{d}s. \tag{A.2}$$
Next, consider the Fourier transform of a function $g(t)$, which is defined as
$$G(f) = \mathcal{F}[g(t)] = \int_{-\infty}^{\infty} g(t)\, e^{-j 2 \pi f t}\, \mathrm{d}t,$$
where $f$ is the frequency and $j^2 = -1$. The corresponding inverse Fourier transform (used in the proof of Proposition 2.2) is defined as
$$g(t) = \mathcal{F}^{-1}[G(f)] = \int_{-\infty}^{\infty} G(f)\, e^{j 2 \pi f t}\, \mathrm{d}f.$$
Then the Fourier transform of the transfer function $h(t)$ is given by
$$H(f) = \mathcal{F}[h(t)] = \int_{-\infty}^{\infty} h(s)\, e^{-j 2 \pi f s}\, \mathrm{d}s. \tag{A.3}$$
This implies that
$$H(0) = \int_{-\infty}^{\infty} h(s)\, \mathrm{d}s. \tag{A.4}$$
Using (A.4) we can rewrite (A.2) as
$$E[y_i] = E[y_{(i-1)}] \int_{-\infty}^{\infty} h_i(s)\, \mathrm{d}s = E[y_{(i-1)}]\, H_i(0). \tag{A.5}$$
Observe that the mean of the output of each stage is a constant since $H_i(0)$ is constant. Since the mean of the output of each stage $i$ is the mean of the input scaled by $H_i(0)$, we have:
$$E[y_{(1)}] = H_{(1)}(0)\, E_u, \quad E[y_{(2)}] = H_{(2)}(0)\, E[y_{(1)}] = H_{(2)}(0)\, H_{(1)}(0)\, E_u, \quad \ldots, \quad E_0 = E[y_n] = \prod_{i=1}^{n} H_i(0)\, E_u. \tag{A.6}$$
Using (6), we next substitute $H_i(0) = K_i$ in (A.6) to get the desired result. □

Proof of Proposition 2.2. Let $y_i$ represent the output distribution of stage $i$, with $y_n = y$ representing the output of the process. By assumption, the input is a stationary stochastic process, so that it has constant mean and variance. A less restrictive class of input stochastic processes that have similar characteristics to stationary processes is the Wide Sense Stationary (WSS) process. By definition, a process $x(t)$ is called WSS if the mean is constant, so that $m_x(t) = m$ for all $t$, and the covariance is only a function of $t_1 - t_2$, implying that $R_x(t_1, t_2) = R_x(\tau)$, where $\tau = t_1 - t_2$. It is easy to observe that if a process is stationary, as by our assumption, it is also WSS, since all stationary processes have a time invariant mean and variance, satisfying the requirements for it to be WSS. In addition, if the input to a linear system is a WSS process, then the output will also be WSS. To see this, first note that in Proposition 2.1 we have already proven that the mean of the output of each stage is a constant. It remains to be shown that the covariance is only a function of $t_1 - t_2$. To prove this result, let $t_1 = t$. Then by definition $t_2 = t_1 + \tau$. The covariance of the output is

$$R_{y_i}(\tau) = E[y_i(t)\, y_i(t + \tau)] = E\left[\int_{-\infty}^{t} \int_{-\infty}^{t + \tau} h_i(s)\, h_i(r)\, x_i(t - s)\, x_i(t + \tau - r)\, \mathrm{d}s\, \mathrm{d}r\right] = \int_{-\infty}^{t} \int_{-\infty}^{t + \tau} h_i(s)\, h_i(r)\, R_{x_i}(\tau + s - r)\, \mathrm{d}s\, \mathrm{d}r. \tag{A.7}$$
As $t \to \infty$, the above expression becomes
$$E[y_i(t)\, y_i(t + \tau)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h_i(s)\, h_i(r)\, R_{x_i}(\tau + s - r)\, \mathrm{d}s\, \mathrm{d}r. \tag{A.8}$$
This result follows from noting that (A.8) depends only on $\tau$, where $\tau = t_1 - t_2$. Next, consider the Fourier transform of $R_{y_i}(\tau)$, also known as the spectral density. In light of the previous result, the spectral density of the output can be written as
$$S_{y_i}(f) = \int_{-\infty}^{\infty} R_{y_i}(\tau)\, e^{-j 2 \pi f \tau}\, \mathrm{d}\tau = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h_i(s)\, h_i(r)\, R_{x_i}(\tau + s - r)\, e^{-j 2 \pi f \tau}\, \mathrm{d}s\, \mathrm{d}r\, \mathrm{d}\tau. \tag{A.9}$$
Changing the variables and letting $u = \tau + s - r$, we have
$$S_{y_i}(f) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h_i(s)\, h_i(r)\, R_{x_i}(u)\, e^{-j 2 \pi f (u - s + r)}\, \mathrm{d}s\, \mathrm{d}r\, \mathrm{d}u = \int_{-\infty}^{\infty} h_i(s)\, e^{j 2 \pi f s}\, \mathrm{d}s \int_{-\infty}^{\infty} h_i(r)\, e^{-j 2 \pi f r}\, \mathrm{d}r \int_{-\infty}^{\infty} R_{x_i}(u)\, e^{-j 2 \pi f u}\, \mathrm{d}u = H_i(-f)\, H_i(f)\, S_{x_i}(f) = |H_i(f)|^2 S_{x_i}(f). \tag{A.10}$$
Observe that at stage $i = 1$, the spectral density of the output is given by $|H_1(f)|^2 S_{x_1}(f)$, where $S_{x_1}(f) = \mathcal{F}[V_1]$. The spectral density of the input to stage $i = 2$ is the spectral density of the output of stage $i = 1$, plus the spectral density of $V_2$. Thus the spectral density of the output at stage $i = 2$ is $|H_2(f)|^2 \{|H_1(f)|^2 S_{x_1}(f) + S_{x_2}(f)\}$. Using the same logic for every stage, the spectral density of the output is given by
$$S_{y_n}(f) = \sum_{i=1}^{n} \left\{\prod_{j=i}^{n} |H_j(f)|^2\right\} S_{x_i}(f), \tag{A.11}$$
where $S_{x_i}(f) = \mathcal{F}[V_i]$. To get the covariance of the output, we take the inverse Fourier transform of $S_{y_n}(f)$, so that
$$R_{y_n}(t) = \mathcal{F}^{-1}[S_{y_n}(f)], \tag{A.12}$$
while, by definition,
$$V_{y_n} = R_{y_n}(0). \tag{A.13}$$
Using (6), we next substitute $H_i(f) = K_i$ in (A.11) to get
$$S_{y_n}(f) = \sum_{i=1}^{n} \left\{\prod_{j=i}^{n} K_j^2\right\} S_{x_i}(f). \tag{A.14}$$
Substituting $S_{x_i}(f) = \mathcal{F}[V_i]$ in (A.14) we get
$$S_{y_n}(f) = \sum_{i=1}^{n} \left\{\prod_{j=i}^{n} K_j^2\right\} \mathcal{F}[V_i]. \tag{A.15}$$
From (A.12) and (A.15) we get
$$R_{y_n}(t) = \mathcal{F}^{-1}\left[\sum_{i=1}^{n} \left\{\prod_{j=i}^{n} K_j^2\right\} \mathcal{F}[V_i]\right]. \tag{A.16}$$
Note that if $c$ is a constant and $f_i(t)$, $i = 1, \ldots, n$, are continuous functions, then for the Fourier and inverse Fourier transforms we have: (1) $\mathcal{F}^{-1}[c f(t)] = c\, \mathcal{F}^{-1}[f(t)]$, (2) $\mathcal{F}^{-1}[f_1(t) + \cdots + f_n(t)] = \mathcal{F}^{-1}[f_1(t)] + \cdots + \mathcal{F}^{-1}[f_n(t)]$, and (3) $\mathcal{F}^{-1}[\mathcal{F}(f(t))] = f(t)$ (Leon-Garcia, 1994). Using these facts, from (A.16) we get
$$R_{y_n}(t) = \sum_{i=1}^{n} \left\{\prod_{j=i}^{n} K_j^2\right\} \mathcal{F}^{-1}[\mathcal{F}[V_i]] = \sum_{i=1}^{n} V_i \prod_{j=i}^{n} K_j^2. \tag{A.17}$$
Observe that $R_{y_n}(t)$ is a constant independent of time and thus $R_{y_n}(0) = R_{y_n}(t)$. From (A.13) and (A.17) we get the desired result. □

Proof of Proposition 2.3. To prove this result, we need to show that $\partial W(y^*, D)/\partial \sigma_0 \ge 0$. Observe that

$$\frac{\partial W(y^*, D)}{\partial \sigma_0} = (U + O)\,\frac{\partial\left[\sigma_0\, \phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right) - \mu_0\, \Phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right)\right]}{\partial \sigma_0} + (b + h)\,\frac{\partial\left[\sigma_0\, \phi\!\left(\frac{D - \mu_0}{\sigma_0}\right) - \mu_0\, \Phi\!\left(\frac{D - \mu_0}{\sigma_0}\right)\right]}{\partial \sigma_0}. \tag{A.18}$$
First consider the term
$$\frac{\partial\left[\sigma_0\, \phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right) - \mu_0\, \Phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right)\right]}{\partial \sigma_0} = \phi\!\left(\frac{y^* - \mu_0}{\sigma_0}\right)\left[1 + \frac{(y^* - \mu_0)^2}{\sigma_0^2} + \frac{\mu_0 (y^* - \mu_0)}{\sigma_0^2}\right] \ge 0. \tag{A.19}$$
Using the same approach, we can show that
$$\frac{\partial\left[\sigma_0\, \phi\!\left(\frac{D - \mu_0}{\sigma_0}\right) - \mu_0\, \Phi\!\left(\frac{D - \mu_0}{\sigma_0}\right)\right]}{\partial \sigma_0} \ge 0. \tag{A.20}$$
Since $O, U, b, h \ge 0$, from Eqs. (A.18)-(A.20) we get
$$\frac{\partial W(y^*, D)}{\partial \sigma_0} \ge 0.$$
This shows that $W(y^*, D)$ is increasing in $\sigma_0$. □

Proof of Proposition 3.1. The proof follows by noting that total output variance can be reduced by interchanging the stage at position $(i)$ in the sequence with the stage at position $(i+j)$ if the condition does not hold. Hence, this condition must hold in an optimal variance minimizing sequence. □

Proof of Proposition 3.2. To show this result, we first rewrite the function
$$\mathrm{Log}\left\{\sum_{i=1}^{n} V_i\, e^{2\sum_{p=i}^{n} Y_p}\right\} = \mathrm{Log}\left\{\sum_{i=1}^{n} e^{2\sum_{p=i}^{n} Y_p + \mathrm{Log}\, V_i}\right\}$$
as
$$f(x) = \mathrm{Log} \sum_{j=1}^{n} e^{x_j}, \qquad \text{where } x_j = 2 \sum_{i=j}^{n} Y_i + \mathrm{Log}\, V_j.$$
Observe that the function $f(x)$ is smooth. We next show that $\nabla^2 f(x) \ge 0$. Let $z = (e^{x_1}, \ldots, e^{x_n})$. The Hessian at $x$ is
$$\nabla^2 f(x) = \frac{1}{(I^T z)^2}\left((I^T z)\, \mathrm{diag}(z) - z z^T\right).$$
Then, for all vectors $u$ we have
$$u^T \nabla^2 f(x)\, u = \frac{1}{(I^T z)^2}\left(\left(\sum_i z_i u_i^2\right)\left(\sum_i z_i\right) - \left(\sum_i u_i z_i\right)^2\right).$$
From the Cauchy-Schwarz inequality, we have that $(a^T a)(b^T b) \ge (a^T b)^2$ for any vectors $a$, $b$. Applying the Cauchy-Schwarz inequality to the vectors with components $a_i = u_i \sqrt{z_i}$ and $b_i = \sqrt{z_i}$, we have that $u^T \nabla^2 f(x)\, u \ge 0$. Thus, $\nabla^2 f(x) \ge 0$, and $f(x)$ is convex. The result follows by noting that the $x_j$'s are a linear function of the $Y_i$'s and all the constraints are linear. □

Proof of Proposition 3.3. From Proposition 2.2, we get
$$\frac{\partial V_0}{\partial V_i} = (K_i \cdots K_n)^2.$$
When $K_i \ge 1$ $\forall i$, this derivative is maximized when we consider a change at the first stage, or $i = 1$. This implies that the greatest impact of variance reduction occurs at the first stage. When $K_i < 1$ $\forall i$, this derivative is maximized when we consider a unit change at the last stage, or $i = n$. This implies that the greatest impact of variance reduction occurs at the last stage. The final result follows by setting $D_j = \max_{i = 1 \ldots n} (\partial V_0 / \partial V_i)$. □

References

Carr, G., 1999. SPC for continuous processes. Chemical Engineering 106 (6), 98–101.

Duffin, R.J., Peterson, E.L., Zener, C., 1967. Geometric Programming: Theory and Applications. John Wiley & Sons, New York.
Forrester, J., 1961. Industrial Dynamics. MIT Press, Cambridge, MA, and John Wiley & Sons, New York.
French, S., 1982. Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop. John Wiley & Sons, New York.
Hopp, W.J., Spearman, M.L., 1996. Factory Physics: Foundations of Manufacturing Management. Irwin, Boston.
Karmarkar, U.S., Rajaram, K., 2001. Grade selection and blending to optimize cost and quality. Operations Research 49 (2), 271-280.
Lee, H.L., Padmanabhan, V., Whang, S., 1997. Information distortion in a supply chain: The bullwhip effect. Management Science 43 (4), 546-558.
Leon-Garcia, A., 1994. Probability and Random Processes for Electrical Engineering, second ed. Addison-Wesley Publishing Company, New York.
Mathworks, Inc., 1998. The Matlab Curriculum Series: User's Guide. Prentice Hall, Englewood Cliffs, NJ.
Montgomery, D.C., 2001. Design and Analysis of Experiments, fifth ed. John Wiley & Sons, New York.
Ogata, K., 1996. Modern Control Engineering. Prentice-Hall, New York.
Perry, R.H., Green, D.W., Maloney, J.O., 1984. Perry's Chemical Engineers' Handbook, fourth ed. McGraw-Hill, New York.
Rajaram, K., Jaikumar, R., Behlau, F., Van Esch, F., Heynen, C., Kaiser, R., Kuttner, A., Van De Wege, I., 1999. Robust process control at Cerestar's refineries. Special Issue: Franz Edelman Award Papers, Interfaces 29 (1), 30-48.
Rajaram, K., Jaikumar, R., 2000. Incorporating operator-process interactions in process control: A framework and an application to glucose refining. International Journal of Production Economics 63, 19-31.
Rajaram, K., Jaikumar, R., 2002a. An interactive decision support system for on-line process control. European Journal of Operational Research 138, 554-568.
Rajaram, K., Karmarkar, U.S., 2002b. Product cycling with uncertain yields: Analysis and application to the process industry. Operations Research 50 (4), 680-691.
Smith, J.C., McCabe, W.L., Harriot, P., 1993. Unit Operations of Chemical Engineering, fifth ed. McGraw-Hill, New York.
Wheeler, D.J., Chambers, D.S., 1992. Understanding Statistical Process Control, second ed. SPC Press, Knoxville, TN.