Batch sizing under learning and forgetting: Steady state characteristics for the constant demand case


Operations Research Letters 36 (2008) 589–593


Sunantha Teyarachakul a,1, Suresh Chand b,*, James Ward b

a College of Business and Industry, Minnesota State University Moorhead, Moorhead, MN 56563, United States
b Krannert School of Management, Purdue University, West Lafayette, IN 47907, United States

Article history: Received 1 December 2005; Accepted 27 January 2008; Available online 5 February 2008

Abstract

We analyze steady-state characteristics of batch production time for a constant-demand lot sizing problem with learning and forgetting in production time. We report a new type of convergence, the alternating convergence, in which the batch production time alternates between two different values.

© 2008 Published by Elsevier B.V.

Keywords: Learning and forgetting; Batch sizing

1. Introduction

This paper considers an environment where the firm produces items in batches and there is learning while producing units within a batch and forgetting in the break between two successive batches. We examine the long-run (i.e. steady state) characteristics of the batch production time when constant-lot-size batches of a single product are produced and the demand rate is constant.

A few papers have previously considered convergence of production time under similar assumptions; see e.g., [1–4]. Although these papers differed in the characterization of forgetting functions they used, they all reported that batch production time converged to a unique value. Sule [4], Axsater and Elmaghraby [1] and Elmaghraby [2] used forgetting models in which the amount of forgetting is unbounded. Globerson and Levin [3] used a forgetting function that does not allow the amount of forgetting to exceed the amount of learning. We follow Globerson and Levin's approach and expand their computational results by providing a mathematical proof for the existence of convergence. We also report a new type of convergence, called the alternating convergence, where, as the number of lots produced approaches infinity, the batch production time alternates between two different values. We support our finding by providing sufficient conditions for the existence of this alternating convergence.



* Corresponding author. E-mail addresses: [email protected] (S. Teyarachakul), [email protected] (S. Chand).
1 Present affiliation: Essec Business School, 95021 Cergy Pontoise Cedex, France.
doi:10.1016/j.orl.2008.01.007

2. Learning and forgetting functions

We use Wright's [6] power learning function and Globerson and Levin's [3] exponential forgetting function in our study. The mathematical relationship that describes Wright's learning function is T(x) = T(1)x^{−m}, where x is the cumulative volume, T(x) is the production time for the xth unit, and m is the learning curve parameter. Defining m = −log(p)/log(2), 100(1 − p) is the percentage decline in the unit production time with a doubling of the number of units. Note that m = 0 implies T(x) = T(1) for all values of x, and there is no learning. Large m (or small p) implies faster learning.

For our analysis to proceed, we need to allow x to take non-integer values, and we need to define T(x) as the "instantaneous" per-unit production time at the instant when the xth unit is completed. The interpretation for non-integer x is that the portion of work on the xth unit corresponding to the non-integer part of x has been completed (for example, if x = 4.2, then 20% of the work on the 5th unit has been completed). The interpretation for T(x) is that if the work were to progress at the rate it was being done at the completion of the xth unit, then it would take T(x) amount of time to complete one unit. In other words, if r_x is the instantaneous production rate (e.g., units/hour) at the time of completing the xth unit, then T(x) = 1/r_x. To keep the analysis simple and without sacrificing any major insights, we assume x ≥ 1. Thus, x = 1 is defined as the state with no learning. The learning function that we use in our analysis is T(x) = T0 x^{−m}, where T0 = T(1) is the initial instantaneous per-unit production time at the state with no learning or prior to any learning.

We next present the forgetting model that we use. To keep the presentation


simple, we will use "production time" in place of "instantaneous production time." Forgetting occurs when there is an interruption in production. In the context of this paper, forgetting occurs during the time between two successive batches of the same product. Let F[T, I] denote the per-unit production time after an interruption of I time units, given a production time of T at the start of the interruption. We have F[T, I] > T with forgetting. We would like our forgetting function to satisfy the following fundamental properties.

(1) The amount forgotten is increasing in I. That is, ∂(F[T, I] − T)/∂I > 0 for all T < T0.
(2) All previous learning is forgotten if the interruption is long enough. Specifically, lim_{I→∞} F[T, I] = T0 for all T < T0. Further, the time it takes to forget all that has been learned is increasing in the amount learnt.
(3) F[T, I] is increasing in T; that is, ∂F[T, I]/∂T > 0 for all I > 0.

We use the following exponential function introduced in [3] that satisfies all of these fundamental properties: F[T, I] = T + (1 − e^{−bI})(T0 − T). In this function, the coefficient b controls the rate of forgetting. The larger the value of b, the greater is the amount forgotten, and for b = 0, F[T, I] = T, so there is no forgetting. As b approaches infinity, F[T, I] approaches T0, and there is complete forgetting. The exponential forgetting function captures a rapid initial decrease in performance followed by a gradual leveling off.

3. Combining the effect of learning and forgetting over a production cycle

Assume a constant demand rate of d and that the process is operating under an equal-spaced and constant lot size (ESCLS) policy. Under this policy, a fixed batch size q is produced every R = q/d periods. The production cycle starts at the beginning of the production of the first unit in the batch and ends at the end of the "idle" time prior to the start of the next batch. R is the length of the production cycle. We assume that R is large enough so that the total production time of a batch is always less than R (e.g. qT0 < R). In each cycle we have both learning and forgetting. We wish to construct a function for the experience level that would capture the combined effect of learning and forgetting from one cycle to the next. Specifically, if x is the experience level at the beginning of a cycle, then we wish to develop a function G(x) such that G(x) is the experience level at the beginning of the next cycle.
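As an illustrative sketch (not part of the paper), Wright's learning curve and the Globerson–Levin forgetting function can be coded directly; all parameter values below (an 80% curve, T0 = 10, b = 0.05) are assumptions:

```python
import math

# Wright's learning curve T(x) = T0 * x**(-m) and the Globerson-Levin
# exponential forgetting function F[T, I] = T + (1 - e**(-b*I)) * (T0 - T).
# Parameter values are illustrative assumptions.

def learning_exponent(p: float) -> float:
    # m = -log(p)/log(2): a p-curve gives a 100(1-p)% drop per doubling
    return -math.log(p) / math.log(2)

def T(x: float, T0: float, m: float) -> float:
    # instantaneous per-unit production time at cumulative volume x >= 1
    return T0 * x ** (-m)

def forget(Tcur: float, I: float, T0: float, b: float) -> float:
    # unit time after an interruption of length I; relaxes back toward T0
    return Tcur + (1.0 - math.exp(-b * I)) * (T0 - Tcur)

m = learning_exponent(0.8)                  # assumed 80% learning curve
T0, b = 10.0, 0.05                          # assumed initial time, forgetting rate
print(T(2.0, T0, m))                        # doubling cuts unit time by 20%: ~8.0
print(forget(T(2.0, T0, m), 0.0, T0, b))    # no break: nothing forgotten
print(forget(T(2.0, T0, m), 1e6, T0, b))    # very long break: back to ~T0
```

The two limiting cases in the last two lines mirror fundamental properties (1) and (2): with no interruption nothing is forgotten, and a long enough interruption restores T0.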
Note that the experience level x in our model is the cumulative production volume in Wright's learning function: it captures the effect of learning and forgetting over the past history of production. Consider the nth batch or cycle. Denote the production time at the start of the nth batch by Tn. Since T(x) is a strictly decreasing function, its inverse, T^{−1}(t), is well defined and also strictly decreasing: T^{−1}(t) = (T0/t)^{1/m}.

Let xn = T^{−1}(Tn), noting that xn captures the cumulative learning and forgetting. We refer to xn as the "experience level" corresponding to Tn. Immediately after the nth batch is produced the production time is given by T(xn + q). The learning during the batch (expressed as a decrease in unit production time) is given by L(xn) = T(xn) − T(xn + q) = T0(xn^{−m} − (xn + q)^{−m}).
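A quick numerical check (illustrative values only) confirms the roundtrip between T and its inverse:

```python
# Inverse of Wright's curve: T_inv recovers the experience level from a unit
# production time, T^{-1}(t) = (T0/t)**(1/m). Values are illustrative.
T0, m = 10.0, 0.321928094887   # assumed 80% learning curve

def T(x):
    return T0 * x ** (-m)

def T_inv(t):
    return (T0 / t) ** (1.0 / m)

x = 7.5
print(T_inv(T(x)))   # recovers the experience level, ~7.5
```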

The idle time in the cycle until the start of the next batch is given by I(xn) = R − p(xn), where p(xn) is the total production time of a batch that starts with experience level xn, and is given by

p(xn) = T0 ∫_{xn}^{xn+q} x^{−m} dx = (T0/(1 − m)) [(xn + q)^{1−m} − xn^{1−m}].

The forgetting (expressed as an increase in unit production time) over the idle time I(xn) is given by F(xn) = (T0 − T(xn + q))(1 − e^{−bI(xn)}).
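The closed form for p(x) can be sanity-checked against a direct Riemann sum of the learning curve; the parameters below are illustrative assumptions:

```python
# p(x) = T0/(1-m) * ((x+q)**(1-m) - x**(1-m)), the integral of T0*u**(-m)
# over [x, x+q]. Parameter values are illustrative assumptions.
T0, m, q = 10.0, 0.321928094887, 5.0

def p_closed(x):
    return T0 / (1.0 - m) * ((x + q) ** (1 - m) - x ** (1 - m))

def p_numeric(x, n=100000):
    # midpoint Riemann sum as an independent check on the closed form
    h = q / n
    return sum(T0 * (x + (i + 0.5) * h) ** (-m) * h for i in range(n))

print(p_closed(2.0), p_numeric(2.0))   # the two agree closely
```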



Combining L(xn) and F(xn) gives the production time, N(xn), at the start of the next batch:

N(xn) = Tn+1
= T(xn + q) + (1 − e^{−bI(xn)})(T0 − T(xn + q))
= T0[(xn + q)^{−m} + (1 − e^{−bI(xn)})(1 − (xn + q)^{−m})]
= T0[1 − (1 − (xn + q)^{−m}) exp(−b(R − (T0/(1 − m))[(xn + q)^{1−m} − xn^{1−m}]))].

Using T^{−1}, we can translate N(xn) into its equivalent unit (e.g. xn+1). Define G(x) = T^{−1}(N(x)). G(x) gives the experience level, expressed in units, at the start of a batch given that x was the experience level at the start of the previous batch. Using R = q/d in the expression for N(xn), it is easy to see that

N(x) = T0[1 − (1 − (x + q)^{−m}) exp(−b(q/d − (T0/(1 − m))[(x + q)^{1−m} − x^{1−m}]))],

and

G(x) = (N(x)/T0)^{−1/m} = [1 − (1 − (x + q)^{−m}) exp(−b(q/d − (T0/(1 − m))[(x + q)^{1−m} − x^{1−m}]))]^{−1/m}.

Finally, define G^1(x) = G(x) and G^k(x) = G(G^{k−1}(x)) for k = 2, 3, 4, . . . .

4. Properties of G(x)

All of the results in this section assume an ESCLS production policy, and, as defined in Section 3, G(x) gives the experience level at the start of a batch given that x was the experience level at the start of the previous batch.

Property 1. The amount of learning, L(x), strictly decreases in x.

Proof. L(x) = T0(x^{−m} − (x + q)^{−m}) is continuous and the derivative of L(x) is ∂L(x)/∂x = T0 m{(x + q)^{−m−1} − x^{−m−1}} < 0. Thus, L(x) strictly decreases in x. □
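The map G and its iterates G^k can be sketched directly in code. This is an illustration, not part of the paper; the parameter values are assumptions chosen so that qT0 < R = q/d:

```python
import math

# The cycle map G from Section 3 and its iterates G^k. Parameter values are
# illustrative assumptions (chosen so that q*T0 < R = q/d).
T0, m, b, q, d = 1.0, 0.321928094887, 0.1, 10.0, 0.5

def G(x):
    p = T0 / (1 - m) * ((x + q) ** (1 - m) - x ** (1 - m))   # batch time p(x)
    idle = q / d - p                                          # I(x) = R - p(x)
    return (1 - (1 - (x + q) ** (-m)) * math.exp(-b * idle)) ** (-1.0 / m)

x = 1.0                     # start with no prior learning
for k in range(200):        # iterate G^k
    x = G(x)
print(round(x, 6))          # candidate steady-state experience level
```

With these particular parameters the iterates settle down quickly; the sections below characterize when such convergence is guaranteed.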


Property 2. The amount forgotten, F(x), strictly increases in x.

Proof. Similarly to the proof of Property 1, a proof for this property easily follows from F(x) = (T0 − T(x + q))(1 − e^{−bI(x)}). □

Result 1. There exists a unique fixed point of G(x).

Proof. Let x∗ denote a fixed point of G(x) if one exists. That is, x∗ is such that G(x∗) = x∗, or equivalently T(G(x∗)) = T(x∗) since T(x) is strictly monotone. Recall that T(G(x)) = T(x) − L(x) + F(x). Hence, L(x∗) = F(x∗). To prove the existence of a unique fixed point of G(x), we need to show that there exists one and only one x∗ such that L(x∗) = F(x∗). From Properties 1 and 2, we know that L(x) strictly decreases and F(x) strictly increases in x. Hence, there can be at most one value of x∗ such that L(x∗) = F(x∗). G(x) ≥ 1 for all x ≥ 1, since forgetting cannot exceed the total amount of learning. If G(1) = 1, 1 is the unique fixed point; otherwise, L(1) > F(1). Since as x → ∞, L(x) → 0, there must exist some x̃ such that L(x̃) < F(x̃). Hence, there must exist some x∗, 1 < x∗ < x̃, such that L(x∗) = F(x∗). □

We use x∗ to denote the unique fixed point of G(x) through the rest of the paper. The significance of the guaranteed existence of a unique fixed point is that if the sequence {G^k(x), k = 1, 2, 3, . . .} converges to a unique value, then the value is x∗. However, convergence to x∗ is not always guaranteed. We develop conditions when the sequence {G^k(x)} is guaranteed to converge to x∗. We also develop conditions when the even sequence {G^k(x), k = 2, 4, 6, . . .} and the odd sequence {G^k(x), k = 1, 3, 5, . . .} converge to different unique values. To prove these results, we need to prove several properties of the functions G(x) and G^2(x). Recall that G^2(x) ≡ G(G(x)). Note that since G(x) is continuous (see the definition of G(x) in Section 3), G^2(x) is also continuous.
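Result 1 also suggests a simple numerical procedure, sketched here under assumed parameter values: since L is strictly decreasing and F strictly increasing, bisection on the sign of L(x) − F(x) brackets the fixed point x∗:

```python
import math

# Locate x* with L(x*) = F(x*) by bisection, using Properties 1 and 2
# (L strictly decreasing, F strictly increasing). Illustrative parameters.
T0, m, b, q, d = 1.0, 0.321928094887, 0.1, 10.0, 0.5
R = q / d

def L(x):   # learning over one batch of q units
    return T0 * (x ** (-m) - (x + q) ** (-m))

def F(x):   # forgetting over the idle time R - p(x)
    p = T0 / (1 - m) * ((x + q) ** (1 - m) - x ** (1 - m))
    return T0 * (1 - (x + q) ** (-m)) * (1 - math.exp(-b * (R - p)))

lo, hi = 1.0, 1.0
while L(hi) > F(hi):          # grow hi until forgetting dominates
    hi *= 2.0
for _ in range(100):          # bisection on the sign of L - F
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if L(mid) > F(mid) else (lo, mid)
x_star = (lo + hi) / 2.0
print(round(x_star, 6))       # the unique fixed point of G
```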

Property 3. Both lim_{x→∞} G(x) and lim_{x→∞} G^2(x) exist.

Proof. As x → ∞, the batch production time (T0/(1 − m))((x + q)^{1−m} − x^{1−m}) → 0. Using this in the expression for G(x) in Section 3, we get

(i) lim_{x→∞} G(x) = (1 − e^{−bR})^{−1/m}, and
(ii) lim_{x→∞} G^2(x) = [1 − (1 − ((1 − e^{−bR})^{−1/m} + q)^{−m}) e^{−bR + b(T0/(1−m))(((1 − e^{−bR})^{−1/m} + q)^{1−m} − (1 − e^{−bR})^{−(1−m)/m})}]^{−1/m}.

We can see that both limits are finite. □

Property 4. For all x < x∗, G(x) > x and for all x > x∗, G(x) < x.

Proof. Consider any x < x∗. By Property 1, L(x) > L(x∗) and by Property 2, F(x) < F(x∗). Thus F(x) < F(x∗) = L(x∗) < L(x). Since the amount of learning is greater than the amount of forgetting, G(x) > x. Similarly, x > x∗ leads to G(x) < x. □

An implication of Property 4 is that G(x) relative to x is in the direction of x∗. However, G(x) can be larger than x∗ (overshoot) or less than x∗ (undershoot). The following property identifies conditions for this.

Property 5.
5.1. If ∂G(x)/∂x ≥ 0 ∀x, then for all x < x∗, G(x) ≤ x∗ and for all x > x∗, G(x) ≥ x∗.
5.2. If ∂G(x)/∂x < 0 ∀x, then for all x < x∗, G(x) ≥ x∗ and for all x > x∗, G(x) ≤ x∗.

Proof of 5.1. For all x < x∗, G(x) ≤ G(x∗) = x∗. For all x > x∗, G(x) ≥ G(x∗) = x∗. The proof of Property 5.2 is similar to that of Property 5.1. □

It is intuitive that the sequence G^k(x), k = 1, 2, . . . converges to x∗ under Property 5.1. However, we need to specify some more conditions to get such a convergence under Property 5.2. When the sequence does not converge to x∗, we identify conditions for convergence for odd and even sequences. We need additional properties to prove these results. Property 6 considers the impact of ∂G(x)/∂x on G^2(x). We need Lemma 1 to obtain Property 6.

Lemma 1. If ∂G(x)/∂x < 0 ∀x, then ∂G^2(x)/∂x > 0 ∀x.

Proof. By the chain rule, ∂G^2(x)/∂x = (∂G(G(x))/∂G(x)) · (∂G(x)/∂x). Since ∂G(x)/∂x < 0 ∀x, ∂G(G(x))/∂G(x) < 0 and ∂G(x)/∂x < 0. Thus, ∂G^2(x)/∂x > 0. □

Property 6. Let x′ denote any fixed point of G^2(x), and suppose that ∂G(x)/∂x < 0 ∀x. Then for all x < x′, G^2(x) < x′ and for all x > x′, G^2(x) > x′.

Proof. ∂G^2(x)/∂x > 0 ∀x by Lemma 1. For any x < x′, G^2(x) < G^2(x′) = x′. Similarly, for any x > x′, G^2(x) > G^2(x′) = x′. □

An implication of Property 6 is that when ∂G(x)/∂x < 0, the subsequence {G^k(x) : k = 2, 4, 6, . . .} cannot cross any fixed point of G^2(x).

Property 7. If x′ is a fixed point of G^2(x), then (1) G(x′) is also a fixed point of G^2(x) and (2) if G(x′) = x″, then x′ = G(x″).

Proof. By the definition of fixed point x′,

G^2(x′) = x′ ⇔ G(G^2(x′)) = G(x′) ⇔ G^2(G(x′)) = G(x′).

So, G(x′) is also a fixed point of G^2(x). Let G(x′) = x″. Then

G(x′) = x″ ⇔ G(G(x′)) = G(x″) ⇔ x′ = G(x″),

because G^2(x′) = x′. □

Property 8 extends Property 7. Note that since x∗ is a fixed point of G(x), it is also a fixed point of G^2(x). Let x′ be a fixed point of G^2(x). Let x″ = G(x′) be another fixed point of G^2(x).

Property 8. If x′ is a fixed point of G^2(x), then (1) x′ ≠ x∗ ⇔ x″ ≠ x∗, and (2) x′ ≠ x∗ ⇔ x″ ≠ x′.

Proof. x′ ≠ x∗ ⇔ G(x′) ≠ x∗ (only x = x∗ satisfies G(x) = x) ⇔ x″ ≠ x∗. x′ ≠ x∗ ⇔ G(x′) ≠ x′ (G(x) = x is true only for x = x∗) ⇔ x″ ≠ x′. □

Property 9. The number of fixed points of G^2(x) is odd.


Proof. Since x∗ is a fixed point of G^2(x), the number of fixed points is at least one. Consider the number of fixed points of G^2(x) that are not equal to x∗. From Properties 7 and 8, if x′ < x∗ is a fixed point of G^2(x), then x″ = G(x′) is also a fixed point such that x″ > x∗. Further, if x1 and x2 are two distinct fixed points of G^2(x), both less than x∗, then G(x1) and G(x2) are also distinct fixed points of G^2(x), both greater than x∗. That is, all fixed points of G^2(x) other than x∗ must come in pairs {x, G(x)} that are on opposite sides of x∗. Hence the number of fixed points, including x∗, must be odd. □

[Fig. 1. Possible shapes of G^2(x) when G^2(x) has 3 fixed points.]

Property 10. If ∂G(x)/∂x ≥ 0 ∀x, then G^2(x) has exactly one fixed point, x∗.

Proof. We prove Property 10 by contradiction. Assume that G^2(x) has more than one fixed point. Then, at least one of the fixed points is not equal to x∗. Let x′ be a fixed point of G^2(x) such that x′ ≠ x∗. Suppose x′ < x∗. x′ < G(x′) by Property 4, and G(x′) < x∗, because ∂G(x)/∂x ≥ 0 ∀x (see Property 5). This leads to G(x′) < G^2(x′) by Property 4. Thus, x′ < G(x′) < G^2(x′) so that G^2(x′) ≠ x′. This contradicts the fact that x′ is a fixed point of G^2(x). A similar contradiction is reached if we suppose that x′ > x∗. □

Properties 11 and 12 are based on the number of fixed points of G^2(x).

Property 11. Suppose that G^2(x) has only one fixed point, x∗. Then, G^2(x) > x for all x < x∗ and G^2(x) < x for all x > x∗.

Proof. Since x∗ is the only point where G^2(x) = x, the curve y = G^2(x) crosses the 45° line y = x only once. Since G^2(x) is continuous, G^2(1) ≥ 1, and lim_{x→∞} G^2(x) is finite (Property 3), it must be the case that G^2(x) > x for all x < x∗ and G^2(x) < x for all x > x∗. □

Property 12. If G^2(x) has 3 fixed points, x−, x∗ and x+, where x− < x∗ < x+, such that G^2(x) crosses the line y = x (the 45° line) at each fixed point, then (1) G^2(x) > x for x < x− and for x∗ < x < x+, and (2) G^2(x) < x for x− < x < x∗ and for x+ < x.

Proof. Each of the three fixed points of G^2(x) can be viewed as a point where (G^2(x) − x) changes sign. Thus, either (see Fig. 1)

Case 1: G^2(x) > x for x < x− and for x∗ < x < x+, and G^2(x) < x for x− < x < x∗ and for x+ < x, or
Case 2: G^2(x) < x for x < x− and for x∗ < x < x+, and G^2(x) > x for x− < x < x∗ and for x+ < x.

Since lim_{x→∞} G^2(x) is finite (Property 3), it must be Case 1: G^2(x) > x for x < x− and for x∗ < x < x+, and G^2(x) < x for x− < x < x∗ and for x+ < x. In other words, given the conditions in Property 12 (G^2(x) has 3 fixed points and G^2(x) crosses the 45° line at each of these fixed points), only Case 1 exists and Case 2 does not. □

We now have sufficient properties of G(x) and G^2(x) to prove the existence of the convergence of {G^k(x)}. We will first discuss the possible types of convergence and then give results on the existence of a convergence. Recall that p(x) is the total production time of a batch that starts with experience level x. Note that as k → ∞, the convergence of the batch production-time {p(G^k(x))} is determined by the convergence of {G^k(x)} because p(G^k(x)) = (T0/(1 − m))((G^k(x) + q)^{1−m} − (G^k(x))^{1−m}).
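As a numerical sketch of the distinction developed below (illustrative parameter values, not from the paper), one can iterate the cycle map G and compare the tails of the odd and even subsequences of {G^k(x)}: equal tails indicate single-point convergence, while distinct tails would indicate alternating convergence to (x−, x+):

```python
import math

# Probe the convergence type of {G^k(x)}: iterate the cycle map G, then
# compare the tails of the odd and even subsequences. All parameters are
# illustrative assumptions.
T0, m, b, q, d = 1.0, 0.321928094887, 0.1, 10.0, 0.5

def p(x):  # batch production time at experience level x
    return T0 / (1 - m) * ((x + q) ** (1 - m) - x ** (1 - m))

def G(x):  # experience level one cycle later (R = q/d)
    return (1 - (1 - (x + q) ** (-m)) * math.exp(-b * (q / d - p(x)))) ** (-1.0 / m)

seq, x = [], 1.0
for _ in range(400):
    x = G(x)
    seq.append(x)
odd_tail, even_tail = seq[-2], seq[-1]      # G^399(x) and G^400(x)
alternating = abs(odd_tail - even_tail) > 1e-9
print(alternating, round(p(even_tail), 6))  # convergence type, limiting batch time
```

With these particular parameters the two tails coincide (single-point convergence); alternating convergence would require parameters satisfying the conditions of Result 4 below.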

4.1. Convergence properties

We now identify the possible types of convergence of {G^k(x)}, and match each type with the properties of G(x) considered above.

Single-point convergence: {G^k(x)} converges to a single point, x∗. We have identified two sub-types of single-point convergence, "one-sided" and "double-sided". When {G^k(x)} converges to x∗ and either all terms in the sequence are less than or equal to x∗ or all terms are greater than or equal to x∗, we label the sequence as one-sided convergence to x∗. When {G^k(x)} converges to x∗ and for all k, G^{k+1}(x) < x∗ if and only if G^k(x) > x∗, we label the sequence as double-sided convergence to x∗. That is, the elements of the sequence alternate between being greater than and less than x∗.

Alternating convergence: The "odd" subsequence, {G_D(x)} = {G^k(x), k = 1, 3, 5, . . .}, and the "even" subsequence, {G_E(x)} = {G^k(x), k = 2, 4, 6, . . .}, converge to two different values. One converges to x−, the other to x+, where x− and x+ are fixed points of G^2(x) and x− < x∗ < x+.

Next, we provide sufficient conditions for these types of convergence.

Result 2. If ∂G(x)/∂x ≥ 0 ∀x, then {G^k(x)} has one-sided convergence to x∗.

Proof of Result 2. By combining Properties 4 and 5, we get x < G(x) ≤ x∗ for x < x∗ and x > G(x) ≥ x∗ for x > x∗. Hence, {G^k(x)} is monotonically increasing or decreasing and bounded by x∗, and therefore must converge. Suppose {G^k(x)} converges to some x̂ ≠ x∗. It must be that x̂ is a fixed point of G(x), but this cannot be since x∗ is the unique fixed point. Thus, {G^k(x)} must converge to x∗, and the convergence is one-sided. □

Result 3. If ∂G(x)/∂x < 0 ∀x and G^2(x) has exactly one fixed point, x∗, then {G^k(x)} has double-sided convergence to x∗.

Proof of Result 3. The proof of Result 3 is similar to that of Result 2. The odd and even subsequences, {G^k(x), k = 1, 3, 5, . . .} and {G^k(x), k = 2, 4, 6, . . .}, are bounded by x∗ and one is monotonically increasing and the other is monotonically decreasing (by Property 11). Both subsequences must converge to a fixed point of G^2(x). Since there is only one such fixed point, x∗, they must converge to x∗. Further, (by Property 5) one subsequence converges from below x∗, and the other subsequence converges from above x∗. □

Result 4. Suppose that ∂G(x)/∂x < 0 for all x, and that G^2(x) has 3 fixed points x−, x∗ and x+, where x− < x∗ < x+, and crosses the line y = x (the 45° line) at each of them (see Case 1 in Fig. 1). Then for all x ≠ x∗, {G^k(x)} has alternating convergence to (x−, x+).


Proof of Result 4. The three fixed points divide the line into 4 intervals: I1: x < x−, I2: x− < x < x∗, I3: x∗ < x < x+, and I4: x+ < x. From Property 6, the subsequences {G_D(x)} and {G_E(x)} are each entirely contained in one of these intervals, and from Property 12, they are monotone in that interval. Suppose one of the subsequences is in I1. The subsequence is increasing (Property 12) and bounded from above by x−. Hence, a limit exists, and since that limit must be a fixed point of G^2(x), the limit must be x−. Now suppose one of the subsequences is in I2. The subsequence is decreasing and bounded from below by x−. Again the limit point must be x−. Similar reasoning leads to a limit of x+ when either subsequence is in I3 or I4. If x < x∗, then by Property 5.2, G(x) > x∗, G^2(x) < x∗, G^3(x) > x∗, and so on. Hence, all elements of {G_E(x)} are in I1 or I2 so that {G_E(x)} converges to x−, and {G_D(x)} is in I3 or I4 and converges to x+. Similarly, if x > x∗, {G_E(x)} converges to x+ and {G_D(x)} converges to x−. Note that alternating convergence occurs for all x ≠ x∗, but not for x = x∗ since x∗ is a fixed point of G(x). □

Although we could not find any examples where G^2(x) crosses the line y = x more than 3 times, we could not exclude that possibility. Also note that our convergence results (Results 2–4) are based on the property that ∂G(x)/∂x ≥ 0 ∀x or ∂G(x)/∂x < 0 ∀x. We have not considered cases in which ∂G(x)/∂x changes sign. Note that in Teyarachakul's thesis [5], she generalizes many of the convergence results presented here for Globerson and Levin's [3] exponential forgetting function and Wright's learning curve to more general classes of learning and forgetting functions.

5. Optimal batch size q∗

We present a function for the average per period cost given a lot size of q. The function is evaluated at the limit of {G^k(x)}. We only consider problem parameters for which {G^k(x)} converges to x∗. In our model, costs include a fixed setup cost per batch, the labor


cost that is proportional to the batch production time, and a linear inventory holding cost per time period. Let S be the setup cost per batch, h be the inventory holding cost per unit per period, w be the labor cost per period of work, and d be the demand rate per period. Define ACS(q) as the average setup cost per period, ACH(q) as the average inventory holding cost per period, and ACL(q) as the average production labor cost per period. Define AC(q) as the total of these three costs per period. We have

ACS(q) = Sd/q,    ACH(q) = hq/2,    and    ACL(q) = w p∗(q) d / q,    where

p∗(q) is the batch production-time corresponding to experience level x∗(q) at the start of each batch in steady state. ACS(q) is decreasing and convex in q and ACH(q) is linearly increasing in q. In general, ACL(q) is neither convex nor concave, so that AC(q) could be non-convex, as illustrated. The non-convexity of AC(q) imposes difficulties in obtaining the optimal lot size q∗, since the first order condition (∂AC(q)/∂q = 0) could lead to a sub-optimal solution. We may need to use a search algorithm to find the optimal q.

References

[1] S. Axsater, S.E. Elmaghraby, A note on EMQ under learning and forgetting, AIIE Transactions 13 (1981) 86–90.
[2] S.E. Elmaghraby, Economic manufacturing quantities under conditions of learning and forgetting, Production Planning and Control 1 (1990) 196–208.
[3] S. Globerson, N. Levin, Incorporating forgetting into learning curves, International Journal of Operations and Production Management 7 (1987) 80–94.
[4] D.R. Sule, The effect of alternate periods of learning and forgetting on economic manufacturing quantity, AIIE Transactions 10 (1978) 338–343.
[5] S. Teyarachakul, The impact of learning and forgetting on production batch sizes, Thesis, Purdue University, 2003.
[6] T.P. Wright, Factors affecting the cost of airplanes, Journal of Aeronautical Sciences 3 (1936) 122–128.