RETRACTED: Apply Fuzzy Learning Effect with Fuzzy Processing Times for Single Machine Scheduling Problems

Journal of Manufacturing Systems 42 (2017) 244–261

Apply Fuzzy Learning Effect with Fuzzy Processing Times for Single Machine Scheduling Problems Hamed Asadi Department of Industrial Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran

Article info

Article history: Received 8 August 2016; Received in revised form 9 December 2016; Accepted 14 December 2016

Keywords: Scheduling; Fuzzy learning effect; Fuzzy processing time; Fuzzy mixed integer nonlinear programming; Likelihood profile; Chance constrained programming

Abstract

The most common objectives in single-machine scheduling are to minimize the makespan, the total completion time and the total weighted completion time, respectively. In this paper, we consider single machine scheduling problems under a position-dependent fuzzy learning effect with fuzzy processing times, and we show that these three problems are polynomially solvable in this setting. In order to model the uncertainty of the fuzzy model parameters, such as processing times and the learning effect, we use an approach called the likelihood profile, which depends on the possibility and necessity measures of the fuzzy parameters. For the three objective functions, we build Fuzzy Mixed Integer Nonlinear Programming (FMINP) models based on the chance constrained programming technique for the same predetermined confidence levels. Furthermore, we present polynomially solvable algorithms for different confidence levels for these problems. © 2017 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.

1. Introduction

Single machine scheduling problems are significant for understanding scheduling theory and allow decision makers to generalize easily; each scheduling problem can in fact be viewed as a set of single machine scheduling problems. Single machine scheduling problems provide a chance to simplify the system as much as possible, helping a decision maker understand all system components and requirements. The complexity of a scheduling problem is generally defined by how much time its solution algorithm takes to find the global optimal solution. Some scheduling problems, such as 1|r_j|ΣC_j, 1|s_jk|C_max and Pm|prmp|Σw_jC_j [1], are in the class of NP-hard problems even when all their parameters are deterministic, and using non-deterministic model parameters may add further to the complexity of a problem. In most of the research in the literature, processing times are defined as crisp values. However, crisp values are not always suitable for real-life uncertainty. In some situations, the processing time needs to be defined on a closed interval to model the uncertainty, because the processing time of a job may change due to a worker's performance or a machine's working speed. Therefore, processing times can be defined in the form of triangular fuzzy numbers whose components are the optimistic, most likely and pessimistic processing times, respectively. The expression of learning effect implies that continuous repetition of similar jobs leads workers or the system to process the task at hand faster than before: each repetition of similar jobs on a machine or a processor has a time-decreasing effect on processing time. This well-known phenomenon is called the learning effect in the literature. In most of the research in the literature, the learning effect is defined as a negative crisp value on the interval [−1, 0], and the reader can encounter it in two types: job-dependent and position-dependent.
Learning is a much harder term to define in deterministic form than processing time. A system may not continuously learn at the same rate, and this circumstance introduces uncertainty into the learning effect. There is no certain way to make a system learn in the same proportion at each repetition of similar jobs. Instead of trying to force the system to learn in the same proportion at each repetition, we can define the learning effect on an interval. If we use the same reasoning as when we model processing time in the form of triangular fuzzy numbers, then there will be three marked learning effect ratios on the interval: the optimistic, most likely and pessimistic values, respectively. The concept of fuzzy processing time in single machine scheduling problems has already been studied by some researchers. The first study with fuzzy processing time on a single machine was conducted by Liao and Liao [2]. In their study, they define due dates and processing times in the form of fuzzy numbers to express the decision maker's satisfaction level with a job's completion when compared to its due date. They also proposed polynomially solvable mathematical solution approaches to their model when the objective

E-mail address: Hamed [email protected] http://dx.doi.org/10.1016/j.jmsy.2016.12.004 0278-6125/© 2017 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.


is to maximize the minimum satisfaction level of job completions. Itoh and Ishii [3] put forth a model consisting of fuzzy due dates and processing times to minimize the number of tardy jobs under uncertainty. Chanas and Kasperski [4] used L-R type fuzzy numbers to model processing times and due dates. In their study, there are two objective functions: minimization of the maximum lateness and maximization of the minimum satisfaction level with respect to a fuzzy precedence relation. Wang et al. [5] used the membership function of triangular fuzzy processing times to build a job completion likelihood profile for each job. Using this new approach, they proposed a fuzzy chance constrained program for any confidence level predetermined by the decision maker. Their approach is redesigned in our study to fit the single machine problem under a fuzzy learning effect. Chanas and Kasperski [6] proposed two similar single machine scheduling problems with fuzzy processing times and fuzzy due dates. In their study, the objective functions are to minimize the maximum expected value of fuzzy tardiness and to minimize the expected value of the maximum fuzzy tardiness. Sung and Vlach [7] studied a single machine scheduling problem whose objective is to minimize the number of late jobs under uncertainty with fuzzy processing times and fuzzy due dates. Chanas and Kasperski [8] used possibility and necessity measures of fuzzy processing times to find the optimum schedule for the 1||Σw_jC_j problem. Kasperski [9] put forth a general formulation of a fuzzy single machine scheduling problem and showed a generalized Lawler's algorithm as a solution approach. Cheng and Cheng [10] proposed a single batch processing machine problem with non-identical job sizes using fuzzy processing times; furthermore, they built an ant colony optimization using the Metropolis criterion to minimize the makespan. Li et al.
[11] considered a single machine due date assignment problem with fuzzy processing times and a general precedence constraint. They proved that if the problem is handled without the general precedence constraint, a polynomial time algorithm can be proposed as the solution policy; however, including the general precedence constraint makes the problem NP-hard. Moghaddam et al. [12] proposed a single machine problem with two objectives: minimizing the total weighted tardiness and minimizing the makespan. They constructed a fuzzy multi-objective linear programming model to solve their model. Li et al. [13] considered a single machine scheduling problem whose objective is to minimize the total weighted possibilistic mean value of the weighted earliness/tardiness costs. Bentrcia et al. [14] proposed a single machine problem including discounting costs with fuzzy processing times, fuzzy weighting coefficients and fuzzy discount costs. They suggested genetic algorithm and pattern search approaches using possibility theory for their model. In the scheduling literature, Biskup's study [15] is a milestone for the single machine scheduling problem under learning effect in a deterministic environment; in our study, we use Biskup's position-dependent learning effect. There is little research using a learning effect with fuzzy processing times in the single machine environment. Ahmadizar and Hosseini [16] proposed a single machine scheduling model under a position-dependent learning effect with fuzzy processing times, and proved that the shortest processing time (SPT) dispatching rule can be an optimal policy for the model with triangular fuzzy processing times. Ahmadizar and Hosseini [17] proposed a single machine scheduling problem whose objective is to minimize the makespan under a position-dependent learning effect with fuzzy processing times; they developed two different polynomial time algorithms for the problem.
Mosheiov and Sidney [25] show that in the new, possibly more realistic setting, the problems of makespan and total flow-time minimization on a single machine, a due-date assignment problem and total flow-time minimization on unrelated parallel machines remain polynomially solvable. Koulamas [26] shows that the O(n log n) (where n is the number of jobs) shortest processing time (SPT) sequence is optimal for the single-machine makespan and total completion time minimization problems when learning is expressed as a function of the sum of the processing times of the already processed jobs. He then shows that the two-machine flowshop makespan and total completion time minimization problems are solvable by the SPT sequencing rule when the job processing times are ordered and job-position-based learning is in effect; finally, when the more specialized proportional job processing times are in place, these flowshop results also apply in the more general sum-of-job-processing-times-based learning environment. Yin [27] focuses on single-machine scheduling problems with the objectives of minimizing the makespan, total completion time, total weighted completion time, discounted total weighted completion time, and maximum lateness based on the proposed model, respectively, and shows that they are polynomially solvable and optimal under certain conditions. Additionally, he presents some approximation algorithms based on the optimal schedules for the corresponding single-machine scheduling problems and analyzes their worst case error bounds. Vahedi-Nouri et al. consider a single machine scheduling problem with a learning effect and multiple availability constraints that minimizes the total completion time. To solve this problem, a new binary integer programming model is presented, and a branch-and-bound algorithm is also developed for solving the given problem optimally.
Since the problem is strongly NP-hard, two meta-heuristics, namely a genetic algorithm and simulated annealing, are developed to find near-optimal solutions for large-sized problems within a reasonable time. According to the results, the developed approach has the potential to yield near-optimal solutions for large-sized problems. To the best of our knowledge, this is the first study that considers non-monotonic time-dependent processing times, deterioration and learning effect simultaneously in a single-machine scheduling problem with the objectives of makespan, total completion time and total weighted completion time minimization. Most scheduling problems are NP-hard; because of this, only small instances can be solved optimally by branch-and-bound, dynamic programming and integer programming models, whereas real applications in industry may require dealing with large instances. Researchers have worked to develop methods for solving these types of problems; however, the high computational requirements of branch-and-bound procedures and the high memory requirements of dynamic programming techniques make them inapplicable to large-scale problems. Since the scheduling problem without learning effect on a single machine is NP-hard, the problem tackled here is also NP-hard. To the best of our knowledge, no work exists on single machine scheduling under a position-dependent fuzzy learning effect with fuzzy processing times. Due to the ill-known quantities within the model, determining optimal solutions in the conventional way is not an affordable task, and more elaborate frameworks should be developed. In this context, we introduce solution approaches for the proposed fuzzy scheduling problem in order to obtain an exact or a satisfactory near-optimal solution.
There are two main drawbacks in fuzzy scheduling: 1) the design of fuzzy scheduling is usually performed in an ad hoc manner, where it is often difficult to choose some of the scheduling parameters; and 2) a fuzzy schedule constructed for the nominal plant may later perform inadequately if significant and unpredictable plant parameter variations occur. In this paper we illustrate these problems by 1) developing, implementing, and evaluating a fuzzy scheduling approach for single machine scheduling problems under a position-dependent fuzzy learning effect and 2) with fuzzy processing times. Next, we show how to develop and implement a fuzzy model


Fig. 1. Membership Function of Triangular Fuzzy Number.

reference learning controller for the single machine scheduling problems and illustrate that it can automatically synthesize a rule base for fuzzy scheduling that achieves performance comparable to a manually constructed one, and automatically tune the fuzzy scheduling so that it can adapt to variations in the planning. The considered single machine scheduling with learning effect problem can be solved optimally for small instances of up to 25 jobs by the proposed Fuzzy Mixed Integer Nonlinear Programming model; meta-heuristic methods must therefore be applied to solve large instances. In this paper, we consider single machine scheduling problems under a position-dependent fuzzy learning effect with fuzzy processing times. We study three objectives: minimizing the makespan, the total completion time and the total weighted completion time, respectively. Furthermore, we show that these three problems are polynomially solvable under a position-dependent fuzzy learning effect with fuzzy processing times. In order to model the uncertainty of the fuzzy model parameters, such as processing times and the learning effect, we use, for the first time, an approach called the likelihood profile, which depends on the possibility and necessity measures of the fuzzy parameters. For the three objective functions, we build Fuzzy Mixed Integer Nonlinear Programming (FMINP) models based on the chance constrained programming technique for the same predetermined confidence levels. Furthermore, we present new polynomially solvable algorithms for different confidence levels for these problems.

2. Definition and assumptions

Assumptions
• Processing times and learning effect coefficients are in the form of triangular fuzzy numbers. Consequently, actual processing times and completion times are also triangular fuzzy numbers.
• All jobs have to be processed on a single machine, and each job's release date is equal to 0. This scheduling characteristic means all jobs are ready to be processed at the beginning.
• Preemption is not allowed and there is no idle time (unforced idleness) between jobs while they are being processed on the machine.

Definitions

Definition 1. A fuzzy number is defined by a convex normalized fuzzy set Ã of the real axis with a membership function μ_Ã(x): R → [0,1], and a fuzzy number must satisfy the conditions below.
• μ_Ã(x) is normal, i.e. there exists x0 ∈ R such that μ_Ã(x0) = 1,
• μ_Ã(x) is quasi-concave, i.e. μ_Ã(λx + (1 − λ)y) ≥ min{μ_Ã(x), μ_Ã(y)}, where x, y ∈ R and λ ∈ [0,1],
• μ_Ã is upper semi-continuous on R.

Definition 2. A fuzzy number Ã is a positive (non-negative) number if μ_Ã(x) = 0 for all x < 0.

Definition 3. A fuzzy number Ã is a negative (non-positive) number if μ_Ã(x) = 0 for all x > 0.

Definition 4. The support of every fuzzy number Ã, supp(Ã) = {x ∈ R : μ_Ã(x) > 0}, is bounded.

Definition 5. A fuzzy number Ã is presented with three points on the real axis, Ã = (A^L, A^C, A^R). If its membership function is increasing on the interval [A^L, A^C] and decreasing on the interval [A^C, A^R], the fuzzy number Ã is called a triangular fuzzy number on the interval [A^L, A^R] (see Fig. 1). A^L is the left side value (optimistic value) with zero membership, A^C is the center (most likely) value with membership one, and A^R is the right side value (pessimistic value) with zero membership. The membership function of the triangular fuzzy number Ã is given by Eq. (1).

μ_Ã(x) =
  L_A(x) = (x − A^L) / (A^C − A^L),  if A^L ≤ x ≤ A^C
  R_A(x) = (A^R − x) / (A^R − A^C),  if A^C ≤ x ≤ A^R
  0,  otherwise                                        (1)


In Eq. (1), L_A(x) is the left side of the piecewise linear membership function μ_Ã(x); it is an increasing function defined on the interval [A^L, A^C]. R_A(x) is the right side of the piecewise linear membership function μ_Ã(x); it is a decreasing function defined on the interval [A^C, A^R].
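As a quick numerical check, Eq. (1) can be sketched in a few lines of Python (a minimal illustration of ours, not code from the paper; it uses the triangular number Ã = (10, 23, 32) that the paper later plots in Figs. 2–5):

```python
def tri_membership(x, AL, AC, AR):
    """Membership value mu_A(x) of a triangular fuzzy number (AL, AC, AR), Eq. (1)."""
    if AL <= x <= AC:                # left, increasing side L_A(x)
        return (x - AL) / (AC - AL) if AC > AL else 1.0
    if AC < x <= AR:                 # right, decreasing side R_A(x)
        return (AR - x) / (AR - AC) if AR > AC else 1.0
    return 0.0                       # outside the support

# For A = (10, 23, 32): membership is 1 at the center and 0 at the endpoints.
print(tri_membership(23, 10, 23, 32))    # 1.0
print(tri_membership(16.5, 10, 23, 32))  # 0.5
```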

Definition 6. For any α in the interval [0,1], the α-cut of a triangular fuzzy number Ã is the set A_α = {x ∈ R : μ_Ã(x) ≥ α}. A_α is calculated as shown in Eq. (2).

A_α = [A^L + (A^C − A^L)α, A^R − (A^R − A^C)α]                  (2)

Definition 7. Let Ã = (A^L, A^C, A^R) on the interval [A^L, A^R] and B̃ = (B^L, B^C, B^R) on the interval [B^L, B^R] be two triangular fuzzy numbers. The summation, subtraction, multiplication and division operations of Ã and B̃ are shown in Eqs. (3)–(6).

Ã(+)B̃ = (A^L + B^L, A^C + B^C, A^R + B^R)                      (3)

Ã(−)B̃ = (A^L − B^R, A^C − B^C, A^R − B^L)                      (4)

Ã(·)B̃ = (min{A^L·B^L, A^L·B^R, A^R·B^L, A^R·B^R}, A^C·B^C, max{A^L·B^L, A^L·B^R, A^R·B^L, A^R·B^R})    (5)

Ã(/)B̃ = (min{A^L/B^L, A^L/B^R, A^R/B^L, A^R/B^R}, A^C/B^C, max{A^L/B^L, A^L/B^R, A^R/B^L, A^R/B^R})    (6)
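The interval arithmetic of Eqs. (3)–(6) operates directly on (A^L, A^C, A^R) triples; a minimal Python sketch of ours (assuming, for division, that zero is not in the support of B̃):

```python
def f_add(A, B):
    # Eq. (3): componentwise addition
    return (A[0] + B[0], A[1] + B[1], A[2] + B[2])

def f_sub(A, B):
    # Eq. (4): the left end subtracts B's right end, and vice versa
    return (A[0] - B[2], A[1] - B[1], A[2] - B[0])

def f_mul(A, B):
    # Eq. (5): the endpoints are the extreme products of the support endpoints
    ends = (A[0] * B[0], A[0] * B[2], A[2] * B[0], A[2] * B[2])
    return (min(ends), A[1] * B[1], max(ends))

def f_div(A, B):
    # Eq. (6): as multiplication, with quotients (0 must not lie in supp(B))
    ends = (A[0] / B[0], A[0] / B[2], A[2] / B[0], A[2] / B[2])
    return (min(ends), A[1] / B[1], max(ends))

print(f_add((1, 2, 3), (2, 3, 4)))  # (3, 5, 7)
print(f_sub((1, 2, 3), (2, 3, 4)))  # (-3, -1, 1)
```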

Definition 8. Let ⊗ denote an operation on the real axis for fuzzy numbers. Ã, B̃ and C̃ are fuzzy sets with membership functions μ_Ã(x), μ_B̃(y) and μ_C̃(z), respectively, and C̃ = Ã ⊗ B̃ is an equation defined on the real axis. By using Zadeh's extension principle [18], we reach μ_C̃(z) as in Eq. (7).

μ_C̃(z) = μ_(Ã⊗B̃)(z) = sup_{x⊗y=z} min{μ_Ã(x), μ_B̃(y)}          (7)
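Eq. (7) can be illustrated on discrete supports, where the supremum becomes a maximum over all pairs with x ⊗ y = z (a toy sketch of ours, not the paper's continuous formulation; the sample fuzzy sets are hypothetical):

```python
def extension(op, A, B):
    """Discrete Zadeh extension principle, Eq. (7): A and B map support points
    to membership degrees; returns the fuzzy set C = A (op) B as a dict."""
    C = {}
    for x, mx in A.items():
        for y, my in B.items():
            z = op(x, y)
            # sup over all (x, y) with x (op) y == z of min(mu_A(x), mu_B(y))
            C[z] = max(C.get(z, 0.0), min(mx, my))
    return C

A = {1: 1.0, 2: 0.5}   # small discrete fuzzy sets (hypothetical)
B = {10: 1.0, 11: 0.4}
print(extension(lambda x, y: x + y, A, B))  # {11: 1.0, 12: 0.5, 13: 0.4}
```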

Definition 9. Possibility and necessity are substantially significant measures for evaluating fuzzy numbers. Possibility theory was proposed by Zadeh [19] to measure the chance of a fuzzy event; the necessity measure is the dual of the possibility measure. Let Ã be a triangular fuzzy number Ã = (A^L, A^C, A^R) with membership function μ_Ã(x). The possibility and necessity measures are shown in Eqs. (8) and (9) for any real number r, respectively.

Pos_A(A ≤ r) = sup_{x≤r} μ_Ã(x) =
  0,  r ≤ A^L
  L_A(r) = (r − A^L) / (A^C − A^L),  A^L ≤ r ≤ A^C
  1,  r ≥ A^C                                        (8)

Nec_A(A ≤ r) = 1 − sup_{x>r} μ_Ã(x) =
  0,  r ≤ A^C
  1 − R_A(r) = (r − A^C) / (A^R − A^C),  A^C ≤ r ≤ A^R
  1,  r ≥ A^R                                        (9)
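For a triangular fuzzy number, the two measures of Eqs. (8) and (9) are simple piecewise-linear functions, with possibility always dominating necessity; a minimal Python sketch of ours, using the paper's example Ã = (10, 23, 32):

```python
def pos_leq(r, AL, AC, AR):
    # Eq. (8): possibility that A <= r
    if r <= AL:
        return 0.0
    if r >= AC:
        return 1.0
    return (r - AL) / (AC - AL)

def nec_leq(r, AL, AC, AR):
    # Eq. (9): necessity that A <= r (dual of the possibility measure)
    if r <= AC:
        return 0.0
    if r >= AR:
        return 1.0
    return (r - AC) / (AR - AC)

# Pos >= Nec for every r, as the distributions in Figs. 2-3 suggest.
for r in (5, 16.5, 23, 27.5, 40):
    assert pos_leq(r, 10, 23, 32) >= nec_leq(r, 10, 23, 32)
```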

Definition 10. Although possibility and necessity are duals of each other, neither measure is self-dual. Liu and Liu [20] proposed the credibility measure, the average of the possibility and necessity measures, to define a self-dual measure. For any real number r, the credibility measure is shown in Eq. (10).

Cr_A(A ≤ r) = (1/2)(Pos_A(A ≤ r) + Nec_A(A ≤ r)) =
  0,  r ≤ A^L
  L_A(r)/2 = (r − A^L) / (2(A^C − A^L)),  A^L ≤ r ≤ A^C
  (2 − R_A(r))/2 = (A^R + r − 2A^C) / (2(A^R − A^C)),  A^C ≤ r ≤ A^R
  1,  r ≥ A^R                                        (10)

Let Θ be a nonempty set and P(Θ) the power set of Θ. Each element of P(Θ) is an event, and the chance of a fuzzy event A ∈ P(Θ) is defined with the credibility measure Cr{·}. The axioms of the credibility measure [20] are listed below.
• (Normality) Cr{Θ} = 1
• (Monotonicity) Cr{A} ≤ Cr{B} whenever A ⊆ B
• (Self-duality) Cr{A} + Cr{A^c} = 1 for any A ∈ P(Θ)
• (Maximality) Cr{∪_i A_i} = sup_i Cr{A_i} for any collection {A_i} in P(Θ) with sup_i Cr{A_i} ≤ 0.5


Fig. 2. Possibility Distribution.

Fig. 3. Necessity Distribution.

Definition 11. Let ξ be a fuzzy number with membership function μ_ξ(x) and credibility measure Cr. The credibility distribution function (see [21]) of the fuzzy number ξ, Φ: R → [0, 1], is shown in Eq. (11).

Φ(x) = Cr(ξ ∈ Θ | ξ ≤ x)                             (11)

Φ(x) is the credibility measure for the fuzzy number ξ under the condition ξ ≤ x. Liu [22] proved that the credibility distribution function Φ(x) is a non-decreasing function on the real axis with Φ(−∞) = 0 and Φ(∞) = 1. For a triangular fuzzy number Ã = (A^L, A^C, A^R), the credibility distribution function, where A ≤ r and r is a crisp number, is obtained from Eq. (10) as shown in Eq. (12).

Φ(r) = Cr_A(A ≤ r) =
  0,  r ≤ A^L
  (r − A^L) / (2(A^C − A^L)),  A^L ≤ r ≤ A^C
  (A^R + r − 2A^C) / (2(A^R − A^C)),  A^C ≤ r ≤ A^R
  1,  r ≥ A^R                                        (12)

Definition 12. Eq. (12) shows that Φ(x) is a strictly increasing and continuous function for any x ∈ [A^L, A^R]. In order to take a function's inverse, the function has to be one-to-one and continuous on a defined interval. Φ(x) is a one-to-one function mapping elements x of the interval [A^L, A^R] to elements α of the interval [0,1], as shown in Fig. 4. The inverse of Φ(x) is called the inverse credibility distribution function of fuzzy number A and is denoted Φ^{-1}(α); in short, this relationship is defined by Φ(x) = α ⇔ Φ^{-1}(α) = x. The inverse credibility distribution function Φ^{-1}(α) for a triangular fuzzy number A is obtained from Eq. (12) as shown in Eq. (13).

Φ^{-1}(α) =
  2α(A^C − A^L) + A^L,  0 ≤ α ≤ 0.5
  A^R − (2 − 2α)(A^R − A^C),  0.5 ≤ α ≤ 1            (13)
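For a triangular fuzzy number, Eqs. (10)–(13) are piecewise linear, and the credibility distribution and its inverse are exact round-trips of each other on [A^L, A^R]; a small Python sketch of ours:

```python
def cred_leq(r, AL, AC, AR):
    # Eqs. (10)/(12): Cr(A <= r) = (Pos + Nec) / 2
    if r <= AL:
        return 0.0
    if r <= AC:
        return (r - AL) / (2 * (AC - AL))
    if r <= AR:
        return (AR + r - 2 * AC) / (2 * (AR - AC))
    return 1.0

def inv_cred(alpha, AL, AC, AR):
    # Eq. (13): inverse credibility distribution Phi^{-1}(alpha)
    if alpha <= 0.5:
        return 2 * alpha * (AC - AL) + AL
    return AR - (2 - 2 * alpha) * (AR - AC)

# Round trip for A = (10, 23, 32): Phi(Phi^{-1}(alpha)) == alpha.
for alpha in (0.1, 0.25, 0.5, 0.75, 0.9):
    assert abs(cred_leq(inv_cred(alpha, 10, 23, 32), 10, 23, 32) - alpha) < 1e-12
```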

The relationships between the possibility, necessity, credibility and inverse credibility distributions for a triangular fuzzy number Ã = (10, 23, 32) are shown in Figs. 2–5, respectively. Examining the distribution functions of possibility, necessity and credibility for a fuzzy event, the relationship Pos{·} ≥ Cr{·} ≥ Nec{·} can be clearly seen.

Definition 13. In order to model and solve problems with probabilistic parameters under uncertainty, Charnes and Cooper [23] introduced the technique of chance constrained programming. Liu and Iwamura [24] extended chance constrained programming from the stochastic to the fuzzy environment, and presented some crisp equivalents of chance constraints in the fuzzy environment. Their version of chance constrained programming has the following form:

max f(x)
s.t.: Pos{ξ | g_i(x, ξ) ≤ 0, i = 1, 2, . . ., p} ≥ α


Fig. 4. Credibility Distribution.

Fig. 5. Inverse Credibility Distribution.

Fig. 6. First Model Outputs for Same Confidence Level α.

where x is a decision vector, ξ is a vector of fuzzy parameters, f(x) is the return function, g_i(x, ξ) are constraint functions, i = 1, 2, . . ., p, α is a predetermined confidence level, and Pos{·} denotes the possibility of the event {·}. A point x is feasible if and only if the possibility measure of the set {ξ | g_i(x, ξ) ≤ 0, i = 1, 2, . . ., p} is at least α [24]. Liu and Iwamura [24] suggested converting chance constraints to their crisp equivalents to solve chance constrained programming problems with fuzzy parameters (Figs. 6–8). The chance constraints Pos{ξ | g_i(x, ξ) ≤ 0} ≥ α_i for i = 1, 2, . . ., p may be written as Pos{ξ_i | h_i(x) ≤ ξ_i} ≥ α_i, where the h_i(x) are (linear or nonlinear) functions of the decision vector x and the ξ_i are fuzzy numbers with membership functions μ(ξ_i), i = 1, 2, . . ., p, respectively. Clearly, for any given confidence level α_i (0 ≤ α_i ≤ 1) there exist some values K_{α_i} such that Pos{ξ_i | K_{α_i} ≤ ξ_i} = α_i. Liu and Iwamura [24] noted that the possibility Pos{ξ_i | K_{α_i} ≤ ξ_i} will increase if K_{α_i} is replaced with a smaller number K'_{α_i}, since

Pos{ξ_i | K_{α_i} ≤ ξ_i} = sup{μ_{ξ_i}(x) | K_{α_i} ≤ x} ≤ sup{μ_{ξ_i}(x) | K'_{α_i} ≤ x} = Pos{ξ_i | K'_{α_i} ≤ ξ_i}


Fig. 7. 1|P̃_[r] = p̃_i r^ã | ΣC Outputs for Same Confidence Level α.

Fig. 8. 1|P̃_[r] = p̃_i r^ã | ΣC Outputs for Same Confidence Level α.

Using Pos{ξ_i | K_{α_i} ≤ ξ_i} = α_i and α_i = μ(K_{α_i}), Liu and Iwamura [24] showed that K_{α_i} = μ^{-1}(α_i) for unimodal membership functions μ(ξ_i), i = 1, 2, . . ., p. However, there can be more than one K_{α_i} for a unique α_i value, and Liu and Iwamura [24] suggested taking the maximum of the possible K_{α_i} values. Liu and Iwamura's [24] approach for chance constrained programming may use possibility, necessity or credibility measures. In our study, we use credibility based chance constrained programming, which takes the following form:

max f(x)
s.t.: Cr{ξ | g_i(x, ξ) ≤ 0, i = 1, 2, . . ., p} ≥ α

where Cr{·} denotes the credibility of the event {·}. Moreover, the credibility based chance constraints must be converted to crisp equivalents in order to solve the problem. The chance constraints Cr{ξ | g_i(x, ξ) ≤ 0} ≥ α_i, i = 1, 2, . . ., p, may be rewritten as Cr{ξ_i | h_i(x) ≤ ξ_i} ≥ α_i. For a given confidence level in the interval [0,1] and a fuzzy number ξ on the interval [ξ^L, ξ^R], where ξ^L and ξ^R denote the least and largest crisp values of ξ, there is only a unique K_{α_i} that ensures Cr{ξ_i | K_{α_i} ≤ ξ_i} = α_i. Therefore, the inverse credibility distribution function introduced before is the crisp equivalent of a credibility based chance constraint, and we get K_{α_i} = Φ^{-1}(α_i).

3. Fuzzy mixed integer nonlinear programming models

We consider Biskup's learning effect [15] expression (P_[r] = P_i r^a) with fuzzy processing times and a fuzzy learning effect when the objectives are to minimize the makespan, total completion time and total weighted completion time, respectively. By using credibility based chance constrained programming, we can write our models in the following form:


Min C_max
s.t.:
C_max = min(C^S_max, S = 1, 2, . . ., n!)
C^S_max ≥ C^S_[N], S = 1, 2, . . ., n!
C^S_[r] ≥ C^S_[r−1] + P^S_[r], S = 1, 2, . . ., n!
P^S_[r] = Σ_{i=1}^{n} X_{i,r} p̃_i r^{ã_r}, S = 1, 2, . . ., n!, i = 1, 2, . . ., n and r = 1, 2, . . ., n
Σ_{i=1}^{n} X_{i,r} = 1, r = 1, 2, . . ., n
Σ_{r=1}^{n} X_{i,r} = 1, i = 1, 2, . . ., n
Cr(p̃_i ≤ P_i, i = 1, 2, . . ., n) ≥ α_i
Cr(ã_r ≤ a_r, r = 1, 2, . . ., n) ≥ α_r

where C_max denotes the minimum makespan value over all possible schedules S = 1, 2, . . ., n!, and α_i and α_r are predetermined confidence levels for the ith job's processing time and the rth position's learning effect, respectively. P_i is the most allowable processing time of the ith job and a_r is the most allowable learning effect at the rth position. C^S_[r] is the actual completion time at the rth position, and it depends on the previous actual completion time and the actual processing time at the current position. The credibility based chance constrained programming models for total completion time and total weighted completion time can easily be written in the same form. In order to convert our credibility based chance constrained programming model into a fuzzy mixed integer nonlinear programming model, we should convert the credibility based chance constraints to their crisp equivalents. Wang et al. [5] introduced the job completion likelihood profile μ_A for a job with fuzzy processing time A. They showed that μ_A is the likelihood of that job being completed within a certain allocated processing time x, and that the inverse function μ_A^{-1} of the likelihood profile μ_A denotes the processing time of the job under a predetermined confidence level constraint. In their study, the likelihood profile is an average of the possibility and necessity measures, so the μ_A^{-1}(α) function is similar to the inverse credibility distribution function; the calculations of the credibility measure and its inverse are the same. Therefore, we can use the likelihood profile and credibility measure terms interchangeably. As a contribution, the learning effect is another factor affecting the actual processing time at the rth position. Originally, the job completion likelihood profile implies that the job will be completed within a certain allocated processing time x without any outside effect.
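Under this construction, the crisp equivalent of a chance constraint such as Cr(p̃_i ≤ P_i) ≥ α_i is simply P_i ≥ Φ^{-1}(α_i), and the actual position-dependent processing time combines the two inverse profiles with Biskup's P_[r] = P_i r^a form. A small Python sketch of this defuzzification (ours; the sample fuzzy triples are hypothetical):

```python
def inv_cred(alpha, L, C, R):
    # Inverse credibility distribution of a triangular fuzzy number, Eq. (13)
    return 2 * alpha * (C - L) + L if alpha <= 0.5 else R - (2 - 2 * alpha) * (R - C)

def actual_processing_time(r, p_tri, a_tri, alpha):
    """Crisp actual processing time at position r for confidence level alpha:
    (inverse profile of the fuzzy base time) * r ** (inverse profile of the
    fuzzy learning exponent), following the P_[r] = P_i * r^a form."""
    p = inv_cred(alpha, *p_tri)
    a = inv_cred(alpha, *a_tri)
    return p * r ** a

# Hypothetical job: base time (8, 10, 14), learning exponent (-0.3, -0.2, -0.1).
# At alpha = 0.5 the inverse profiles return the most likely values, and later
# positions are faster thanks to the negative learning exponent.
p1 = actual_processing_time(1, (8, 10, 14), (-0.3, -0.2, -0.1), 0.5)
p4 = actual_processing_time(4, (8, 10, 14), (-0.3, -0.2, -0.1), 0.5)
print(p1, p4)  # p1 is 10.0; p4 is smaller
```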
When we build our model, we use two likelihood profiles: the job completion likelihood profile without learning effect, and the learning effect likelihood profile at a given position, for a given predetermined confidence level. Then, we use the two inverse functions of these likelihood profiles to calculate the actual processing time at the rth position for the predetermined confidence levels. In the following models, processing times and learning effect coefficients are in the form of triangular fuzzy numbers. Model 1 with same confidence levels:

Indexes
i  index of jobs, i = 1, . . ., N
r  index of position numbers in the sequence, r = 1, . . ., N

Parameters
p^L_i  Optimistic processing time of the ith job (left side value of the triangular fuzzy number)
p^C_i  Most likely processing time of the ith job (center value of the triangular fuzzy number)
p^R_i  Pessimistic processing time of the ith job (right side value of the triangular fuzzy number)
a^L_r  Optimistic learning effect coefficient at the rth position (left side value of the triangular fuzzy number)
a^C_r  Most likely learning effect coefficient at the rth position (center value of the triangular fuzzy number)
a^R_r  Pessimistic learning effect coefficient at the rth position (right side value of the triangular fuzzy number)
w_i  Objective function weight coefficient of the ith job's completion
α  Common confidence level
A  The time when the machine can begin processing jobs

Decision variables
C_r  Actual completion time of the job at the rth position in the sequence
P_r  Actual processing time of the job at the rth position in the sequence
C_i  Actual completion time of the ith job
C_max  Makespan of the schedule
X_{i,r}  1 if job i is in the rth position of the sequence, 0 otherwise


μ_{p_i}(x)  The job processing likelihood profile of the ith job
μ^{-1}_{p_i}(α)  Processing time of the ith job at confidence level α
μ_{a_r}(x)  The learning likelihood profile of the rth position
μ^{-1}_{a_r}(α)  Learning effect of the rth position at confidence level α
Y  1 if 0 ≤ α ≤ 0.5, 0 if 0.5 < α ≤ 1

Minimize C_max                                        (14.a)

Minimize Σ_{i=1}^{N} w_i C_i                          (14.b)

Minimize Σ_{i=1}^{N} C_i                              (14.c)

Subject to:

C_r = P_r + C_{r−1}  ∀r                               (15)

P_r = (Σ_{i=1}^{N} X_{i,r} · μ^{-1}_{p_i}(α)) · r^{μ^{-1}_{a_r}(α)}  ∀r    (16)

C_[0] = A                                             (17)

Σ_{i=1}^{N} X_{i,r} = 1  ∀r                           (18)

Σ_{r=1}^{N} X_{i,r} = 1  ∀i                           (19)

μ^{-1}_{p_i}(α) = Y · (2α(P^C_i − P^L_i) + P^L_i) + (1 − Y) · (P^R_i − (2 − 2α)(P^R_i − P^C_i))  ∀i   (20)

μ^{-1}_{a_r}(α) = Y · (2α(a^C_r − a^L_r) + a^L_r) + (1 − Y) · (a^R_r − (2 − 2α)(a^R_r − a^C_r))  ∀r   (21)

C_i = Σ_{r=1}^{N} X_{i,r} · C_r  ∀i                   (22)

X_{i,r}, Y ∈ {0, 1}  ∀i, r                            (23)

C_i, C_r, P_r, μ^{-1}_{p_i}(α), μ^{-1}_{a_r}(α) ≥ 0  ∀i, r   (24)
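For very small instances, the model above can be checked by brute force: fix α, defuzzify via the inverse profiles of Eqs. (20)–(21), and enumerate all sequences, accumulating completion times through constraints (15)–(16) and (22). A hypothetical Python sketch of ours (an illustration, not the paper's algorithm):

```python
from itertools import permutations

def inv_cred(alpha, L, C, R):
    # Inverse credibility distribution, Eq. (13), as used in Eqs. (20)-(21)
    return 2 * alpha * (C - L) + L if alpha <= 0.5 else R - (2 - 2 * alpha) * (R - C)

def best_total_completion(p_tris, a_tris, alpha, weights=None):
    """Enumerate all job sequences and return (objective, sequence) minimizing
    the (weighted) total completion time at confidence level alpha."""
    n = len(p_tris)
    w = weights or [1.0] * n
    p = [inv_cred(alpha, *t) for t in p_tris]   # crisp base processing time per job
    a = [inv_cred(alpha, *t) for t in a_tris]   # crisp learning exponent per position
    best = None
    for seq in permutations(range(n)):
        t = total = 0.0
        for r, i in enumerate(seq, start=1):
            t += p[i] * r ** a[r - 1]           # Eq. (16): P_r = p_i * r^{a_r}
            total += w[i] * t                   # accumulate weighted completion times
        if best is None or total < best[0]:
            best = (total, seq)
    return best

# Hypothetical 3-job instance with a common learning exponent at every position:
jobs = [(2, 2, 2), (5, 5, 5), (3, 3, 3)]
learn = [(-0.2, -0.2, -0.2)] * 3
obj, seq = best_total_completion(jobs, learn, 0.5)
print(seq)  # shortest-processing-time order: (0, 2, 1)
```

With identical position-based learning and unit weights, the enumeration recovers the SPT order, consistent with the SPT-optimality results cited in the introduction.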

Model 2 with different confidence levels:

Indexes
i  index of jobs, i = 1, . . ., N
r  index of position numbers in the sequence, r = 1, . . ., N

Parameters
p^L_i  Optimistic processing time of the ith job (left side value of the triangular fuzzy number)
p^C_i  Most likely processing time of the ith job (center value of the triangular fuzzy number)
p^R_i  Pessimistic processing time of the ith job (right side value of the triangular fuzzy number)
a^L_r  Optimistic learning effect coefficient at the rth position (left side value of the triangular fuzzy number)
a^C_r  Most likely learning effect coefficient at the rth position (center value of the triangular fuzzy number)
a^R_r  Pessimistic learning effect coefficient at the rth position (right side value of the triangular fuzzy number)
w_i  Objective function weight coefficient of the ith job's completion
A  The time when the machine can begin processing jobs
α_{p_i}  Confidence level for the ith job's processing time
α_{a_r}  Confidence level for the learning effect coefficient at the rth position

Decision variables
C_r  Actual completion time of the job at the rth position in the sequence
P_r  Actual processing time of the job at the rth position in the sequence
C_max  Makespan of the schedule

H. Asadi / Journal of Manufacturing Systems 42 (2017) 244–261

 Y pi (x) −1 pi



api

253

1, ifjobiisinr th positionofthesequence 0, otherwise

profile of ith job

the job processing likelihood th processing time of i

job with api confidence level

the learning likelihood profile of r th position

ai (x)



confidence level for ith job’s processing time (0 ≤ aLpi ≤ 0.5)

th

aLpi api

aLar api

confidence level for learningeffectcoefficentatr position

(0 ≤  aRar ≤ 1) Yapi :



Yaar :

1, if 0 ≤ api ≤ 0.5 0, if 0.5 ≤ api ≤ 1 1, if 0 ≤ aar ≤ 0.5 0, if 0.5 ≤ aar ≤ 1

Model Minimize cmax



(25.a)

N

Minimize

= 1wi Ci

(25.b)

= 1Ci

(25.c)

i

Minimize

N  i

Subject to: Cr = Pr + Cr−1 ∀r N 

Pr = (

(26)





−1

Xi,r ∗ −1 api )r ar pi

(aar )

∀r

(27)

i=1

C[0] = A N 

(28)

Xi,r = 1∀r

(29)

Xi,r = 1∀i

(30)

i=1 N  r=1















L











−1 api = Yapi ∗ 2api ∗ PiC − PiL + PiL + 1 − Yapi ∗ PiR 2 − 2api ∗ (PiR − PiC ∀i pi

(31)

C R R −1 ar (aar ) = Yaar ∗ 2aar ∗ ar − ar + ar + (1 − Yaar ) ∗ ai (2 − 2aar ) ∗ (ai − ai

(32)

Ci =

N 

L



Xi,r ∗ Cr ∀i

r=1

C

∀r

(33)





api = Yapi ∗ aLpi + 1 − Yapi ∗ aRpi ∀i

(34)

+ (1 − Yaar ) ∗ aRar ∀r

(35)

aar =

Yaar ∗ aLar ≤ 0.5∀i

(36)

0.5 ≤ aRpi ≤ 1∀i

(37)

0 ≤ aLar ≤ 0.5∀r

(38)

≤ 1∀r

(39)

0≤

aLpi

0.5 ≤

aRar

0 ≤ api ≤ 1∀i 0 ≤ aar ≤ 1∀r

(40)









(41)

Xi,r , Yapi Yaar ε 0, 1 ∀i, r Ci , Cr , Pr , −1 pi

api

, −1 ar

(42) (aar ) , aLpi , aRpi , aLar , aRar

≥ o∀i, r

(43)

The two proposed models are structurally identical except for one difference: the first model uses a single given common confidence level as a model parameter, whereas the second model uses different confidence levels for the learning effects


at each position and for the processing times as model parameters. Objective functions (14.a)-(14.c) and (25.a)-(25.c) minimize the makespan, the total weighted completion time and the total completion time, respectively, under fuzzy learning effects with fuzzy processing times. Constraints (15) and (26) state that the job completion time at position r is the sum of the actual processing time at position r and the completion time at position (r − 1). Constraint (16) states that the actual processing time at position r depends on the processing time likelihood profile under the learning effect, defined through the learning effect likelihood profile at position r, at the same confidence level α. Reflecting the difference between the two models, constraint (27) plays the same role as constraint (16) but with distinct confidence levels α_ar and α_pi. Constraints (17) and (28) state that the completion time at the 0th position equals the machine ready time A. Constraints (18) and (29) ensure that each job is assigned to exactly one position in the schedule, and constraints (19) and (30) ensure that each position is used by exactly one job. Constraint (20) is the job processing likelihood profile function for a given confidence level α and job i's processing time, while constraint (31) is the corresponding function that depends on the decision variable α_pi for each job. Constraint (21) is the learning effect likelihood profile function for a given confidence level α at position r, while constraint (32) is the corresponding function that depends on the decision variable α_ar for each position. Constraints (22) and (33) convert the completion time at each position into the completion time of each job. Constraints (34) and (35) are assignment functions that determine the confidence levels α_pi and α_ar. Constraints (36)-(41) bound the confidence levels according to their requirements.
Constraints (23) and (42) state that the decision variables X_{i,r}, Y, Y_αpi and Y_αar are binary. Constraints (24) and (43) state that all remaining variables in the two models are greater than or equal to zero.

Theorem 1. The shortest processing time (SPT) dispatching rule assures the optimum schedule for the problem 1|p̃_r = p̃_i·r^ã|C_max with the same confidence level α.

Proof 1.





Let Π = {J_i, J_j} be a set of two jobs. Job J_i has the job processing likelihood profile μ_pi^{-1}(α) at confidence level α, and job J_j has the profile μ_pj^{-1}(α) at the same confidence level. Furthermore, assume μ_pi^{-1}(α) < μ_pj^{-1}(α). There are two alternative schedules S and S': in S, job J_i immediately precedes job J_j (occupying positions r and r + 1), and in S', job J_j immediately precedes job J_i. Let Z be the maximum completion time of schedule S and Z' that of schedule S'. If the SPT dispatching rule assures the optimum schedule, then Z' − Z > 0.

For schedule S:
C_i = A + μ_pi^{-1}(α)·r^{μ_a^{-1}(α)}
Z = C_j = A + μ_pi^{-1}(α)·r^{μ_a^{-1}(α)} + μ_pj^{-1}(α)·(r + 1)^{μ_a^{-1}(α)}

For schedule S':
C_j' = A + μ_pj^{-1}(α)·r^{μ_a^{-1}(α)}
Z' = C_i' = A + μ_pj^{-1}(α)·r^{μ_a^{-1}(α)} + μ_pi^{-1}(α)·(r + 1)^{μ_a^{-1}(α)}

Subtracting,
Z' − Z = [μ_pj^{-1}(α) − μ_pi^{-1}(α)]·[r^{μ_a^{-1}(α)} − (r + 1)^{μ_a^{-1}(α)}].

Since μ_pj^{-1}(α) > μ_pi^{-1}(α), and since the learning effect coefficient μ_a^{-1}(α) is negative so that r^{μ_a^{-1}(α)} > (r + 1)^{μ_a^{-1}(α)}, the difference Z' − Z is greater than zero. This proves that schedule S dominates schedule S' when the objective is to minimize the makespan. Thus, the SPT dispatching rule assures the optimum schedule for 1|p̃_r = p̃_i·r^ã|C_max.

Theorem 2. The shortest processing time (SPT) dispatching rule assures the optimum schedule for the problem 1|p̃_r = p̃_i·r^ã|ΣC with the same confidence level α.

Proof 2.





Let Π = {J_i, J_j} be a set of two jobs, with job processing likelihood profiles μ_pi^{-1}(α) and μ_pj^{-1}(α) at confidence level α, and assume μ_pi^{-1}(α) < μ_pj^{-1}(α). In schedule S, job J_i immediately precedes job J_j (positions r and r + 1); in schedule S', job J_j immediately precedes job J_i. Let Z be the sum of the completion times of schedule S and Z' that of schedule S'. If the SPT dispatching rule assures the optimum schedule, then Z' − Z > 0.

For schedule S:
C_i = A + μ_pi^{-1}(α)·r^{μ_a^{-1}(α)}
C_j = A + μ_pi^{-1}(α)·r^{μ_a^{-1}(α)} + μ_pj^{-1}(α)·(r + 1)^{μ_a^{-1}(α)}
Z = C_i + C_j = 2A + 2·μ_pi^{-1}(α)·r^{μ_a^{-1}(α)} + μ_pj^{-1}(α)·(r + 1)^{μ_a^{-1}(α)}

For schedule S':
Z' = C_j' + C_i' = 2A + 2·μ_pj^{-1}(α)·r^{μ_a^{-1}(α)} + μ_pi^{-1}(α)·(r + 1)^{μ_a^{-1}(α)}

Subtracting,
Z' − Z = [μ_pj^{-1}(α) − μ_pi^{-1}(α)]·[2·r^{μ_a^{-1}(α)} − (r + 1)^{μ_a^{-1}(α)}].

Since μ_pj^{-1}(α) > μ_pi^{-1}(α) and 2·r^{μ_a^{-1}(α)} > r^{μ_a^{-1}(α)} > (r + 1)^{μ_a^{-1}(α)} (the learning effect coefficient being negative), Z' − Z is greater than zero. This proves that schedule S dominates schedule S' when the objective is to minimize the sum of completion times. Thus, the SPT dispatching rule assures the optimum schedule for 1|p̃_r = p̃_i·r^ã|ΣC.

Theorem 3. The weighted shortest processing time (WSPT) dispatching rule assures the optimum schedule for the problem 1|p̃_r = p̃_i·r^ã|ΣwC with the same confidence level α.

Proof 3.





Let Π = {J_i, J_j} be a set of two jobs. Job J_i has the job processing likelihood profile μ_pi^{-1}(α) at confidence level α and weight w_i; job J_j has the profile μ_pj^{-1}(α) and weight w_j. Furthermore, assume μ_pi^{-1}(α)/w_i < μ_pj^{-1}(α)/w_j; as in the crisp WSPT interchange argument, we consider the agreeable case in which this condition holds with μ_pi^{-1}(α) ≤ μ_pj^{-1}(α) and w_i ≥ w_j. In schedule S, job J_i immediately precedes job J_j (positions r and r + 1); in schedule S', job J_j immediately precedes job J_i. Let Z be the total weighted completion time of schedule S and Z' that of schedule S'. If the WSPT dispatching rule assures the optimum schedule, then Z' − Z > 0.

For schedule S:
C_i = A + μ_pi^{-1}(α)·r^{μ_a^{-1}(α)}
C_j = C_i + μ_pj^{-1}(α)·(r + 1)^{μ_a^{-1}(α)}
Z = w_i·C_i + w_j·C_j = (w_i + w_j)·[A + μ_pi^{-1}(α)·r^{μ_a^{-1}(α)}] + w_j·μ_pj^{-1}(α)·(r + 1)^{μ_a^{-1}(α)}

For schedule S':
Z' = (w_i + w_j)·[A + μ_pj^{-1}(α)·r^{μ_a^{-1}(α)}] + w_i·μ_pi^{-1}(α)·(r + 1)^{μ_a^{-1}(α)}

Subtracting and rearranging,
Z' − Z = w_i·[μ_pj^{-1}(α) − μ_pi^{-1}(α)]·[r^{μ_a^{-1}(α)} − (r + 1)^{μ_a^{-1}(α)}]
       + w_j·[μ_pj^{-1}(α) − μ_pi^{-1}(α)]·r^{μ_a^{-1}(α)}
       + μ_pj^{-1}(α)·(w_i − w_j)·(r + 1)^{μ_a^{-1}(α)}.

Since w_i ≥ w_j, μ_pj^{-1}(α) > μ_pi^{-1}(α) and r^{μ_a^{-1}(α)} > (r + 1)^{μ_a^{-1}(α)}, every term on the right-hand side is nonnegative and the second term is strictly positive, so Z' − Z is greater than zero. This proves that schedule S dominates schedule S' when the objective is to minimize the sum of weighted completion times. Thus, the WSPT dispatching rule assures the optimum schedule for 1|p̃_r = p̃_i·r^ã|ΣwC.
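The three interchange arguments can be checked numerically. The sketch below is my illustration, not code from the paper: it enumerates all permutations of a small crisp instance (arbitrary likelihood-profile values, with weights chosen agreeably so that shorter jobs carry larger weights, as Theorem 3's argument assumes) and confirms that SPT minimizes C_max and ΣC while WSPT minimizes ΣwC under a common negative learning exponent.

```python
from itertools import permutations

def evaluate(seq, p, w, a, A=0.0):
    """Return (Cmax, sum of C, sum of w*C) for a job sequence, where the
    actual processing time of job i at position r is p[i] * r**a."""
    t, csum, wcsum = A, 0.0, 0.0
    for r, i in enumerate(seq, start=1):
        t += p[i] * r ** a
        csum += t
        wcsum += w[i] * t
    return t, csum, wcsum

# Arbitrary crisp instance (e.g. likelihood-profile values at some alpha).
p = [11.8, 14.8, 15.8, 17.7, 19.8]
w = [0.35, 0.25, 0.15, 0.15, 0.10]   # agreeable: non-increasing in p
a = -0.115                            # negative learning effect exponent

jobs = range(len(p))
spt = sorted(jobs, key=lambda i: p[i])
wspt = sorted(jobs, key=lambda i: p[i] / w[i])

# Brute-force optimum of each objective over all 120 sequences.
best = {k: min(evaluate(s, p, w, a)[k] for s in permutations(jobs))
        for k in range(3)}
assert abs(evaluate(spt, p, w, a)[0] - best[0]) < 1e-9    # SPT minimizes Cmax
assert abs(evaluate(spt, p, w, a)[1] - best[1]) < 1e-9    # SPT minimizes sum C
assert abs(evaluate(wspt, p, w, a)[2] - best[2]) < 1e-9   # WSPT minimizes sum wC
print("SPT/WSPT optimal on this instance")
```

The check is exhaustive only for this one instance; the theorems themselves rest on the adjacent-interchange algebra above.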

4. Algorithms for the problems with different confidence levels

Although the common scheduling dispatching rules ensure the optimum schedule when all confidence levels are the same, problems with a different confidence level for each job and each position make it harder to apply the SPT or WSPT rule to obtain the optimum schedule. The SPT and WSPT rules take O(n log n) computational time for a given input size n, so they are polynomially solvable algorithms. However, using different confidence levels in our problems forces us to find new ways to reach the optimum: we need to search for the shortest or weighted shortest actual processing time over every combination of job and position indexes, and this search basically requires an input of size n². The following algorithm contains eight different phases. The 1|p̃_r = p̃_i·r^ã|C_max problem uses Phases 1, 2, 4 and 6 of the algorithm, respectively; the 1|p̃_r = p̃_i·r^ã|ΣC problem uses Phases 1, 2, 4 and 7; and the 1|p̃_r = p̃_i·r^ã|ΣwC problem uses Phases 1, 3, 5 and 8. Excluding Phase 1, the declaration part of the algorithm, the computational time requirements of Phases 2-8 are n², n², n² + n², n² + n², n, n² and n², respectively. Thus, we may say that the algorithm for each problem takes O(n²) computational time.
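The phase structure above can be sketched as follows (my illustration with hypothetical helper names, not code from the paper): Phases 2/3 build the n × n matrix of actual (or weighted) actual processing times, Phases 4/5 extract the job order from it, and Phases 6-8 evaluate the objectives.

```python
def build_matrix(p_inv, a_inv, weights=None):
    """Phase 2/3: actual processing time of job i at position r,
    P(i, r) = mu_pi^{-1}(alpha_pi) * r ** mu_ar^{-1}(alpha_ar);
    dividing by w_i gives the weighted matrix used for the WSPT order."""
    n = len(p_inv)
    return [[p_inv[i] * (r + 1) ** a_inv[r] / (weights[i] if weights else 1.0)
             for r in range(n)] for i in range(n)]

def order_jobs(matrix):
    """Phase 4/5: every column is the same base values scaled by one common
    positive positional factor, so sorting any column (here column 1) gives
    the SPT/WSPT order; building and inspecting the matrix is O(n^2)."""
    return sorted(range(len(matrix)), key=lambda i: matrix[i][0])

def objectives(schedule, matrix, weights, A=0.0):
    """Phases 6-8: makespan, total completion time, total weighted completion time."""
    t, csum, wcsum = A, 0.0, 0.0
    for r, i in enumerate(schedule):
        t += matrix[i][r]
        csum += t
        wcsum += weights[i] * t
    return t, csum, wcsum

# Demo with the data of Numerical Example 2 (Table 5 values already converted
# through the likelihood profiles at the given confidence levels):
p_inv = [5, 9.2, 12.4, 15.8, 13]               # mu_pi^{-1}(alpha_pi)
a_inv = [-0.15, -0.18, -0.292, -0.326, -0.39]  # mu_ar^{-1}(alpha_ar)
w = [0.15, 0.25, 0.1, 0.35, 0.15]

plain = build_matrix(p_inv, a_inv)             # reproduces Table 6
schedule = order_jobs(plain)                   # jobs 1,2,3,5,4 (0-based: 0,1,2,4,3)
cmax, csum, _ = objectives(schedule, plain, w)
print(round(cmax, 3), round(csum, 3))          # about 38.826 and 109.456
```

The same routines with `weights=w` reproduce the weighted matrix of Table 7 and the WSPT-style order used in Phase 5.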


5. Numerical example

We use the following examples to demonstrate the results of the theorems:

Example 1. Suppose five jobs are ready to be scheduled. Processing times and learning effects are in the form of triangular fuzzy numbers. The learning effect ã lies on the interval [−0.2, −0.1], and its triplet points are −0.2, −0.15 and −0.1, respectively. The time A at which the machine can start to process is equal to 0 time units. Processing times are shown in Table 1. When we solve the first model with the data in Table 1, the results in Table 2 for different confidence levels show that the shortest processing time (SPT) rule assures the optimum schedule for the 1|p̃_r = p̃_i·r^ã|C_max problem with the same confidence level α for μ_pi^{-1}(α) and μ_ar^{-1}(α). The results in Table 2 are obtained by using the DICOPT solver in GAMS 24.6 software.
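A single row of Table 2 can be reproduced in a few lines. The sketch below (my illustration, using the Table 1 data) evaluates the likelihood profiles at α = 0.85, applies the SPT rule of Theorem 1, and recomputes the makespan:

```python
jobs = {1: (9, 12, 16), 2: (8, 17, 18), 3: (16, 17, 21),
        4: (6, 9, 13), 5: (8, 13, 17)}   # (P^L, P^C, P^R) from Table 1
alpha = 0.85

def inv(l, c, r, a):
    # right branch of the likelihood-profile inverse, since alpha > 0.5
    return r - (2 - 2 * a) * (r - c)

p = {i: inv(*t, alpha) for i, t in jobs.items()}   # crisp processing times
a = inv(-0.2, -0.15, -0.1, alpha)                  # learning exponent, -0.115
seq = sorted(p, key=p.get)                         # SPT order
cmax = sum(p[i] * r ** a for r, i in enumerate(seq, start=1))
print(seq, round(cmax, 4))                         # [4, 1, 5, 2, 3] 70.9369
```

This matches the worked computation for α = 0.85 given below Table 2.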


Table 1
Data for Numerical Example 1.

Job No: i   Processing Time (P^L, P^C, P^R)   Completion Time Weight w_i
1           {9,12,16}                         0.15
2           {8,17,18}                         0.25
3           {16,17,21}                        0.1
4           {6,9,13}                          0.35
5           {8,13,17}                         0.15

Table 2
1|p̃_r = p̃_i·r^ã|C_max outputs for the same confidence level α.

α      C_max     Sequence by SPT   Optimal schedule
0.00   37.804    4,5,2,1,3         4,5,2,1,3
0.05   39.7695   4,5,2,1,3         4,5,2,1,3
0.10   41.7435   4,5,1,2,3         4,5,1,2,3
0.15   43.7179   4,5,1,2,3         4,5,1,2,3
0.20   45.7104   4,5,1,2,3         4,5,1,2,3
0.25   47.7212   4,1,5,2,3         4,1,5,2,3
0.30   49.7386   4,1,5,2,3         4,1,5,2,3
0.35   51.7752   4,1,5,2,3         4,1,5,2,3
0.40   53.8312   4,1,5,2,3         4,1,5,2,3
0.45   55.9056   4,1,2,5,3         4,1,2,5,3
0.50   58.0020   4,1,2,5,3         4,1,2,5,3
0.55   59.8056   4,1,2,5,3         4,1,2,5,3
0.60   61.6237   4,1,5,2,3         4,1,5,2,3
0.65   63.4565   4,1,5,2,3         4,1,5,2,3
0.70   65.3041   4,1,5,2,3         4,1,5,2,3
0.75   67.1666   4,1,5,2,3         4,1,5,2,3
0.80   69.0441   4,1,2,5,3         4,1,2,5,3
0.85   70.9369   4,1,2,5,3         4,1,2,5,3
0.90   72.8451   4,1,2,5,3         4,1,2,5,3
0.95   74.7687   4,1,5,2,3         4,1,5,2,3
1.00   76.7079   4,1,5,2,3         4,1,5,2,3

As shown in Table 2, the SPT dispatching rule assures the optimum schedule for the 1|p̃_r = p̃_i·r^ã|C_max problem for the same confidence level α. To verify the result in Table 2, we solve the first model for confidence level α = 0.85. For α = 0.85, μ_p1^{-1}(0.85) = 14.8, μ_p2^{-1}(0.85) = 17.7, μ_p3^{-1}(0.85) = 19.8, μ_p4^{-1}(0.85) = 11.8, and μ_p5^{-1}(0.85) = 15.8. For all positions r, μ_ar^{-1}(0.85) = −0.115. According to Theorem 1, we arrange the sequence by the SPT rule: 11.8 < 14.8 < 15.8 < 17.7 < 19.8, which gives the optimum sequence S* = (4,1,5,2,3).

C[0] = 0
P[1] = P4·1^a = (11.8)·1^{−0.115} = 11.8
C[1] = C[0] + P[1] = 0 + 11.8 = 11.8
P[2] = P1·2^a = (14.8)·2^{−0.115} = 13.6660582
C[2] = C[1] + P[2] = 11.8 + 13.6660582 = 25.4660582
P[3] = P5·3^a = (15.8)·3^{−0.115} = 13.9247736
C[3] = C[2] + P[3] = 25.4660582 + 13.9247736 = 39.39083179
P[4] = P2·4^a = (17.7)·4^{−0.115} = 15.09163758
C[4] = C[3] + P[4] = 39.39083179 + 15.09163758 = 54.48246938
P[5] = P3·5^a = (19.8)·5^{−0.115} = 16.45446021
C[5] = C[4] + P[5] = 54.48246938 + 16.45446021 = 70.93692959
C_max = C[5] = 70.93692959

When we solve the first model with the data in Table 1, the results in Table 3 for different confidence levels show that the shortest processing time (SPT) rule assures the optimum schedule for the 1|p̃_r = p̃_i·r^ã|ΣC problem with the same confidence level α for μ_pi^{-1}(α) and μ_ar^{-1}(α). The results in Table 3 are obtained by using the DICOPT solver in GAMS 24.6 software. As shown in Table 3, the SPT dispatching rule assures the optimum schedule for the 1|p̃_r = p̃_i·r^ã|ΣC problem for the same confidence level α. To verify the result in Table 3, we solve the first model for confidence level α = 0.65. For α = 0.65, μ_p1^{-1}(0.65) = 13.2, μ_p2^{-1}(0.65) = 17.3, μ_p3^{-1}(0.65) = 18.2, μ_p4^{-1}(0.65) = 10.2, and μ_p5^{-1}(0.65) = 14.2. For all positions r, μ_ar^{-1}(0.65) = −0.135. According to Theorem 2, we arrange the sequence by the SPT rule: 10.2 < 13.2 < 14.2 < 17.3 < 18.2, which gives the optimum sequence S* = (4,1,5,2,3).
C[0] = 0
P[1] = P4·1^a = (10.2)·1^{−0.135} = 10.2
C[1] = C[0] + P[1] = 0 + 10.2 = 10.2
P[2] = P1·2^a = (13.2)·2^{−0.135} = 12.0208418
C[2] = C[1] + P[2] = 10.2 + 12.0208418 = 22.2208418
P[3] = P5·3^a = (14.2)·3^{−0.135} = 12.24269345


Table 3
1|p̃_r = p̃_i·r^ã|ΣC outputs for the same confidence level α.

α      ΣC         Sequence by SPT   Optimal schedule
0.00   102.3613   4,5,2,1,3         4,5,2,1,3
0.05   108.7102   4,5,2,1,3         4,5,2,1,3
0.10   114.9255   4,5,1,2,3         4,5,1,2,3
0.15   120.8258   4,5,1,2,3         4,5,1,2,3
0.20   126.7695   4,5,1,2,3         4,5,1,2,3
0.25   132.7572   4,1,5,2,3         4,1,5,2,3
0.30   138.5757   4,1,5,2,3         4,1,5,2,3
0.35   144.4394   4,1,5,2,3         4,1,5,2,3
0.40   150.3486   4,1,5,2,3         4,1,5,2,3
0.45   156.3037   4,1,2,5,3         4,1,2,5,3
0.50   162.3051   4,1,2,5,3         4,1,2,5,3
0.55   167.8881   4,1,2,5,3         4,1,2,5,3
0.60   173.5035   4,1,5,2,3         4,1,5,2,3
0.65   179.1516   4,1,5,2,3         4,1,5,2,3
0.70   184.8327   4,1,5,2,3         4,1,5,2,3
0.75   190.5470   4,1,5,2,3         4,1,5,2,3
0.80   196.2948   4,1,2,5,3         4,1,2,5,3
0.85   202.0763   4,1,2,5,3         4,1,2,5,3
0.90   207.8918   4,1,2,5,3         4,1,2,5,3
0.95   213.7416   4,1,5,2,3         4,1,5,2,3
1.00   219.6260   4,1,5,2,3         4,1,5,2,3

Table 4
1|p̃_r = p̃_i·r^ã|ΣwC outputs for the same confidence level α.

α      ΣwC       Sequence by WSPT   Optimal schedule
0.00   15.9605   4,2,5,1,3          4,2,5,1,3
0.05   17.0486   4,2,5,1,3          4,2,5,1,3
0.10   18.1432   4,2,5,1,3          4,2,5,1,3
0.15   19.2444   4,2,5,1,3          4,2,5,1,3
0.20   20.3523   4,2,5,1,3          4,2,5,1,3
0.25   21.4668   4,2,5,1,3          4,2,5,1,3
0.30   22.5611   4,2,1,5,3          4,2,1,5,3
0.35   23.6621   4,2,1,5,3          4,2,1,5,3
0.40   24.7696   4,2,1,5,3          4,2,1,5,3
0.45   26.1575   4,1,2,5,3          4,1,2,5,3
0.50   27.0048   4,1,2,5,3          4,1,2,5,3
0.55   27.7996   4,1,2,5,3          4,1,2,5,3
0.60   28.5984   4,2,1,5,3          4,2,1,5,3
0.65   29.4014   4,2,1,5,3          4,2,1,5,3
0.70   30.2084   4,2,1,5,3          4,2,1,5,3
0.75   31.0195   4,2,1,5,3          4,2,1,5,3
0.80   31.8348   4,2,1,5,3          4,2,1,5,3
0.85   32.6543   4,2,1,5,3          4,2,1,5,3
0.90   33.4780   4,2,1,5,3          4,2,1,5,3
0.95   34.3060   4,2,1,5,3          4,2,1,5,3
1.00   35.1383   4,2,1,5,3          4,2,1,5,3

C[3] = C[2] + P[3] = 22.2208418 + 12.24269345 = 34.46353525
P[4] = P2·4^a = (17.3)·4^{−0.135} = 14.34722814
C[4] = C[3] + P[4] = 34.46353525 + 14.34722814 = 48.81076339
P[5] = P3·5^a = (18.2)·5^{−0.135} = 14.64571024
C[5] = C[4] + P[5] = 48.81076339 + 14.64571024 = 63.45647363
C4 = C[1] = 10.2
C1 = C[2] = 22.2208418
C5 = C[3] = 34.46353525
C2 = C[4] = 48.81076339
C3 = C[5] = 63.45647363
ΣC = 10.2 + 22.2208418 + 34.46353525 + 48.81076339 + 63.45647363 = 179.1516141

When we solve the first model with the data in Table 1, the results in Table 4 for different confidence levels show that the weighted shortest processing time (WSPT) rule assures the optimum schedule for the 1|p̃_r = p̃_i·r^ã|ΣwC problem with the same confidence level α for μ_pi^{-1}(α) and μ_ar^{-1}(α). The results in Table 4 are obtained by using the DICOPT solver in GAMS 24.6 software. As shown in Table 4, the WSPT dispatching rule assures the optimum schedule for the 1|p̃_r = p̃_i·r^ã|ΣwC problem for the same confidence level α. To verify the result in Table 4, we solve the first model for confidence level α = 0.05. For α = 0.05, μ_p1^{-1}(0.05) = 9.3, μ_p2^{-1}(0.05) = 8.9, μ_p3^{-1}(0.05) = 16.1, μ_p4^{-1}(0.05) = 6.3 and μ_p5^{-1}(0.05) = 8.5. For all positions r, μ_ar^{-1}(0.05) = −0.195. According to Theorem 3, we arrange the sequence by the WSPT rule: 6.3/0.35 < 8.9/0.25 < 8.5/0.15 < 9.3/0.15 < 16.1/0.1, which gives the optimum sequence S* = (4,2,5,1,3).

C[0] = 0


Table 5
Data for Numerical Example 2.

Job No: i   Processing Time (P^L, P^C, P^R)   Completion Time Weight w_i
1           {3,5,8}                           0.15
2           {6,8,10}                          0.25
3           {11,12,13}                        0.1
4           {14,15,16}                        0.35
5           {7,9,13}                          0.15

Position No: r   Learning Effect Coefficient (a^L, a^C, a^R)
1                {−0.2, −0.15, −0.1}
2                {−0.22, −0.19, −0.18}
3                {−0.32, −0.25, −0.19}
4                {−0.42, −0.35, −0.23}
5                {−0.52, −0.39, −0.25}

Table 6
Results of Phase 2: actual processing times P(i,r).

i\r   1      2          3          4          5
1     5      4.413515   3.627861   3.181987   2.669142
2     9.2    8.120868   6.675264   5.854857   4.911222
3     12.4   10.94552   8.997094   7.891329   6.619473
4     15.8   13.94671   11.46404   10.05508   8.434489
5     13     11.47514   9.432438   8.273167   6.93977

Table 7
Results of Phase 3: weighted actual processing times P(i,r)/w_i.

i\r   1          2          3          4          5
1     33.33333   29.42343   24.18574   21.21325   17.79428
2     36.8       32.48347   26.70105   23.41943   19.64489
3     124        109.4552   89.97094   78.91329   66.19473
4     45.14286   39.84774   32.7544    28.7288    24.09854
5     86.66667   76.50093   62.88292   55.15445   46.26513
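The two weighted totals reported in this section, 17.0486 for Example 1 at α = 0.05 (Table 4) and 21.511 for Example 2 (Phase 8), can be cross-checked with a short script. This is my illustrative sketch, not part of the original paper:

```python
def total_weighted_completion(seq, p, w, a_exp, A=0.0):
    """Sum of w_i * C_i for the jobs in seq, where the job at the (r+1)-th
    position has actual processing time p[i] * (r+1) ** a_exp[r]."""
    t, wc = A, 0.0
    for r, i in enumerate(seq):
        t += p[i] * (r + 1) ** a_exp[r]
        wc += w[i] * t
    return wc

# Example 1, alpha = 0.05: WSPT sequence (4,2,5,1,3), common exponent -0.195.
p1 = {1: 9.3, 2: 8.9, 3: 16.1, 4: 6.3, 5: 8.5}
w1 = {1: 0.15, 2: 0.25, 3: 0.1, 4: 0.35, 5: 0.15}
wc1 = total_weighted_completion([4, 2, 5, 1, 3], p1, w1, [-0.195] * 5)
print(round(wc1, 4))  # 17.0486, the alpha = 0.05 row of Table 4

# Example 2: Phase 5 sequence {1,2,4,5,3} with position-dependent exponents.
p2 = {1: 5, 2: 9.2, 3: 12.4, 4: 15.8, 5: 13}
w2 = {1: 0.15, 2: 0.25, 3: 0.1, 4: 0.35, 5: 0.15}
wc2 = total_weighted_completion([1, 2, 4, 5, 3], p2, w2,
                                [-0.15, -0.18, -0.292, -0.326, -0.39])
print(round(wc2, 3))  # 21.511, the Phase 8 result
```

Both totals agree with the DICOPT results quoted in the text to the printed precision.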

P[1] = P4·1^a = (6.3)·1^{−0.195} = 6.3
C[1] = C[0] + P[1] = 0 + 6.3 = 6.3
P[2] = P2·2^a = (8.9)·2^{−0.195} = 7.774798774
C[2] = C[1] + P[2] = 6.3 + 7.774798774 = 14.074798774
P[3] = P5·3^a = (8.5)·3^{−0.195} = 6.86088723
C[3] = C[2] + P[3] = 14.074798774 + 6.86088723 = 20.935686004
P[4] = P1·4^a = (9.3)·4^{−0.195} = 7.097105322
C[4] = C[3] + P[4] = 20.935686004 + 7.097105322 = 28.032791326
P[5] = P3·5^a = (16.1)·5^{−0.195} = 11.7632337
C[5] = C[4] + P[5] = 28.032791326 + 11.7632337 = 39.796025026
C4 = C[1] = 6.3
C2 = C[2] = 14.074798774
C5 = C[3] = 20.935686004
C1 = C[4] = 28.032791326
C3 = C[5] = 39.796025026
ΣwC = 0.15·28.032791326 + 0.25·14.074798774 + 0.1·39.796025026 + 0.35·6.3 + 0.15·20.935686004 = 17.04857

Example 2. Suppose that five jobs are ready to be processed on a single machine, and the predetermined confidence levels for these jobs are 0.5, 0.8, 0.7, 0.9 and 1, respectively. Each position's learning effect coefficient is different, and the predetermined confidence levels for these positions are 0.5, 1, 0.2, 0.6 and 0.5, respectively. The time A at which the machine can start to process is equal to 0 time units. Processing times, learning coefficients and weights for the jobs are shown in Table 5. Using the common algorithm, the actual processing times P(i,r) obtained from Phase 2 of the algorithm are shown in Table 6. Phase 4 of the algorithm gives the schedule S = {1,2,3,5,4}; using this sequence, the makespan value is obtained as 38.826 in Phase 6 and the total completion time as 109.456 in Phase 7. The results of Phase 3 of the algorithm can be seen in Table 7. Phase 5 of the algorithm gives the schedule S = {1,2,4,5,3}; using this sequence, the total weighted completion time is obtained as 21.511 in Phase 8. Each of the final Phases (6-8) of the algorithm gives the same results as the DICOPT solver in GAMS 24.6 software.

6. Conclusion

In this paper, we consider single machine scheduling problems with triangular fuzzy processing times under triangular fuzzy learning effect coefficients.
A credibility measure is proposed to build a chance constrained program. Furthermore, by using predetermined confidence levels and the inverse function of the likelihood profile, the chance constraints are converted into crisp equivalents. For the three objectives, two FMINLP model types are proposed, with and without a common confidence level. For the same confidence levels, the well-known dispatching rules SPT and WSPT ensure the optimum schedules. However, when dealing with different confidence levels, the SPT and WSPT dispatching rules must be extended to search over both job and position indexes. For future research, we suggest extending the problem environment to parallel machine or job-shop settings and investigating polynomially solvable algorithms for the extended problems.


Bentrcia T, Mouss LH, Mouss NK, Yalaoui F, Benyoucef L. Evaluation of optimality in the fuzzy single machine scheduling problem including discounted costs Int. J Adv Manuf Technol 2015;80:1369–85. Biskup D. Single-machine scheduling with learning considerations. Eur J Oper Res 1999;115:173–8. Ahmadizar F, Hosseini L. Single-machine scheduling with a position-based learning effect and fuzzy processing times. Int J Adv Manuf Technol 2011;65:693–8. Ahmadizar F, Hosseini L. Minimizing makespan in a single-machine scheduling problem with a learning effect and fuzzy processing times. Int J Adv Manuf Technol 2013;65:581–7. Zadeh L. The concept of a linguistic variable and its application to approximate reasoning Parts 1,2,3. Inf Sci 1975;8:199–249. & 301-357 & 43-80. Zadeh L. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst 1978;1:3–28. L B, Liu Y-K. Expected value of fuzzy variable and fuzzy expected value models. IEEE Trans Fuzzy Syst 2002;10(4):445–50. Liu B. Theory and Practice of Uncertain Programming. second ed Heidelberg: Physica-Verlag; 2002. Liu B. Uncertainty Theory: An Introduction to Its Axiomatic Foundations. Berlin: Springer; 2004. Charnes A, Cooper WW. Chance- constrained programming. Manage Sci 1959;6:73–9. Liu B, Iwamura K. Chance constrained programming with fuzzy parameters. Fuzzy Sets Syst 1998;94:227–82. Mosheiov G, Sidney JB. Scheduling with general job-dependent learning curves. Eur J Oper Res 2003;147:665–70. Kuolamas C, Kyparisis GJ. Single-machine and two machine flowshop scheduling with general learning functions. Eur J Oper Res 2007;178:402–7. Yin Yunqiang, Liu Min, Hao Jinghua. Single-machine scheduling with job-position-dependent learning and time-dependent deterioration. IEEE Trans Syst ManCybern A Syst Hum 2012:192–200.