
Computers & Industrial Engineering 57 (2009) 592–607 www.elsevier.com/locate/caie

Embedded simulation on a multiprocessor job scheduling system with inspection

Kang-hung Yang*, P. Simin Pulat, Yongpei Guan

University of Oklahoma, School of Industrial Engineering, Norman, OK 73072, USA

Available online 19 September 2008

Abstract

This paper first develops an architecture for a multiprocessor job scheduling system with an embedded simulation technique. The architecture provides a shell for applications characterized by two scheduling policies, a heuristic algorithm and a First-In-First-Out (FIFO) policy. These policies are implemented in the simulation model using the embedded technique, and their performance is compared using queue length, waiting time, and flow time as the criteria. Next, we design two simulation studies based on two different real-world applications in order to examine the performance of multiprocessor systems with and without inspection operations under the two scheduling policies. The two applications are berth allocation in container terminal operations and production scheduling in an Original Equipment Manufacturer (OEM) power supply factory. The results show that a proper scheduling policy performs better than the traditional FIFO approach for a multiprocessor system. Our study also provides guidelines for balancing a system when a final inspection activity is added.

© 2008 Elsevier Ltd. All rights reserved.

Keywords: Embedded simulation; Multiprocessor job scheduling; Inspection

1. Introduction

1.1. Multiprocessor job scheduling and its extensions with inspection operations

The multiprocessor task scheduling (MTS) problem is a class of task scheduling problems in which each task is processed by multiple processors (machines) simultaneously and no preemption is allowed (cf. Drozdowski, 1996; Li, Lei, & Pinedo, 1997). Applications of MTS include human resource planning, diagnosable microprocessor systems, berth allocation, and manufacturing systems (cf. Chen & Lee, 1999), among others. Recently, inspection activities have become more important for certain types of MTS problems, and the U.S. government has taken several actions to counter terrorist attacks.

Corresponding author. E-mail addresses: [email protected] (K. Yang), [email protected] (P.S. Pulat).

0360-8352/$ - see front matter © 2008 Elsevier Ltd. All rights reserved. doi:10.1016/j.cie.2008.09.011


For instance, the Container Security Initiative (CSI) program helps increase security for containerized cargo shipped to the United States (cf. Roach, 2003). CSI sets the goal that 85% of containers heading for the U.S. be pre-screened at CSI seaports around the world before entering the country. Accordingly, container security inspection operations are important in these seaports. Another example of an inspection activity comes from manufacturing systems, where the quality inspection of final products has become a standard production step in most factories. Either a security check or quality assurance will have a serious impact on the whole system when the inspection rate does not match the processor service rates. We refer to this type of problem as multiprocessor task scheduling with inspection operations (MTSI).

1.2. Relevant literature

MTS problems can be solved by two common approaches: optimization or approximation algorithms, and simulation. Since many deterministic MTS problems have been proven NP-complete (cf. Lee & Cai, 1999), it is difficult to find optimal solutions, and previous studies in this area have therefore focused on developing heuristic algorithms that find near-optimal solutions. For instance, Chen and Lee (1999) studied a one-job-on-multiple-machines problem; they developed a pseudo-polynomial time algorithm to solve two-machine problems optimally and provided a heuristic for three-machine problems. Guan, Xiao, Cheung, and Li (2002) developed a heuristic algorithm for a similar type of multiprocessor task scheduling problem with an application to berth allocation; their heuristic has a relative error within 100% for both the weighted and unweighted processing time cases. More recently, Caramia, Dell'Olmo, and Iovanella (2005) designed a heuristic that provides a lower bound for the objective of minimizing makespan when non-consecutive processors are allowed to process one job. Ying and Lin (2006) developed an Ant Colony System heuristic for the multiprocessor task scheduling problem in a multistage hybrid flow-shop environment; for a given number of jobs, job processing times, and required number of machines at each stage, their approach performs better than Genetic Algorithm and Tabu Search approaches. Huang, Chen, Chen, and Wang (2007) developed a simple linear-time approximation scheme for MTS problems on four processors in which the makespan is bounded within a factor of 1.5. Lagrangian relaxation and other decomposition-based optimization approaches have also been used to solve multiprocessor task scheduling problems (cf. Guan & Cheung, 2004; Jansen & Porkolab, 2002, 2005). All of these previous works provide detailed analyses and mathematical insights, and most of the algorithms address the deterministic processing time case. In reality, deterministic processing times are rare; most MTS problems involve uncertain processing times, meaning that jobs of the same size may have slightly different processing times, or that the same processor may take different times to handle the same type of job. If uncertain processing times are considered in MTS problems, the efficiency of all existing algorithms needs to be re-evaluated.

Simulation is another approach for MTS problems with uncertain processing times and is effective for evaluating the performance of stochastic systems. It also provides a friendly interface, visualizes results, and easily handles time-dependent events.
However, unlike the optimization approach, simulation often relies on experimental methods based on general principles or accumulated experience, and it is difficult to find an optimal or near-optimal solution by simulation alone. El Sheikh, Paul, Harding, Harding, and Balmer (1987) developed a simulation model for port operations in which ships are assigned to berths based on yearly observation. Tahar and Hussian (2000) applied the process-oriented simulation software ARENA to analyze the performance of the container berth system in Kelang harbor; the berth allocation rules were fixed to four policies in the simulation model. Alattar, Karkare, and Rajhans (2006) used simulation tools to analyze container terminal development policies; in their simulation model, cost is the main factor in deciding the location of cranes and berths.

Effective processor allocation is an issue for many practical MTS problems. However, complicated processor allocation rules make it difficult to implement such a system with process-flow simulation modeling alone. In this paper, we employ an embedded simulation technique to model MTS problems within a process-flow simulation and thereby take advantage of both the optimization and simulation approaches. Embedded simulation is a technique in which simulation is used as part of a decision support system (cf. Banks, 1998). Embedded simulations can be used to estimate certain parameters, which then serve as inputs for another model. For instance, Wu and Wysk (1989) used an embedded simulation to explore the concept of simulation-based shop floor control. Seifert, Kay, and Wilson (1998) used hierarchical simulation, an alias of embedded simulation, to evaluate AGV routing strategies.


Soundarapandyan (2004) used embedded simulation to study order promising, developing a task generator for an ongoing make-to-order production system. Another application of embedded simulation is to combine different types of models: Lee, Cho, and Kim (2002) used the embedded technique to combine discrete and continuous modeling approaches for supply chain management applications, since the nature of a supply chain is neither completely discrete nor continuous. In general, the embedded simulation technique gives a modeler a flexible way to combine other methodologies with a simulation model.

1.3. Our approach

In this study, we build an algorithm-embedded simulation model, i.e., a two-level simulation, for MTS and MTSI with stochastic processing times. In the first level, we model the physical layout of the MTS or MTSI system with the functions and blocks provided by the simulation software. In the second level, a heuristic algorithm or FIFO policy is embedded to determine the processor allocation. When the simulation runs, the processor allocations from the second-level embedded model serve as inputs for the first-level simulation model; that is, the second level updates the status of the first level in real time during the simulation run. With this capability, a modeler can insert almost any optimization method into a simulation model through the embedded technique, with great flexibility.

Two applications are studied in this paper. The first comes from the berth allocation problem in container terminal operations. In this case, the berth is divided into several segments, and these segments can be considered processors. Because of differing ship sizes, each ship may occupy several segments (processors). After berth allocation, a security inspection activity is required to scan containers, and a suitable service rate must be found for that inspection activity. Here the berth allocation can be formulated as an MTS problem and the whole system can be treated as an MTSI problem. The second application comes from the Original Equipment Manufacturer (OEM) power supply industry. One assembly line usually works on one order at a time; in a busy season, however, it is common for the same order to be made by more than one production unit, so several assembly lines serve the same order. Before the final products are shipped to customers, they have to be checked to fulfill quality standards. In this case, the assembly operations can be treated as an MTS problem and the whole system as an MTSI problem.

1.4. Contributions and the organization of the paper

MTS has been widely studied with optimization approaches, but without considering the uncertainties that arise in real cases. MTS is also hard to implement by simulation; uncertainties can then be considered only in simple cases that rarely occur in the real world. Although the embedded technique provides a way to combine simulation and optimization approaches, it is still hard to embed MTS algorithms into a simulation model, because each task is processed by multiple processors simultaneously. The contributions of this paper lie in the following aspects:

(1) We deal with the multiple-processor scheduling problem under the scenario in which each job is processed by several consecutive processors simultaneously, as in the berth allocation problem and the OEM factory system described above.
It is very difficult for commercial simulation software with default functions to deal with this type of problem, especially the one studied in this paper, in which the number of processors simultaneously occupied by each job depends strongly on the input job size. We therefore develop a copy entity strategy as a facilitating tool that gives the programmer the flexibility to manipulate a job that seizes several processors simultaneously. In this way we can implement an MTS algorithm within a simulation model framework and take advantage of both the optimization and simulation approaches for analyzing MTS and MTSI with both deterministic and stochastic processing times.

(2) With increasing demand for container security inspections in container terminals and for quality assurance in factories, we extend the MTS model to an MTSI model in order to find a strategy for the inspection center. An inspection activity is commonly required in such systems and may become the system bottleneck. Our simulation approach determines a feasible service rate and how many inspection centers might be required to avoid a bottleneck caused by the inspection activity.


The remainder of this paper is organized as follows. Section 2 describes the model design and the two scheduling policies: a heuristic approach and the commonly used FIFO policy. A detailed description of the simulation model is provided in Section 3. Section 4 demonstrates the verification of combining simulation and optimization algorithms through the embedded technique. In Section 5, we describe the two application problems: the berth allocation and OEM factory cases. Based on the analytical results, discussions are given in Section 6, and finally, in the last section, we summarize our study.

2. Model design

In this section, we introduce the basic conceptual structure of the simulation model and identify the algorithms used in this study. A series of diagrams helps visualize the model and its output. Section 2.1 presents the conceptual model, Sections 2.2 and 2.3 explain the algorithms used in this study, and Section 2.4 describes the graph representation of jobs and demonstrates an example output of the embedded simulation model.

2.1. Conceptual model

The two-level conceptual simulation model is illustrated in Fig. 1. One level is referred to as the embedded model; in it, the job allocation rule, such as the heuristic or FIFO algorithm, is implemented. The other level is referred to as the main model, in which the physical system is implemented; it simulates how the machines process the given jobs according to the job allocation rule defined in the embedded model. In this study, the embedded simulation is triggered whenever job entities are generated. At that moment the main model is temporarily suspended, the simulation clock stops, and the embedded simulation model starts. According to the job sizes and processing times, the job allocations are determined in the embedded model. The main model then continues, taking the embedded model's results as input, until the next job entities are generated and the embedded model provides a new input for the main model. This process is repeated until the simulation terminates.

Fig. 1. Conceptual model: job requests enter the embedded model (job allocation rule: heuristic, FIFO, or others such as LIFO or priority), which exchanges system status with the main model (multiprocessor allocation of n jobs to m processors) and produces the output.
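The interaction in Fig. 1 can be summarized as an event-driven loop: whenever a batch of job requests arrives, the main model is suspended, the embedded allocation rule is consulted, and its assignments are fed back before the clock resumes. The following minimal Python sketch illustrates this control flow with a deliberately trivial stand-in rule. All names here are hypothetical; the study's actual implementation is C++ code embedded in an AweSim 'EVENT' node, not a stand-alone script.

# Minimal sketch of the two-level conceptual model in Fig. 1 (hypothetical names).
# The main-model status is reduced to a list of processor-availability times; the
# "rule" argument plays the role of the embedded job allocation model.

def trivial_rule(jobs, avail, now):
    """Embedded-model stand-in: place each job on the first ℓ consecutive processors."""
    plan = []
    for size, ptime in jobs:
        procs = list(range(size))                      # processors 0 .. size-1
        start = max([now] + [avail[p] for p in procs]) # start as early as possible
        finish = start + ptime
        for p in procs:
            avail[p] = finish                          # feed the result back to the main model
        plan.append(((size, ptime), procs, start, finish))
    return plan

def run_two_level(arrivals, m, rule):
    avail = [0.0] * m                                  # main-model system status
    schedule = []
    for t, jobs in arrivals:                           # each job request triggers the embedded model
        schedule += rule(jobs, avail, t)
    return schedule

batches = [(0.0, [(2, 4.0), (3, 6.0)]), (5.0, [(4, 8.0)])]   # (arrival time, [(size, time), ...])
for job, procs, start, finish in run_two_level(batches, m=10, rule=trivial_rule):
    print(f"job {job} on processors {procs}: start {start}, finish {finish}")

The heuristic and FIFO rules of Sections 2.2 and 2.3 would simply replace trivial_rule; a fuller sketch of those two rules is given after Table 1.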


2.2. Heuristic

The heuristic algorithm used in this paper is modified from the algorithm developed by Guan et al. (2002) for the deterministic processing time case, i.e., the case in which the processors dealing with the same job have the same processing time. In the real world, however, processors dealing with the same job might have different processing times. In this paper, we assume that processors dealing with the same job have the same expected processing time, and when applying the heuristic we use the expected processing time instead of a deterministic processing time to allocate processors to a job. In the later analyses of the simulation model outputs, we assume that the processing time of a vessel follows a normal distribution whose mean is the expected processing time. By embedding the heuristic in a simulation model, the heuristic can be applied not only to deterministic processing time cases but also to stochastic processing time cases.

The heuristic algorithm contains four steps. The first is a pre-processing step that sorts all of the jobs by size from smallest to largest; the following steps cluster the jobs into several groups and assign them to processors, repeating this process until all jobs are allocated. The detailed steps implemented in the embedded model are as follows. Let {t1, t2, ..., tn} and {ℓ1, ℓ2, ..., ℓn} correspond to the job set {J1, J2, ..., Jn}, where ti and ℓi are the expected processing time and the size of job i, respectively, and the ti's and ℓi's are agreeable.

Step 0: Sort and number the jobs such that t1 ≤ t2 ≤ ... ≤ tn and ℓ1 ≤ ℓ2 ≤ ... ≤ ℓn. Initialize I = 1.

Step 1: Let {Js, Js+1, ..., Jn} be the set of unscheduled jobs. Let

u = max{ q : Σ_{j=s}^{q} ℓj ≤ m and q ≤ n },

where m is the number of processors and n is the number of jobs. Set GI ← {Js, Js+1, ..., Ju}.

Step 2: For r = s, s + 1, ..., u:
(a) if I is odd, assign Jr to processors m − Σ_{j=r}^{u} ℓj + 1, m − Σ_{j=r}^{u} ℓj + 2, ..., m − Σ_{j=r+1}^{u} ℓj, with Σ_{j=u+1}^{u} ℓj = 0;
(b) if I is even, assign Jr to processors Σ_{j=r+1}^{u} ℓj + 1, Σ_{j=r+1}^{u} ℓj + 2, ..., Σ_{j=r}^{u} ℓj, with Σ_{j=u+1}^{u} ℓj = 0.
Schedule Jr behind the existing scheduled jobs on these processors, and make it start as early as possible.

Step 3: Set I ← I + 1. If there are no more unscheduled jobs, stop; otherwise go to Step 1.

2.3. FIFO

Similar to Step 0 of the heuristic algorithm, we first sort newly arriving jobs according to their sizes, then find the earliest available consecutive processors and assign each job from the first of these processors to the last. The steps are as follows.

Step 0: Sort and number the jobs such that t1 ≤ t2 ≤ ... ≤ tn and ℓ1 ≤ ℓ2 ≤ ... ≤ ℓn, where the ti's and ℓi's are agreeable. Let R1, R2, ..., Rm represent processors 1 to m, and put all jobs into a job list {J1, J2, ..., Jn}. Initialize the earliest available time of each processor as w1 = w2 = ... = wm = 0.

Step 1: Take job Jk out of the job list, where k is the smallest index in the job list, and initialize Ω = {1, 2, ..., m}.

Step 2: Select the processor r̃ = arg min{wj : j ∈ Ω}. If r̃ + ℓk − 1 ≤ m and wj ≤ w_r̃ for r̃ ≤ j ≤ r̃ + ℓk − 1, allocate the kth job to processors R_r̃ through R_(r̃+ℓk−1) and update wj = w_r̃ + tk for r̃ ≤ j ≤ r̃ + ℓk − 1. Otherwise, set Ω = Ω \ {r̃} and repeat Step 2 until job Jk is assigned.

Step 3: Go to Step 1; stop when all jobs are assigned.

2.4. An example for the heuristic and FIFO

A graph representation of jobs and parallel machines helps visualize the outputs of the heuristic and FIFO algorithms and the later model verification. Fig. 2 shows a berth allocation example with 3 jobs and 10 processors. Table 1 shows a group of 6 jobs, sorted by size, with fixed processing times (see footnote 1). Assume there are 12 processors. Time-space graphs of the job allocations produced by the two algorithms are shown in Figs. 3 and 4.

Footnote 1: In the deterministic case there is no difference between the expected processing time and the fixed processing time of a job.

Fig. 2. Representation of MTS: a time-space graph of a berth allocation example with 3 jobs on 10 berth segments (processors). The vertical axis is the berth segment (1–10) and the horizontal axis is time. One job has length 2 units, processing time 4 units, and occupies berths 2 and 3; another has length 3 units, processing time 6 units, and occupies berths 4, 5 and 6; the third has length 4 units, processing time 8 units, and occupies berths 7, 8, 9 and 10.

Table 1. Sorted jobs by size.

j:                     1   2   3   4   5   6
Job's size ℓj:         2   3   4   4   5   5
Processing time tj:    3   4   5   5   8   9
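To make the two allocation rules concrete, the following Python sketch implements the heuristic of Section 2.2 and the FIFO rule of Section 2.3 (as literally described there) and applies them to the Table 1 jobs with m = 12 processors. It is illustrative only: the study embeds equivalent C++ code in the AweSim model, and implementation details such as tie-breaking may differ, so the FIFO output in particular need not reproduce Fig. 3 exactly.

# Stand-alone sketch of the two allocation rules (Sections 2.2 and 2.3), applied
# to the Table 1 jobs on m = 12 processors numbered 1..m. Illustrative only.

def start_job(procs, ptime, avail):
    """Schedule a job behind existing work on procs and start it as early as possible."""
    start = max(avail[p] for p in procs)
    for p in procs:
        avail[p] = start + ptime
    return start

def heuristic(jobs, m):
    """jobs: list of (size, time), sorted so that sizes and times are agreeable."""
    avail = {p: 0 for p in range(1, m + 1)}
    plan, s, group, n = [], 0, 1, len(jobs)
    while s < n:
        u, width = s, jobs[s][0]                    # Step 1: group J_s..J_u fits within m
        while u + 1 < n and width + jobs[u + 1][0] <= m:
            u, width = u + 1, width + jobs[u + 1][0]
        tail = 0                                    # sum of sizes of jobs r+1..u
        for r in range(u, s - 1, -1):               # Step 2 (within-group order is free)
            size, ptime = jobs[r]
            if group % 2 == 1:                      # odd group: pack against processor m
                procs = list(range(m - tail - size + 1, m - tail + 1))
            else:                                   # even group: pack against processor 1
                procs = list(range(tail + 1, tail + size + 1))
            start = start_job(procs, ptime, avail)
            plan.append((r + 1, procs, start, start + ptime))
            tail += size
        s, group = u + 1, group + 1                 # Step 3
    return sorted(plan)

def fifo(jobs, m):
    """FIFO rule of Section 2.3, implemented literally from the stated steps."""
    avail = {p: 0 for p in range(1, m + 1)}
    plan = []
    for k, (size, ptime) in enumerate(jobs):
        candidates = set(range(1, m + 1))
        while candidates:
            r = min(candidates, key=lambda p: (avail[p], p))   # least-loaded, lowest index
            window = list(range(r, r + size))
            if r + size - 1 <= m and all(avail[j] <= avail[r] for j in window):
                start = avail[r]
                for j in window:
                    avail[j] = start + ptime
                plan.append((k + 1, window, start, start + ptime))
                break
            candidates.discard(r)                   # window infeasible: drop r and retry
    return plan

table1 = [(2, 3), (3, 4), (4, 5), (4, 5), (5, 8), (5, 9)]      # (size ℓj, time tj)
for name, rule in (("heuristic", heuristic), ("FIFO", fifo)):
    print(name)
    for job, procs, start, finish in rule(table1, 12):
        print(f"  J{job}: processors {procs[0]}-{procs[-1]}, start {start}, finish {finish}")

For the Table 1 data, the heuristic forms the groups {J1, J2, J3}, {J4, J5} and {J6} and packs them alternately against processor 12 and processor 1, which corresponds to the staircase pattern sketched on the left of Fig. 3.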

Fig. 3. Job allocations for the Table 1 example on 12 processors: the left panel shows the allocation produced by the heuristic, the right panel the allocation produced by FIFO.

Fig. 4. A conceptual model of MTS and MTSI for a berth allocation.

According to the heuristic algorithm, jobs are grouped first and then, according to their sizes and processing times, allocated to different processors. With FIFO, in contrast, the availability of processors is always checked first and each job is allocated to the first available set of consecutive processors; otherwise, the job stays in the queue.


Fig. 5. A conceptual model of MTS and MTSI for assembly line assignments.

3. Detailed descriptions of the simulation model

In this section, we use the two applications, berth allocation and the OEM factory system, as the test cases for evaluating whether the embedded approach is effective. In the following discussion, we describe the two systems, MTS and MTSI, with certain identified considerations, and a flow diagram shows how the simulation model works for MTS and MTSI. Although the embedded technique provides a way to implement algorithms within the simulation model, the MTS algorithm is still difficult to implement there; we therefore describe the 'copy entity strategy' that allows us to implement the MTS algorithm in the simulation model.

3.1. System descriptions

Figs. 4 and 5 show the conceptual models of the berth allocation and OEM factory systems. In the berth allocation case, if we consider only the berth allocation itself, the system can be formulated as an MTS problem, as shown in the left half of Fig. 4. According to its size, each ship is allocated to processors for operations. If a ship cannot go directly into operations, it waits in the queue until the processors assigned to it are available. After finishing the berth operations, the ship leaves the berth and exits the system. A similar process occurs in the OEM assembly operations: according to the product quantity, jobs are allocated to different assembly lines, and a customer order in the OEM factory plays the role of a ship in the berth allocation case (see the left half of Fig. 5).

MTSI is the system obtained when a container inspection station is added after the berth allocation operations or a quality check station is added after the assembly operations, as shown in Figs. 4 and 5. In the berth allocation case, a container is moved to the check station once it is unloaded and leaves the system after it passes the security check. Similarly, in the OEM factory case, once product assembly is finished, the product is moved to the quality control check station. There is one difference between the two systems: in the berth allocation case a truck can take a container to the check station immediately after the container is unloaded, whereas in the OEM power supply factory all finished products must go through a 'burn-in' process (footnote 2), normally 24 h, once they are assembled. The final products therefore wait 24 h before they are moved to the quality check station.

3.2. Simulation model descriptions

In this study, we choose the Visual SLAM and AweSim simulation software, with embedded C++ programming, to develop the embedded MTS and MTSI simulation models. External C++ program code and an internal simulation model can be combined in the 'EVENT' function node of the AweSim model.

Footnote 2: Burn-in is a process that tests whether a product can function normally in a high-temperature environment.


Fig. 6. MTS and MTSI simulation model flow chart: job entities arrive and pass through the 'Event' node, where the simulation is suspended and the jobs are allocated; the copied entities wait in the queue until all of their designated processors are ready, then go through the multiprocessor system (MTS statistics are collected); the corresponding container or carton entities then enter the inspection queue and go through inspection (MTSI statistics are collected).

Once an entity goes through the 'EVENT' node, it triggers the embedded model. This flexible characteristic makes it possible to combine the simulation and optimization approaches. Fig. 6 shows the detailed MTS and MTSI simulation model flow. The berth allocation and OEM cases share the same simulation model logic and structure (the conceptual model) through different parameter settings and algorithms, which are embedded via the C++ program code. In the AweSim simulation model, once a job is generated it goes through the 'Event' node, in which we are allowed to implement our heuristic algorithm and FIFO policy. After the jobs are arranged, the program checks whether the corresponding processor resources are available; if so, the job entities proceed through the operation activities. In the MTSI case, once the job entities (ships in the berth allocation case or orders in the OEM factory case) have gone through the processor operation activities, the corresponding entities, i.e., containers in the berth allocation case and product cartons (footnote 3) in the OEM case, are triggered to go through the inspection check activities. Before entities leave the system, performance indicators are collected, such as waiting time in the queue, average queue length, entity time in the system, and processor service rates. These performance indicators are used to compare the efficiency of the heuristic algorithm and the FIFO policy in both the MTS and MTSI systems.

3.3. Copy entity technique in the simulation model

Although the 'Event' node in AweSim provides a way to implement user-defined functions written in C++ in the simulation model, the simulation is still hard to implement because one job occupies several consecutive processors simultaneously. For example, a job of size 3 can be allocated to processors {1, 2, 3}, {2, 3, 4}, {3, 4, 5}, and so on, and once an entity starts processing it occupies the corresponding processors. Because the job size in our cases varies from 1 to 12, it is not straightforward to define the simulation model logic for this with an entity-seizes-resource strategy. Instead, the copy entity technique is used to deal with this difficulty. The following is a brief description of the technique:

Footnote 3: One order in the OEM case contains a certain number of product cartons.


(1) Record a job's attributes, including the job generation time and the job size, once a job entity is generated.
(2) Generate a number of copy entities equal to the job's size; for example, three copy entities are generated for a job of size 3. All attributes of the copy entities are the same as those of the original entity.
(3) Replace the original entity by the copy entities and terminate the original entity, to avoid double-counting the performance parameters.
(4) Put the copy entities in temporary stacks where they wait to go through the processors.
(5) Let the copy entities start together once all of the designated processors are ready for them.
(6) The statistics of a job entity are obtained as the average over all of its copy entities. (A short code sketch of this bookkeeping is given at the end of Section 4.1 below.)

4. Verifications of the simulation model

Before conducting experiments on the berth allocation and OEM factory cases, we have to ensure that the outputs of the simulation model are correct, i.e., that the copy entity strategy indeed links the MTS algorithm and the simulation. In Section 4.1 we check the allocation patterns of two job groups. In Section 4.2 we discuss service rates for the MTSI problem, setting a service rate as a base for comparison and checking whether the output service rate matches that base.

4.1. Job allocation pattern verifications on the simulation model

To verify our simulation model, we choose an example similar to the one in Section 2.4. In this instance there are 12 jobs and 12 processors. The 12 jobs are divided into two groups: jobs 1 to 6 in job group 1 and jobs 7 to 12 in job group 2. The two job groups have the same job attributes, listed in Table 1. Figs. 7–9 show the results from the heuristic algorithm and Figs. 10 and 11 show the results from the FIFO algorithm.

Fig. 7 shows two batches of job arrivals, at t = 0 and t = 21, respectively; because these two job requests do not overlap, job group 1 has the same pattern as job group 2. Figs. 8 and 9 show the case in which the second job group arrives at t = 9. Since job 6 has not yet been processed at that time, there are two scenarios: job 6 may or may not be re-allocated. Fig. 8 shows the case in which the unprocessed job is not re-allocated and Fig. 9 shows the case in which it is re-allocated. When job 6 is not re-allocated, job group 1 has the same allocation pattern as in Fig. 7, and jobs 7 and 11 have the chance to start at t = 12 and t = 15, respectively. Figs. 8 and 9 also demonstrate that if an unprocessed job can undergo re-allocation, it will have a shorter completion time.

Figs. 10 and 11 show the job allocation results for the FIFO case. The algorithm always checks processor availability starting from the first processor, so the assignment pattern for job group 1 differs from that of the heuristic approach. Fig. 10 shows that the two batches of jobs do not overlap and have the same patterns. Fig. 11 shows the case in which the second batch of jobs arrives at t = 15. In the FIFO case, earlier arriving jobs always have higher priority than later ones; in other words, re-allocation never occurs under the FIFO algorithm.
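The copy entity bookkeeping referred to in step (6) of Section 3.3 can be sketched as follows in stand-alone Python (all names are hypothetical; the study's implementation lives in the C++ code attached to the AweSim 'EVENT' node):

# Sketch of the copy entity bookkeeping: a job of size s is replaced by s copy
# entities, one per designated processor; the copies are released together once
# every designated processor is free, and job statistics are averaged over copies.

from dataclasses import dataclass, field

@dataclass
class Job:
    job_id: int
    size: int
    proc_time: float
    created_at: float
    processors: list                      # designated by the embedded allocation rule
    copies: list = field(default_factory=list)

def spawn_copies(job):
    """Steps (1)-(3): replace the original entity by one copy per designated processor."""
    job.copies = [{"processor": p, "created_at": job.created_at, "finish": None}
                  for p in job.processors]
    return job.copies

def release_time(job, avail):
    """Steps (4)-(5): the copies wait, then start together once every processor is free."""
    return max([job.created_at] + [avail[p] for p in job.processors])

def flow_time(job):
    """Step (6): job-level statistics are the average over the copy entities."""
    times = [c["finish"] - c["created_at"] for c in job.copies]
    return sum(times) / len(times)

# Minimal usage: a size-3 job designated processors 4-6, with processor 6 busy until t = 2.
avail = {4: 0.0, 5: 0.0, 6: 2.0}
job = Job(job_id=1, size=3, proc_time=5.0, created_at=0.0, processors=[4, 5, 6])
spawn_copies(job)
start = release_time(job, avail)          # = 2.0: copies are held until processor 6 is free
for c in job.copies:
    c["finish"] = start + job.proc_time
print("job flow time:", flow_time(job))   # 7.0, the average over the three identical copies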

Fig. 7. Second batch of jobs requested at t = 21 (heuristic).

Fig. 8. Second batch requested at t = 9, jobs allowed to be re-scheduled (heuristic).

Fig. 9. Second batch requested at t = 9, jobs not allowed to be re-scheduled (heuristic).

Fig. 10. Second batch of jobs requested at t = 24 (FIFO).

Fig. 11. Second batch of jobs requested at t = 15 (FIFO).

4.2. Service rate verifications on the simulation model

As another verification, we compare the theoretical service rate and the measured service rate to see whether the simulation model is suitable for analyzing both the berth allocation and the OEM factory cases. The theoretical service rate is defined as the utilization of a processor times its given service rate, and the actual service rate is defined as the number of jobs going through a processor divided by the total simulation time.
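A minimal sketch of this check is given below, assuming the per-processor utilizations, nominal service rates, and job counts have already been extracted from the simulation output; the numbers are made-up placeholders, and the acceptance criteria (slope and correlation coefficient close to 1) are discussed after Table 2.

# Sketch of the Section 4.2 verification: regress the theoretical service rate
# (utilization x nominal service rate) on the measured rate (jobs through the
# processor / total simulation time). Placeholder numbers, not the study's data.

import numpy as np
from scipy import stats

sim_time = 30000.0                                    # simulation horizon (h)
utilization  = np.array([0.62, 0.58, 0.66, 0.60])     # per-processor utilization
nominal_rate = np.array([0.35, 0.35, 0.35, 0.35])     # given service rate (jobs/h)
jobs_done    = np.array([6485, 6105, 6950, 6290])     # jobs processed per processor

theoretical = utilization * nominal_rate              # y
actual = jobs_done / sim_time                         # x

fit = stats.linregress(actual, theoretical)
print(f"slope = {fit.slope:.3f}, r = {fit.rvalue:.3f}, p-value = {fit.pvalue:.3g}")
# Verification criterion: slope and correlation coefficient should be close to 1.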


Table 2. Validations of different scenarios for the embedded simulation model.

Scenario                                           Slope   Correlation coefficient   p-value
Berth allocation, heuristic, w/o re-scheduling     0.999   0.998                     0.018
Berth allocation, heuristic, with re-scheduling    0.990   0.997                     0.003
Berth allocation, FIFO                             0.986   0.999                     0.083
OEM, heuristic, w/o re-scheduling                  0.992   0.999                     0.071
OEM, FIFO                                          0.993   0.999                     0.081

For one specific processor, the theoretical and actual service rates should be close to each other. Treating the theoretical service rate as the variable y and the actual service rate as the variable x, we perform a linear regression; if the slope is close to 1, the correlation coefficient is close to 1, or the p-value of the t-test is significant (normally tested at 0.05), the simulation model is considered verified for analyzing the system studied here. Table 2 indicates that the theoretical and actual service rates match each other, and the overall results support that the simulation model is verified.

5. Experimental design

In the experiments, the same number of processors as in the verification stage is used; for instance, there are 12 processors for both the berth allocation and the OEM factory system. For MTSI in the berth allocation case we assume only one machine in the inspection center, and in the OEM case there are 10 quality engineers in the quality inspection station. In the berth allocation case, jobs arrive every 3 days and three scenarios (low, medium, and high loading) are considered. The OEM factory case has similar parameter settings for its three scenarios (low, medium, and high pressure), except that jobs arrive every 2 days instead of every 3. There are two further differences between the cases. First, in the berth allocation case we allow re-allocation and re-scheduling of ships: for instance, if a ship is waiting in the queue, its original berth assignment can be changed. Second, a job may seize all resources in the berth allocation case, while a job can seize only part of the resource capacity in the OEM case; this reflects the fact that, in the power supply industry, an OEM factory has many orders from different companies and seldom uses all of its resources to support just one company. Here, we assume that a job occupies at most 50% of the resources. Finally, 10 samples were run for each combination and we report the average values. The simulation horizon was 30,000 h (approximately 3.5 years). Tables 3 and 4 show the details of the experimental settings for each scenario.

6. Simulation results and analysis

We ran simulations for the MTS and MTSI problems, respectively. For the MTS problems, we compare the performance of the heuristic and FIFO policies. In the simulation runs we do not isolate the transient state; since the main goal of this study is to compare the heuristic algorithm and FIFO under the same conditions, the state in which the statistics are collected does not influence the final conclusions. For the MTSI problems, we evaluate the service rate ratio between the inspection center and the processors.

Table 3. Stochastic processing time berth allocation cases.

Scenario         Job requests every (days)   Crane processing time for one job             Check station time per container   Job requests each time
Low loading      3                           N(processing time, 10% of processing time)    N(0.1, 0.05)                       21–40
Medium loading   3                           N(processing time, 10% of processing time)    N(0.1, 0.05)                       61–81
High loading     3                           N(processing time, 10% of processing time)    N(0.1, 0.05)                       101–120


Table 4. Stochastic processing time OEM factory case.

Scenario          Job requests every (days)   Assembly line processing time for one job     Check station time per carton   Job requests each time
Low pressure      2                           N(processing time, 10% of processing time)    N(0.1, 0.05)                    21–40
Medium pressure   2                           N(processing time, 10% of processing time)    N(0.1, 0.05)                    61–81
High pressure     2                           N(processing time, 10% of processing time)    N(0.1, 0.05)                    101–120
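The stochastic settings of Tables 3 and 4 can be sampled as in the short sketch below. This is a sketch under our reading of the tables: "job requests each time" is taken to be the number of jobs arriving at each request epoch, drawn uniformly from the listed range, and negative draws from the normal distributions are truncated at zero; neither detail is stated explicitly in the tables.

# Illustrative sampling for one low-loading berth allocation epoch (Table 3).
import random

rng = random.Random(0)

def job_processing_time(expected):        # crane time for one job: N(t, 10% of t)
    return max(0.0, rng.gauss(expected, 0.10 * expected))

def container_inspection_time():          # check station, per container: N(0.1, 0.05)
    return max(0.0, rng.gauss(0.1, 0.05))

def batch_size(low=21, high=40):          # "job requests each time" for the low-loading case
    return rng.randint(low, high)

print(batch_size(), round(job_processing_time(5.0), 2), round(container_inspection_time(), 3))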

Table 5. Simulation of 10-run results for the berth allocation case (H = heuristic, F = FIFO; values in parentheses are standard deviations).

Berth case        Avg. time in queue (h), H   Avg. time in queue (h), F   Flow time (h), H   Flow time (h), F   Avg. queue length, H   Avg. queue length, F
Low loading       22.3 (0.65)                 232.8 (26.4)                25.9 (0.66)        236.6 (26.44)      60.1 (8.51)            111.7 (13.82)
Medium loading    20.4 (0.37)                 488.7 (33.15)               23.5 (0.38)        492.6 (33.20)      302.9 (15.37)          515.4 (18.33)
High loading      22.0 (0.54)                 563.2 (43.84)               24.9 (0.55)        567.2 (43.78)      570.2 (17.08)          936.0 (21.68)

6.1. MTS

For MTS, the job entity is a ship in the berth allocation case and an order in the OEM case. The performance measures for each scenario include the average waiting time of a job in the queue, the average flow time of a job, and the average queue length. The results of the MTS simulations for the berth allocation and OEM factory cases are shown in Tables 5 and 6.

Table 5 shows the simulation results for the different scenarios of the berth allocation case, using the problem data of Section 5. In the tables, 'H' and 'F' denote the results obtained with the heuristic and FIFO policies, respectively. The results show that the heuristic approach performs much better than the traditional FIFO policy. For the low loading case, the average waiting time in the queue and the average flow time under the heuristic approach are only about 1/10 of those under FIFO, and the benefits of the heuristic are even more significant for the medium and high loading cases. The average queue length under the heuristic approach is about 1/2 of that under FIFO.

Table 6 shows the simulation results for the OEM cases. The heuristic approach again performs better than FIFO: for the low pressure case it leads to about 1/2 the average waiting time, average flow time, and average queue length of the FIFO approach, while the improvement is less pronounced for the medium and high pressure cases. We also compared the standard deviations of the heuristic results with those of the FIFO policy; the heuristic has smaller deviations and is therefore more robust and more stable.

6.2. MTSI

As mentioned in connection with the CSI program, container security inspection is the bottleneck of container terminal operations; according to our base scenarios, only 8% of the containers are inspected. In this section, we estimate the service rate ratio between the inspection station and the service processors.

Table 6. Simulation of 10-run results for the OEM factory case (H = heuristic, F = FIFO; values in parentheses are standard deviations).

OEM case          Avg. time in queue (h), H   Avg. time in queue (h), F   Flow time (h), H   Flow time (h), F   Avg. queue length, H   Avg. queue length, F
Low pressure      47.2 (11.75)                74.6 (16.75)                50.8 (18.45)       78.2 (25.87)       30.0 (18.42)           47.3 (25.85)
Medium pressure   297.3 (12.14)               322.1 (13.42)               300.9 (12.14)      325.7 (13.42)      319.1 (13.62)          345.1 (15.80)
High pressure     415.4 (16.63)               431.5 (18.7)                418.9 (16.64)      435.1 (18.89)      622.2 (18.42)          644.9 (18.91)


Fig. 12. Solid area vs. total area of a berth allocation case.

To prevent the inspection center from becoming a bottleneck in the system, we propose two solutions. One is to increase the resource capacity, for example by increasing the number of inspection machines (in the berth allocation case) or the number of quality-check workers (in the OEM case); correspondingly, we evaluate the performance of the inspection center for different inspection service rates. Containers and cartons are the entities going through the inspection operations in the berth allocation and OEM factory cases, respectively. Therefore, for MTSI the simulation entities are containers (berth allocation) and cartons (OEM factory) instead of the ships and orders used for MTS, for which an aggregated simulation process is used.

In the current simulation system there are twelve identical processors and one check station. If the service rate of the check station were 12 times the service rate of a processor, the check station obviously could not be the bottleneck of the system. However, for a multiprocessor system there is always some empty area in the graph representation of MTS, as shown in Fig. 3; this indicates that, in the example of 12 processors and one inspection station, the inspection service rate does not need to be 12 times the processor service rate. To estimate the required service rate of the inspection center, we define two quantities. One is the total area, which equals the product of the makespan and the total number of processors; the other is the solid area, which is the sum over all jobs of job size times processing time. The ratio of the solid area to the total area gives the approximate inspection rate required for the system to run smoothly: the service rate of the inspection center should be around 12 q rp, where q is the ratio between the solid area and the total area and rp is the service rate of each processor. On this basis we can find a proper range for the ratio q and for the inspection center's service rate.

We studied the berth allocation and OEM factory cases over a planning horizon of 30,000 h (approximately 3.5 years), as mentioned in Section 5, assuming that every processor always has the same service rate. The results under the deterministic processing time assumption provide a base for later comparison with the stochastic case. During the simulation there are over 5000 job arrivals, and the ratio tends to converge when the number of jobs reaches 3000. Figs. 12 and 13 plot the ratio between the solid area and the total area against the number of jobs: the ratio is erratic at the very beginning and converges as the number of jobs increases, to 0.56 for the berth allocation case and 0.48 for the OEM factory case. Theoretically, if the inspection service rate is around 0.56 × 12 = 6.72 times the processor service rate in the berth allocation case, or 0.48 × 12 = 5.76 times in the OEM factory case, then the MTSI system will run smoothly.

Figs. 14 and 15 (see footnote 4) show the decrease in throughput time and the increase in the number of finished jobs as the capacity of the inspection center is increased in the stochastic case.
For instance, we increase the number of inspection stations in the container terminal for the berth allocation case and the number of quality inspection workers for the OEM factory case.

Footnote 4: In Figs. 14 and 15, (R) denotes the resource-increasing scenario and (S) the service-rate-increasing scenario.


Fig. 13. Solid area vs. total area of an OEM factory case.

Fig. 14. Job finished % and throughput time % improvement in berth allocation case.

Fig. 15. Job finished % and throughput time % improvement in OEM factory case.


In the figures, job % is defined as the number of jobs finished with a given inspection capacity divided by the number of jobs requested, and throughput time % is defined as the throughput time with a given inspection capacity divided by the throughput time for the case in which the service rate of the inspection station equals the processor rate. The figures show that the simulation results differ somewhat from the analytical results: in the berth allocation case the resource capacity or service rate needs to reach 9 times the original value (instead of 6.72), and in the OEM factory case it needs to reach 10 times the original value (instead of 5.76), in order to match the service rates of the processors. The difference arises because the analytical results are based on deterministic processing times for each processor.

In practice it is hard to increase the inspection resources to 8 times the original level. In a harbor area, for example, land is limited and most of it is planned for commercial activities such as transportation and storage; expanding inspection capacity by adding inspection centers would interfere with these commercial activities and would be very expensive. When there is not enough inspection capacity, sampling is another strategy. The sampling rate is determined by the inspection service rate and the processor rate; for instance, the sampling rate is c = ri / (12 q rp), where ri is the inspection rate.
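As a worked illustration of the relations used in this section (the converged ratios q are those reported above; the inspection capacity ri used in the sampling-rate example is hypothetical):

# Worked illustration of the Section 6.2 relations: the analytically required
# inspection rate is about 12*q*r_p, and with only r_i of inspection capacity
# the sampling rate is c = r_i / (12*q*r_p).

m, r_p = 12, 1.0                              # processors and (normalized) processor rate

for case, q in (("berth allocation", 0.56), ("OEM factory", 0.48)):
    print(f"{case}: required inspection rate ~ {m * q * r_p:.2f} x r_p")
# -> 6.72 x r_p and 5.76 x r_p, the analytical values quoted above

r_i, q = 3.0, 0.56                            # hypothetical available inspection capacity
print(f"sampling rate c = {r_i / (m * q * r_p):.2f}")   # ~0.45: inspect about 45% of containers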

7. Conclusions and future research

In this paper, we studied embedded simulation techniques that combine the heuristic and FIFO algorithms for the MTS and MTSI problems. It is difficult to implement embedded simulations or scheduling algorithms in most commercial simulation software, even though certain object-oriented simulation packages provide an interface for connecting user-defined functions. AweSim is a process-flow simulation package that provides an 'Event' node, in which we implemented optimization functions written in C++; these functions were combined with the simulation approach to implement the MTS and MTSI systems. Because of the nature of MTS and MTSI, in which one job occupies several consecutive processors simultaneously and preemption is not allowed, we developed the copy entity technique in the embedded system to facilitate the simulation. Based on this technique, we implemented both the heuristic and FIFO policies for MTS problems. The simulation results indicate that the heuristic algorithm performs better than the FIFO policy for both the berth allocation and OEM factory cases.

Many MTS algorithms have been developed for deterministic processing time cases, which rarely occur in the real world. Combining the optimization and simulation approaches gives us the ability to extend deterministic algorithms to deal with stochastic processing time cases. Motivated by security checks in terminal operations and quality checks in factory activities, we extended the MTS model to the MTSI model and evaluated the service rate the inspection center needs for the whole system to run smoothly. The simulation results show that the service rate of the security inspection center should be around 9 times the processor service rate for the berth allocation case, and that the service rate of the quality inspection center should be about 10 times the processor service rate for the OEM factory case. In practice it is usually hard to achieve such high service rates; the solution is to increase the inspection rates or to provide more inspection centers. In a real-world problem, this simulation technique could be used to balance the system by adjusting certain parameters, including the sampling rate at the inspection stations.

In general, this study has provided a way to link optimization and simulation approaches to simulate the berth allocation and OEM factory cases. In future research, we will perform mathematical modeling and analysis for MTS and MTSI problems, including the case in which each processor has a different work capacity. We will also consider a cost analysis of the whole process and explore ways to allocate resources optimally.

Acknowledgments

This research was partially supported by the U.S. Department of Transportation. The authors would also like to thank the editor and the referees for their valuable comments and suggestions.


References

Alattar, M. A., Karkare, B., & Rajhans, N. (2006). Simulation of container queues for port investment decisions. In The sixth international symposium on operations research and its applications (ISORA'06) (pp. 155–167).
Banks, J. (1998). The future of simulation software: A panel discussion. In Proceedings of the 1998 winter simulation conference (pp. 1681–1687).
Caramia, M., Dell'Olmo, P., & Iovanella, A. (2005). Lower bound algorithms for multiprocessor task scheduling with ready times. International Transaction in Operations Research, 12, 481–508.
Chen, J., & Lee, C. Y. (1999). General multiprocessor task scheduling. Naval Research Logistics, 46, 57–74.
Drozdowski, M. (1996). Scheduling multiprocessor tasks – an overview. European Journal of Operational Research, 94, 215–230.
El Sheikh, A. A., Paul, R., Harding, R. J., Harding, A. S., & Balmer, D. W. (1987). A microcomputer-based simulation study of a port. Journal of the Operational Research Society, 38(8), 673–681.
Guan, Y., & Cheung, R. K. (2004). The berth allocation problem: Models and solution methods. OR Spectrum, 26, 75–92.
Guan, Y., Xiao, W. Q., Cheung, R. K., & Li, C. L. (2002). A multiprocessor task scheduling model for berth allocation: Heuristic and worst-case analysis. Operations Research Letters, 30, 343–350.
Huang, J., Chen, J., Chen, S., & Wang, J. (2007). A simple linear time approximation for multi-processor job scheduling on four processors. Journal of Combinatorial Optimization, 13, 33–45.
Jansen, K., & Porkolab, L. (2002). Polynomial time approximation schemes for general multiprocessor job shop scheduling. Journal of Algorithms, 45, 167–191.
Jansen, K., & Porkolab, L. (2005). Approximate solutions in linear time. SIAM Journal on Computing, 35, 519–530.
Lee, C. Y., & Cai, X. (1999). Scheduling one and two-processor tasks on two parallel processors. IIE Transactions, 31, 445–455.
Lee, Y. H., Cho, M. K., & Kim, Y. B. (2002). A discrete-continuous combined modeling approach for supply chain simulation. Simulation, 78(5), 321–329.
Li, C. Y., Lei, L., & Pinedo, M. (1997). Current trends in deterministic scheduling. Annals of Operations Research, 70, 1–41.
Roach, A. (2003). Container and port security: A bilateral perspective. The International Journal of Marine and Coastal Law, 18(3), 341–361.
Seifert, R. W., Kay, M. G., & Wilson, J. R. (1998). Evaluation of AGV routing strategies using hierarchical simulation. International Journal of Production Research, 36, 1961–1976.
Soundarapandyan, D. K. (2004). Use of embedded simulation in order promising. M.Sc. thesis. University of Oklahoma.
Tahar, R. M., & Hussian, K. (2000). Simulation and analysis for the Kelang container terminal operations. Logistic Information Management, 13(1), 14–20.
Wu, S. D., & Wysk, R. A. (1989). An application of discrete-event simulation to on-line control and scheduling in flexible manufacturing. International Journal of Production Research, 27, 1603–1623.
Ying, K. C., & Lin, S. W. (2006). Multiprocessor task scheduling in multistage hybrid flow-shops: An ant colony system approach. International Journal of Production Research, 44(16), 3161–3177.