Computers Opns Res. Vol. 17, No. 2, pp. 163-175, 1990. Printed in Great Britain. All rights reserved.
0305-0548/90 $3.00 + 0.00 Copyright © 1990 Pergamon Press plc
A DYNAMIC PROGRAM WITH FATHOMING AND DYNAMIC UPPER BOUNDS FOR THE ASSEMBLY LINE BALANCING PROBLEM

FRED F. EASTON*

Quantitative Methods Department, School of Management, Syracuse University, Syracuse, NY 13244-2130, U.S.A.

(Received October 1988; revised June 1989)
Scope and Purpose—Minimizing the number of assembly line work stations needed to achieve a given production rate is a difficult combinatorial optimization problem. Dynamic programs developed for such purposes are usually burdened with excessive computer memory requirements and computation time. Other researchers have suggested that relaxation and fathoming techniques may be used with dynamic programs to "prune" unneeded partial solutions, mitigating storage and computation problems. We believe such methods may have limited effect when the fathoming tests are based on a nonoptimal upper bound, because relatively few partial solutions are eliminated. However, the rate at which partial solutions are fathomed during the recursion provides a running indication of the quality of the upper bound, thereby suggesting when an improved upper bound is likely to exist. The purpose of this paper is to determine how storage and computational effort are affected when an improved upper bound is sought during the recursion, in response to generally unsuccessful fathoming attempts. The conventional dynamic program and two dynamic programs with relaxation and fathoming are applied to 60 well-known assembly line balancing problems. We wish to establish whether a dynamic upper bound approach is more effective than one in which the upper bound is held static throughout the recursion and, indeed, whether either offers any advantage over the conventional dynamic program for the assembly line balancing problem.

Abstract—It has been suggested that relaxation and fathoming methods can be used to reduce the state space of certain dynamic programs. This paper applies these techniques to the dynamic program for assembly line balancing and shows that with an optimal upper bound, a substantial reduction in state space is possible. With a static nonoptimal upper bound, however, the approach is found to offer little improvement over conventional dynamic programming.
To achieve more consistent results, a dynamic upper bound procedure is proposed. Applied to a well-known set of assembly line balancing problems, the performance of the proposed algorithm was found comparable to a state-of-the-art integer programming method. The approach appears generalizable to dynamic programs with a similar structure.
1. INTRODUCTION

For the production of discrete products such as home appliances, automobiles, modular housing and even transport aircraft, the well-known assembly line represents the epitome of manufacturing efficiency. On an assembly line, the total work content for the assembly is divided, more or less evenly, among the work stations that comprise the line. Each station is staffed by one or more operators and as an item moves down the line, it becomes incrementally more complete. When the line is rigidly "paced", finished products come off the end of the line at a fixed interval called cycle time. The inverse of cycle time is the output rate for the line. Usually there are restrictions on how the assembly tasks can be divided among the work stations. The simplest case assumes a single operator at each station, and limits the total amount of work assigned to any one station to no more than cycle time. Technological restrictions on the order in which the tasks are performed, called precedence constraints, may also be present (see Fig. 1). Because wage costs, in-process inventory costs, and floor space are directly related to the number of work stations on the line, the number of work stations is often used as a surrogate measure for total production costs. An important design problem is to allocate the assembly tasks among the minimum number of stations possible, consistent with the desired output rate and precedence constraints. This design problem is referred to as the assembly line balancing problem, or the ALB.
*Fred F. Easton is an Assistant Professor of Operations Management at Syracuse University. His interests include scheduling issues in both the manufacturing and service sectors. He received his PhD in operations management from the University of Washington.
Fig. 1. Precedence graph for an example problem.
A typical definition for the ALB is as follows. Given a set of assembly tasks A (with a real-valued time function t and partial ordering restrictions defined on A) and cycle time (C), the assembly line balancing problem is one of partitioning A into the minimum number of subsets A_i, i = 1, ..., Z, subject to the precedence and cycle time constraints. Mathematically, we seek to:

Minimize Z    (1)

Subject to:

if r ∈ A_x and s ∈ A_y, and r must be completed before s, then x ≤ y ≤ Z;    (2)

Σ_{j ∈ A_i} t_j ≤ C,    i = 1, ..., Z.    (3)
Two of the more effective optimization techniques for solving the ALB are Talbot and Patterson's [1] integer programming method (IPALB) and Lawler's [2] dynamic programming method (DPALB). IPALB has been shown capable of quickly solving line balancing problems as large as 111 tasks [1], with computer storage proportional to the number of assembly tasks (N). However, the time needed to solve larger problem instances may become excessive when cycle time is less than 125% of the maximum task time. DPALB has been applied successfully to ALB problems with as many as 70 tasks [3]. It has also been successfully adapted for ALB extensions such as the stochastic line balancing problem [4], suggesting the basic technique is fairly robust. However, its computational complexity has been shown [3] to be O(MN²) and its computer storage requirements proportional to M, where M is defined as the number of states in the dynamic program. Because M can vary from N to 2^N, depending on the precedence relationships of the problem [5], it is clear that ALB dynamic programs of sufficiently large M will overwhelm even the most powerful computers. But DPALB is a shortest path formulation with additive costs. Morin and Marsten [6] proposed the use of relaxation and fathoming techniques for dynamic programs with similar structures, potentially reducing the storage and computational requirements. We found that when applied to the ALB problem the effectiveness of this approach, using a static upper bound, is directly related to the quality of that bound. In particular, if the upper bound is not optimal the use of relaxation and fathoming offers little advantage over conventional dynamic programming, because relatively few states are "pruned". This paper presents a dynamic program with relaxation and fathoming that relies on a dynamic, rather than a static, upper bound.
Using the success rate of the fathoming tests as an indication of the quality of the upper bound, the method searches for an improved incumbent solution if it appears that one exists. Whenever indicated, it attempts a heuristic completion from a promising state, using the partial solution for that state to enhance heuristic performance. Implemented with this dynamic upper bound procedure, the proposed method is intended to provide more consistent reductions in storage requirements and computational effort than one based on a static upper bound. The procedure appears generalizable to dynamic programs with structures similar to the ALB. In Section 2 the DPALB algorithm is reviewed. In Section 3, fathoming and dynamic upper bound methods are adapted for the ALB dynamic program. Constraint relaxation and fathoming criteria are described in Section 3.1. Section 3.2 discusses the implementation of the dynamic upper bound procedure. Section 3.3 illustrates the proposed algorithm with an example, contrasting solutions based on static and dynamic upper bounds. In Section 4, the comparative performance
of the proposed algorithm is assessed by examining the time and storage needed to solve the Talbot and Patterson [1] problem set. Results are provided for the conventional dynamic program, the static upper bound dynamic program with fathoming, and the proposed method. Section 5 provides conclusions regarding the use of the method.

2. DYNAMIC PROGRAMMING SOLUTION FOR THE ALB
The universe of feasible solutions for the ALB can be viewed as an acyclic network (S, A), with each node s ∈ S representing one of M unique feasible subsets. A feasible subset {s} is a subset of assembly tasks that can be executed in some order without prior execution of any task not a member of the subset [8]. That is, if a task is a member of a feasible subset, all predecessors of that task are also members. A directed arc ({s − j}, {s}) ∈ A represents a single task j connecting node {s − j} with node {s}, where {s − j} represents the feasible subset {s} with task j removed. Arc cost is either task time or task time plus idle time at the current work station, depending on whether task j "fits" in the time remaining at the current work station. Each path through the network from node {s₀} (the null feasible subset) to node {s_f} (the feasible subset containing all the assembly tasks) represents a precedence feasible sequence of the assembly tasks. The collection of all paths through the network corresponds to every precedence feasible sequence [8]. Each such sequence has an associated cost which can be expressed as either the customary minimum time needed to complete the task sequence, or the minimum number of work stations. An optimal solution to the problem is the shortest (in terms of either measure) path from {s₀} to {s_f}. Figure 2 shows an example of a feasible subset network for the precedence graph described in Fig. 1. DPALB identifies the shortest path through the network by finding the shortest path from {s₀} to every other node in the network. One efficient implementation of DPALB is Lawler's [2] "reaching" technique (similar to Dijkstra's [7] shortest path algorithm, with a cardinality order feasible subset generation procedure [3]). The dynamic program DPALB can be described as follows: F(s) := minimum time to execute the feasible subset {s}, where:

F(s) = min_{j ∈ s; {s − j} a feasible subset} {F(s), F(s − j) + Δ[F(s − j), t_j]},    (4)

with F(s) initialized to ∞, j an assembly task, t_j its execution time, K MOD L the integer remainder after K is divided by L, and

Δ[F(s − j), t_j] = { t_j,                          if C − [F(s − j) MOD C] ≥ t_j;
                   { t_j + C − [F(s − j) MOD C],   otherwise.
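The "reaching" recursion above can be sketched in a few lines of Python. This is a minimal illustrative implementation, not the author's FORTRAN code; the function and variable names (`dp_alb`, `delta`, `times`, `preds`) are assumptions for the example. Feasible subsets are generated in cardinality order, as in Lawler's technique.

```python
import math

def delta(F_prev, t, C):
    """Arc cost Δ[F(s-j), t_j]: t_j if task j fits in the time remaining
    at the current station, else t_j plus the idle time that closes it."""
    remaining = C - (F_prev % C)
    return t if remaining >= t else t + remaining

def dp_alb(times, preds, C):
    """Reaching over feasible subsets in cardinality order.
    times: {task: duration}; preds: {task: set of immediate predecessors};
    C: cycle time.  Returns (F(s_f), number of stations = ceil(F(s_f)/C))."""
    tasks = set(times)
    F = {frozenset(): 0}          # F(s0) = 0 for the null feasible subset
    stage = [frozenset()]         # feasible subsets of cardinality K-1
    for _ in range(len(tasks)):
        nxt = {}
        for s in stage:
            for j in tasks - s:
                if preds[j] <= s:                 # {s + j} is feasible
                    sj = s | {j}
                    cost = F[s] + delta(F[s], times[j], C)
                    if cost < nxt.get(sj, math.inf):
                        nxt[sj] = cost            # keep the shortest path
        F.update(nxt)
        stage = list(nxt)
    full = frozenset(tasks)
    return F[full], math.ceil(F[full] / C)
```

For instance, with three tasks of times 6, 7 and 5 (task 1 preceding tasks 2 and 3) and C = 10, no pair of tasks fits in one station, so the recursion accumulates idle time at each station boundary and three stations result.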
The states in the formulation are the feasible subsets {s} ∈ S. The stages of the algorithm correspond to the number of elements in a feasible subset. The well-known "curse of dimensionality" for dynamic programming is manifested by the need to access information about states {s − j}. One noteworthy feature of this approach is that F(s) can be computed with only the current and previous stage's feasible subsets stored in core memory. However, F(s_f) is the minimum cost of a path through the feasible subset network: it does not reveal the sequence of tasks that comprise the path. Identifying the shortest path requires access to the decisions made for every feasible subset. Sufficient space must thus be provided in low-speed memory to store every (M) feasible subset generated.

3. DYNAMIC PROGRAMMING WITH FATHOMING AND UPPER BOUND IMPROVEMENT
An optimal solution to the assembly line balancing problem can be characterized with a single shortest path through the network, so finding the shortest path from {s₀} to all other feasible subsets may solve a harder problem than necessary. Morin and Marsten [6] made a similar observation about dynamic programs such as the "traveling salesman" problem and the "nonlinear knapsack" problem, and proposed the use of relaxation and fathoming techniques to "prune" unneeded states. Briefly, their procedure determined a lower bound on the cost to complete the
Fig. 2. Feasible subset network for the example problem.
process from a state {s} by relaxing certain problem constraints. This bound was compared to the cost of a known feasible solution (the incumbent), held static throughout the procedure. Unless the bound showed the potential to improve on the cost of the incumbent, the state could be considered fathomed and thus ignored in all subsequent operations. The benefits of this approach include the potential to reduce both the storage requirements and computational effort, because fathomed states are neither retained nor used to generate new states at the next stage of the algorithm. These benefits are obtained with a slight increase in the effort needed to evaluate each state that is generated.

3.1. Dynamic program with relaxation and fathoming for the ALB
The method relies upon the existence of a known feasible solution for the problem. For the ALB the cost of such a solution, expressed as U work stations, can be obtained with any of a number of heuristic solution procedures [10]. Given the cost of the incumbent and cycle time (C), a property of the cost of a shortest path through the network of feasible subsets, F(s_f), is that:

F(s_f) ≤ CU.    (5)
A simple lower bound for the additional time needed to complete the assembly from state {s} is the sum of the task times for those assembly tasks not members of state {s}. Define a lower bound on the additional cost needed to complete the assembly from state {s} as l(s), where:

l(s) = Σ_{j ∉ {s}} t_j.    (6)
If state {s} is on a shortest path through the network, and a feasible solution with U stations is known to exist, then for cycle time C we must have:

F(s) + l(s) ≤ CU.    (7)
Morin and Marsten [6] showed in their Proposition 1.1 that if a state {s} fails to satisfy relationship (7), it is fathomed and need not be considered further. However, we assume a feasible solution with cost U is already known, so our interest is with states that have completions potentially less costly than CU. Morin and Marsten's Proposition 1.2 provides a somewhat stronger condition for such circumstances: any state satisfying (7) as an equality can also be considered fathomed. Using the number of work stations as a measure of solution cost, an even stronger fathoming condition applies to the ALB. Any ALB solution that improves on the incumbent must have fewer than U work stations. Therefore, it follows from Morin and Marsten's Proposition 1.2 that an
unfathomed feasible subset must satisfy F(s) + l(s) ≤ C(U − 1), or equivalently:

⌈[F(s) + l(s)]/C⌉ < U,    (8)

where ⌈q⌉ is the smallest integer greater than or equal to q. Finally, (8) suggests the optimality of an incumbent may sometimes be verified without completing the recursion. Define S'(K) as the set of all K-element feasible subsets satisfying (8). As argued in Morin and Marsten's Corollary 1.1, the incumbent solution is optimal if at some stage K of the recursion, S'(K) = ∅. A dynamic program with fathoming and static upper bounds for the assembly line balancing problem offers several potential advantages over the conventional dynamic program for assembly line balancing. First, it is unnecessary to store fathomed feasible subsets. A second advantage is that the fathomed feasible subsets will not be used to generate new feasible subsets, possibly reducing the computational effort. Third, if the optimality of the heuristic can be confirmed, it is unnecessary to backtrack through the set of decisions recorded for each feasible subset to identify the optimal station assignments. Finally, if the cost of the incumbent equals the theoretical minimum, recursion is unnecessary. The lower bound for the cost to complete the process (l(s)) involves enumerating the assembly tasks not members of the state {s} and summing their times. With negligible additional effort, it is possible to improve this bound by adding the minimum possible idle time for the most recently opened work station. That is, we can re-define l(s) as:

l(s) ← l(s) + SLK(s),    (9)

where:

SLK(s) = { C − F(s) MOD C,   if F(s) MOD C + min_{(s,j) feasible} {t_j} > C;
         { 0,                otherwise.
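The strengthened lower bound (9) and the fathoming test (8) can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; the names `lower_bound` and `unfathomed` are assumptions, and `assigned` stands for the task set of state {s}.

```python
import math

def lower_bound(F_s, assigned, times, preds, C):
    """Sketch of l(s) per (9): total time of unassigned tasks, plus the
    slack SLK(s) of the open station when no eligible next task fits."""
    remaining_tasks = [j for j in times if j not in assigned]
    l = sum(times[j] for j in remaining_tasks)          # the bound in (6)
    slack = C - (F_s % C)                               # time left at the open station
    eligible = [times[j] for j in remaining_tasks if preds[j] <= assigned]
    if eligible and min(eligible) > slack:              # no next task fits: slack is idle
        l += slack                                      # SLK(s) per (9)
    return l

def unfathomed(F_s, l_s, U, C):
    """Fathoming test (8): keep {s} only if some completion from it could
    use fewer than U stations."""
    return math.ceil((F_s + l_s) / C) < U
```

With three unassigned-task times 6, 7 and 5 (task 1 assigned, F(s) = 6, C = 10), the 4 units of slack cannot hold either eligible successor, so they are added to the bound; the state is then fathomed against an incumbent of U = 3 but survives against U = 4.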
In (9), we reason that if none of the tasks which can be sequenced immediately after the tasks in {s} will "fit" in the time remaining at the current work station (C − F(s) MOD C), this time will be idle and can be added to the lower bound on the cost to complete the process. A final consequence of the fathoming procedure is that the fathoming tests increase the computational effort of the algorithm. Assuming l(s) is obtained by methods which do not add to the storage burden of the algorithm, the complexity of computing l(s) is O(N). Since l(s) is computed for all states, the dynamic program with fathoming for the ALB has a computational complexity of O(M'N³), where M' is the number of unfathomed feasible subsets generated [3]. In contrast, the DPALB has a computational complexity of O(MN²), where M is the number of feasible subsets generated by DPALB [3]. An algorithm (SFDPALB) to implement the static upper bound dynamic program with fathoming, with lower bounds based on (9), is described in Fig. 3. SFDPALB is applied to the problem described in Figs 1 and 2 in Section 3.3.

3.2. Upper bound improvement
With a nonoptimal incumbent solution the performance of the above method may be seriously degraded, because relatively few states satisfy (8). Although Morin and Marsten [6] suggest that substantial improvement over conventional dynamic programming may be possible with a nonoptimal incumbent, it appears that the storage requirements of a dynamic program with fathoming could approach those of ordinary dynamic programming if most fathoming attempts are unsuccessful. Further, a poor incumbent may result in an overall computational burden even greater than the conventional method. Thus an effort should be made to furnish a high quality incumbent solution to the procedure. One common strategy to obtain a good incumbent is to initially apply many different heuristics, with the hope that at least one will furnish an effective upper bound [11]. Issues pertaining to the number and types of heuristics to employ for ALB problems have been discussed in [12] and [13]. These decisions involve trade-offs between computational effort and the statistical likelihood of finding a better solution. The principal limitation of this strategy is that unless the cost of the
Let U := cost of incumbent solution
Let l(s) = Σ_{j ∉ s} t_j + SLK(s)
Initialize: S'(0) = {∅}
For K = 1 to N do
    S'(K) = ∅
    For j = 1 to N do
        For each s ∈ S'(K−1) do
            If (s, j) is a feasible subset then
                Perform the DP recursion
                If F(s, j) + l(s, j) < CU then S'(K) ← S'(K) ∪ {(s, j)}
                else continue
        Next s
    Next j
    If S'(K) = ∅ then STOP (U is optimal)
    else continue
Next K

Fig. 3. The SFDPALB algorithm.
current best heuristic solution equals a lower bound, it is usually impossible to know (short of solving the dynamic program) whether the application of another heuristic will have any potential benefit. And of course, even if a better solution exists there is no guarantee that it will be found by applying some other heuristic. Thus the reward for solving the problem with many different heuristics is at best uncertain. An alternate strategy is to furnish the dynamic program with any incumbent and use the relative success in fathoming states as an indicator of incumbent quality. When relatively few of the states examined are fathomed, there is reason to suspect the upper bound can be improved. If such a determination is made, an heuristic can be applied to complete the solution from some promising unfathomed state {s}. One advantage to this approach is that additional heuristic solutions are generated only when it appears a solution better than the incumbent exists. Another advantage is that the heuristic chosen to complete the solution from state {s} has the optimal partial solution for state {s} at its disposal, reducing the solution space for the problem. If {s} is on a shortest path, any heuristic completion procedure starting from {s} is statistically more likely to produce an optimal solution than when the procedure starts from scratch, simply because there are fewer alternatives to consider. While such a strategy may ultimately attempt many heuristic completions, successive attempts are made on a somewhat more enlightened basis than in the first strategy. These advantages lend support for a dynamic upper bound procedure for dynamic programs with fathoming. Several implementation issues must be considered. Since each completion attempt adds to the computational burden, these attempts should be very selective. An obvious strategy is to restrict completion attempts to unfathomed feasible subsets.
Because the number of unfathomed feasible subsets may be quite large, additional restrictions may be needed to limit the number of completion attempts to a reasonable level. It is also necessary to select an appropriate completion heuristic. There is some justification for using an heuristic procedure different from that used to obtain the original upper bound, to further increase the likelihood that a better incumbent will be found (if one exists) [14]. Ideally, both heuristics should be fast and be known to yield good results. Finally, a threshold fathoming "success rate" must be specified to indicate when the dynamic upper bound improvement procedure should be activated. If set too high, the algorithm may needlessly search for an improved upper bound which does not exist. If set too low, the algorithm may have to evaluate a substantial fraction of M before identifying the optimal solution. After some preliminary experimentation with ALB test problems, it was found that the number of completion attempts could be controlled adequately when restricted to those unfathomed states
Let U := cost of current incumbent solution
Let U' := cost of heuristic completion
Let l(s) = Σ_{j ∉ s} t_j + SLK(s)
Initialize: S'(0) = {∅}, fathomed = 0, tries = 0, threshold = 0.05
For K = 1 to N do
    S'(K) = ∅
    For j = 1 to N do
        For each s ∈ S'(K−1) do
            If (s, j) is a feasible subset then
                Perform the DP recursion
                tries ← tries + 1
                If F(s, j) + l(s, j) < CU then S'(K) ← S'(K) ∪ {(s, j)}
                else fathomed ← fathomed + 1
                If (fathomed/tries) < threshold, (s, j) ∈ S'(K), and Δ[F(s), t_j] > t_j then
                    Determine U' from (s, j)
                    If U' < U then U ← U', fathomed ← 0, and tries ← 0
                else continue
        Next s
    Next j
    If S'(K) = ∅ then STOP (U is optimal)
    else continue
Next K

Fig. 4. The DFDPALB algorithm.
which required a new work station to execute its last task. It was also found that a threshold fathoming success rate of 5% yielded a reasonable trade-off between unnecessary completions and the ability to identify an improved solution. This value may not be appropriate for all ALB problems, however. A dynamic programming algorithm with fathoming and dynamic upper bounds (DFDPALB) is described in Fig. 4. To illustrate its application, it will be applied in the next section to an example problem.

3.3. An example

To illustrate the application of SFDPALB and DFDPALB, both will be used to solve the example ALB problem described in Figs 1 and 2. With a cycle time of C = 20, the solution to this problem by the conventional dynamic program (DPALB) requires the generation and evaluation of M = 15 feasible subsets (i.e. the number of states in Fig. 2). Part A of Table 1 shows the solution to this problem using SFDPALB. To illustrate the impact of a static nonoptimal upper bound, we set U = 6, one greater than the optimal solution. With a static upper bound of 6, none of the feasible subsets evaluated satisfied expression (8). Since none were fathomed, the storage requirements of SFDPALB are identical to conventional dynamic programming. However, the additional effort to perform the fathoming tests increased the overall computational burden. Part B of Table 1 shows the solution to the same problem by DFDPALB. Again the initial upper bound is assumed to be 6. Using a threshold success rate of 5%, a heuristic completion was attempted after the feasible subset {A} was evaluated at stage 1 and found to be unfathomed. Using the starting solution implied by the least cost path to state {A}, ALB heuristics such as those developed by Hoffman [15] or Helgeson and Birnie [16] provide completions requiring 5 work stations.
Based on the improved incumbent solution, the next feasible subset generated, {AB}, was fathomed, because by (8) there could be no completion from {AB} requiring fewer than U = 5 work stations. The feasible subset {AB} was the only feasible subset with 2 elements, so at the end of stage 2 the set of unfathomed 2-element feasible subsets S'(2) is empty. With no feasible subsets available to
Table 1. Example solutions to the example problem by SFDPALB and DFDPALB

A. SFDPALB solution to example problem with C = 20, U = 6 (static)

K  s         F(s)  Σ t_j + SLK(s) = l(s)  ⌈[F(s)+l(s)]/C⌉  Fathomed?  Fathom rate  S'(K)
1  A         11    64+9=73                5                No         0/1          A
2  AB        37    47+3=50                5                No         0/2          AB
3  ABC       49    38+0=38                5                No         0/3          ABC
3  ABD       45    42+0=42                5                No         0/4          ABC,ABD
4  ABCD      54    33+6=39                5                No         0/5          ABCD
4  ABCF      57    30+3=33                5                No         0/6          ABCD,ABCF
5  ABCDE     72    21+0=21                5                No         0/7          ABCDE
5  ABCDF     68    25+0=25                5                No         0/8          ABCDE,ABCDF
5  ABCDH     70    30+0=30                5                No         0/9          ABCDE,ABCDF,ABCDH
6  ABCDEF    80    13+0=13                5                No         0/10         ABCDEF
6  ABCDEG    75    18+5=23                5                No         0/11         ABCDEF,ABCDEG
6  ABCDFH    78    15+2=17                5                No         0/12         ABCDEF,ABCDEG,ABCDFH
7  ABCDEFG   83    10+0=10                5                No         0/13         ABCDEFG
7  ABCDEFH   90    3+0=3                  5                No         0/14         ABCDEFG,ABCDEFH
8  ABCDEFGH  93    0+0=0                  5                No         0/15         ABCDEFGH

B. DFDPALB solution to example problem with C = 20, U = 6 (dynamic: heuristic completion initiated when cumulative fathoming success rate falls below 5%)

K  s     F(s)  Σ t_j + SLK(s) = l(s)  ⌈[F(s)+l(s)]/C⌉  Fathomed?  Fathom rate  Completion results
1  A     11    64+9=73                5                No         0/1          Yes; U = 5
2  AB    37    47+3=50                5                Yes        1/1
3  None  stop
generate 3-element feasible subsets, the algorithm terminated. By Morin and Marsten's [6] Corollary 1.1, the current incumbent was accepted as optimal. In this case, the storage requirements were 1/15 that of DPALB and SFDPALB, with a proportional reduction in the computational effort. These examples illustrate the performance degradation experienced when an optimal incumbent is unavailable to the dynamic program with fathoming. They also demonstrate the potential of relaxation and fathoming techniques to greatly reduce the dynamic program's state space when an optimal incumbent is provided. To address the questions of whether the dynamic upper bound procedure generally out-performs one based on a static upper bound, and indeed whether either provides any significant improvement over the conventional dynamic programming approach, all three algorithms were applied to a well-known set of ALB test problems. The results appear in the next section.

4. COMPUTATIONAL RESULTS
DPALB, SFDPALB, and DFDPALB were programmed in VAX FORTRAN to run on a VAX 8810 operating under VMS 4.6. Each program was designed to store a maximum of 10,000 feasible subsets in high speed memory, and to run for a maximum of 60 CPU sec. To facilitate re-creating the optimal solution, all feasible subsets generated by the procedures were stored in low speed memory. For SFDPALB and DFDPALB, the initial heuristic solution was based on a modification of the Ranked Positional Weight heuristic [16]. The modification employed Helgeson and Birnie's positional weights to assign priorities for the tasks which can next be assigned to the current station, updating the list of feasible next tasks each time an assignment is made. Heuristic completions were carried out with a modification of the Hoffman [15] method, chosen primarily for its ability to produce very good solutions [10]. All three algorithms were applied to each of the 60 problems in Talbot and Patterson's [1] problem set. Program output included the theoretical minimum number of stations for the problem instance (b), the cost of the initial heuristic solution (U), the cost of the optimal solution (Z), and the optimal station assignments. The program also reported the number of unfathomed feasible subsets generated, the maximum number held in high-speed storage, CPU time (including I/O) to solve each problem, and, for DFDPALB, the number of completion attempts. The results are reported in Table 2. Overall, we found that DPALB was able to solve only 30 of the 60 instances within the parameters established for the experiments. By contrast, SFDPALB solved 55 of the problems and DFDPALB solved all of them. This suggests that relaxation and fathoming techniques expand the realm of ALB problems that can be solved with dynamic programming methods.
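The modified Ranked Positional Weight priority rule described above can be sketched as follows. This is an illustrative Python sketch, not the authors' FORTRAN routine; the names `positional_weight` and `rpw_balance` are assumptions, and the sketch assumes every task time fits within the cycle time.

```python
def positional_weight(j, times, succs):
    """Helgeson-Birnie positional weight: t_j plus the times of all
    (transitive) successors of j.  succs: task -> immediate successors."""
    seen, stack, w = set(), [j], 0
    while stack:
        k = stack.pop()
        if k not in seen:
            seen.add(k)
            w += times[k]
            stack.extend(succs.get(k, ()))
    return w

def rpw_balance(times, preds, succs, C):
    """Sketch of the modified RPW heuristic: among the tasks whose
    predecessors are all assigned, repeatedly pick the one with the
    highest positional weight that fits the open station; the feasible
    next-task list is recomputed after each assignment.  Assumes every
    t_j <= C.  Returns the number of stations used."""
    weights = {j: positional_weight(j, times, succs) for j in times}
    assigned, stations, load = set(), 1, 0
    while len(assigned) < len(times):
        ready = [j for j in times if j not in assigned and preds[j] <= assigned]
        fitting = [j for j in ready if times[j] <= C - load]
        if not fitting:                       # close the station, open a new one
            stations, load = stations + 1, 0
            continue
        j = max(fitting, key=lambda k: weights[k])
        assigned.add(j)
        load += times[j]
    return stations
```

The station count this heuristic returns serves as the initial incumbent U supplied to the fathoming tests.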
To establish the extent to which relaxation and fathoming techniques improve the performance of dynamic programs for the ALB problem, Table 3 summarizes the aggregate performance of the three algorithms for the 30 problems where a direct comparison could be made. The first rows of Table 3 show that overall, the total number of feasible subsets evaluated, the maximum (summed for all 30 instances) number of feasible subsets stored in high speed memory, and total CPU time for the SFDPALB are roughly half that required for DPALB. With dynamic upper bounds (DFDPALB), the improvement is even greater. The reduced maximum storage requirements for the fathoming methods are a consequence of the reduced number of feasible subsets evaluated; the CPU time improvement is a consequence of the first two measures and the elimination of the need for backtracking (and recursion, if b = U) whenever the incumbent is confirmed as optimal. To lend support to our hypothesis that the static upper bound procedure is adversely affected when Z < U, the second part of Table 3 summarizes aggregate performance when the initial upper bound was not optimal. As suspected, SFDPALB prunes relatively few states and requires more solution time than DPALB. In contrast, DFDPALB showed substantial improvement over DPALB in all three measures. However, a better indication of the relative performance of SFDPALB and DFDPALB can be obtained by examining the 55 instances where a direct comparison of the two methods is possible. Aggregate results for these instances are presented in Table 4. The first pair of rows reveals that overall, DFDPALB significantly out-performs SFDPALB in all three categories, justifying the
Table 2. Comparative solution statistics for DPALB, SFDPALB, and DFDPALB. For each test problem (including Jackson, N = 11; Heskiaoff, N = 28; Sawyer, N = 30; Kilbridge & Wester, N = 45; and Arcus, N = 83 and N = 111, among others) and each cycle time, the table reports b, U, Z, the number of states generated (M or M'), the maximum storage required, the number of completion attempts (DFDP), and CP time.

b, theoretical minimum number of stations. U, upper bound on the number of stations determined by the heuristic. Z, the optimal number of stations. DPALB, conventional dynamic programming algorithm for the ALB. SFDP, dynamic program with fathoming with static upper bound (SFDPALB). DFDP, dynamic program with fathoming with dynamic upper bounds (DFDPALB). M, number of feasible subsets generated by conventional dynamic program. M', number of unfathomed feasible subsets generated by SFDP or DFDP. CP, time (in sec) to solve problem and identify optimal station assignments, including I/O, using a VAX 8810 running under VMS 4.6. NS, no solution because of insufficient memory. The number of states in the conventional dynamic program was determined with a lexicographic order generation technique; see Ref. [3].
Table 3. Aggregate performance comparison for DPALB, SFDPALB and DFDPALB*

                     M or M'   % DPALB   Maximum storage   % DPALB   CP time   % DPALB
Overall (N = 30)
  DPALB               29,676      -            5,108           -       23.19       -
  SFDPALB             15,591    52.54          2,730         53.45     14.63     63.09
  DFDPALB              8,458    28.50          1,609         31.50      9.11     39.24
Z < U (N = 5)
  DPALB                  829      -            1,401           -        5.98       -
  SFDPALB                766    92.43          1,276         91.08      7.31    122.24
  DFDPALB                 53     6.39            155         11.06      1.47     24.38

*Based on the 30 instances for which DPALB was able to obtain a solution.
Table 4. Aggregate performance comparison for SFDPALB and DFDPALB*

                       M'     % SFDPALB   Maximum storage   % SFDPALB   CP time   % SFDPALB
Overall (N = 55)
  SFDPALB           15,599        -            2,738             -        18.97       -
  DFDPALB            8,466      54.27          1,617           59.06      13.40     70.64
b < Z = U (N = 19)
  SFDPALB            7,936        -            1,462             -         9.78       -
  DFDPALB            7,936     100.00          1,462          100.00      10.05    102.76
Z < U (N = 3)
  SFDPALB            7,663        -            1,276             -         7.31       -
  DFDPALB              530       6.92            155           12.15       1.47     20.11

*Based on the 55 instances for which SFDPALB and DFDPALB were able to obtain a solution.
additional effort to search for an improved upper bound during the recursion. Of course, both methods benefit equally when the upper bound equals the theoretical minimum, because neither has to invoke its dynamic program. The second and third pairs of rows eliminate the effect of these instances, to provide a more meaningful comparison. The second pair of rows in Table 4 summarizes the aggregate results when the upper bound is optimal, but greater than the theoretical minimum. Because both methods use the same relaxation and fathoming criteria, the number of feasible subsets evaluated and the maximum storage requirements are identical. However, since DFDPALB searches for a better upper bound (unnecessarily, in this case), its total CPU time is about 3% greater than SFDPALB's. The advantages of dynamic upper bounds become apparent when the initial upper bound is not optimal. The third pair of rows in Table 4 presents the aggregate results for the instances where Z < U. Here the performance of DFDPALB is clearly superior, evaluating fewer than 7% of the feasible subsets examined by SFDPALB, requiring about 12% of its high speed storage, and achieving a solution in about 1/5 of the time. It should be noted that for each of the five problems that could not be solved by SFDPALB, the initial upper bound exceeded the optimal solution. As a consequence, the number of unfathomed states held in high speed storage exceeded the maximum allowable value and the procedure was forced to terminate. Because DFDPALB was able to improve the initial upper bound, it was able to exploit the relaxation and fathoming techniques to great effect. In fact, allowing for differences in computers and timing routines, the overall solution time for DFDPALB appears comparable to that reported for the 59 problems solved optimally by Talbot and Patterson's [1] IPALB (17.95 CPU sec for DFDPALB on a VAX 8810 vs 12.70 CPU sec for IPALB on an Amdahl 470/V8).
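The mechanics behind these comparisons can be illustrated compactly. The following is a minimal Python sketch (not the paper's FORTRAN implementation; the function name and the toy instance are invented for illustration) of a subset-state dynamic program with lower-bound fathoming against an incumbent upper bound U. A state whose completion bound cannot beat U is pruned; if every state is fathomed, the incumbent is confirmed optimal without backtracking:

```python
from math import ceil

def alb_dp_fathom(times, preds, c, U):
    """Type-1 line balancing by DP over feasible subsets, with fathoming.
    times[i]: task time; preds[i]: set of predecessors of task i;
    c: cycle time; U: incumbent upper bound on the number of stations."""
    n = len(times)
    total = sum(times)
    all_tasks = frozenset(range(n))
    if ceil(total / c) == U:           # b = U: no recursion needed
        return U

    def station_loads(done):
        """Feasible (precedence-closed) sets of tasks for one station."""
        loads = set()
        def extend(load, used):
            loads.add(load)
            for i in range(n):
                if i in done or i in load:
                    continue
                if preds[i] <= done | load and used + times[i] <= c:
                    extend(load | {i}, used + times[i])
        extend(frozenset(), 0)
        loads.discard(frozenset())
        return loads

    states, stations = {frozenset()}, 0
    while states:                      # one recursion level per station
        stations += 1
        nxt = set()
        for s in states:
            for load in station_loads(s):
                s2 = s | load
                if s2 == all_tasks:
                    return stations    # completion better than incumbent
                remaining = total - sum(times[i] for i in s2)
                # fathoming test: keep s2 only if it could still beat U
                if stations + ceil(remaining / c) < U:
                    nxt.add(s2)
        # DFDPALB would monitor the fathoming rate here and, if few
        # states were pruned, re-invoke a heuristic to try to lower U
        states = nxt
    return U                           # every state fathomed: U is optimal
```

On a four-task toy instance with c = 7, the sketch returns 3 stations whether the heuristic bound is U = 4 (a better completion is found) or U = 3 (every state is fathomed, confirming the incumbent).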
While it is clear that relaxation and fathoming techniques greatly improve the performance of dynamic programs for the ALB problem, and dynamic upper bounds overcome a major limitation of static upper bounds, the difficulties with state space have not been entirely eliminated. For example, even with an optimal upper bound both SFDPALB and DFDPALB evaluated a substantial fraction of the feasible subsets for the Sawyer [17] problem with a cycle time of 30. Investigation revealed that one relatively long (83% of cycle time) task was constrained to appear late in the sequence. All immediate predecessors and successors of this task had task times greater than 17% of cycle time, so no other task could fit at the work station where this task was assigned. Therefore, an optimal solution would have idle time of at least 17% of cycle time. Unfortunately, the lower bound to complete the process (l) could not detect this idle time until the task in question was assigned to a work station, relatively late in the recursion. Once the idle time was detected, both SFDPALB and DFDPALB quickly confirmed the optimality of the upper bound.
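The idle-time pattern just described suggests a simple pre-solution check. The sketch below is a hypothetical helper, not part of the paper's relaxation: it charges the forced idle time of any task that no station-compatible task can join within the cycle time to the workload before computing the theoretical minimum:

```python
from math import ceil

def idle_adjusted_bound(times, compat, c):
    """Strengthened theoretical-minimum station bound (sketch).
    compat[j] is assumed to hold the tasks that could share a station
    with task j (neither predecessors nor successors of j).  If nothing
    in compat[j] fits beside j within the cycle time c, the station
    holding j carries at least c - times[j] idle time, which can be
    added to the total work before dividing by c."""
    forced_idle = 0
    for j, tj in enumerate(times):
        if all(tj + times[i] > c for i in compat[j]):
            forced_idle += c - tj   # task j must occupy a station alone
    return ceil((sum(times) + forced_idle) / c)
```

For example, with c = 30 and task times (25, 10, 10, 15), the plain bound ceil(60/30) = 2 misses that the 25-unit task must stand alone; charging its 5 units of forced idle time raises the bound to 3.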
This suggests that further performance enhancements may be possible with improved lower bound procedures.

5. CONCLUSIONS
A number of distinctly different optimization algorithms have been proposed for the assembly line balancing problem during the last three decades. Of these, the integer programming approach of Talbot and Patterson [1] has yielded the most impressive results. By contrast, the conventional dynamic program for the ALB problem is burdened by excessive storage requirements and computational effort. Incorporating relaxation and fathoming techniques based on a static upper bound can dramatically improve the performance of the ALB dynamic program, provided the initial upper bound is optimal. Without an optimal incumbent solution, relaxation and fathoming merely increase the computational burden, furnishing no significant benefit. We showed this limitation can be overcome by utilizing a dynamic upper bound procedure, triggered by the relative success of the fathoming attempts. Experimental results suggest the dynamic program with fathoming and dynamic upper bounds performs at a level comparable to the best of the ALB integer programming techniques. Even better performance may be possible with improved lower bounds. The relaxation procedure occasionally failed to recognize when a substantial amount of idle time would be incurred because of certain problem characteristics. A possible extension to this research is to devise pre-solution techniques that identify such instances and use the information to arrive at "tighter" lower bounds. Finally, as an extension of the remarks of Morin and Marsten [6], relaxation and fathoming techniques used in conjunction with a dynamic upper bound procedure appear to be useful for other dynamic programs with a similar structure. The characteristics of these problems, which include ALB extensions such as the stochastic line balancing problem [4], are described in their paper and will not be repeated here.
Acknowledgements-The author wishes to thank Professors Kao and Queyranne for the FORTRAN program for DPALB and Professors Talbot and Patterson for furnishing their test problems.
REFERENCES
1. F. B. Talbot and J. H. Patterson, An integer programming algorithm with network cuts for solving the assembly line balancing problem. Mgmt Sci. 30, 85-97 (1984).
2. E. L. Lawler, Efficient implementation of dynamic programming algorithms for sequencing problems. Report BW 106/79, Stichting Mathematisch Centrum, Amsterdam (1979).
3. E. P. C. Kao and M. Queyranne, On dynamic programming methods for assembly line balancing. Opns Res. 30, 375-390 (1982).
4. R. L. Carraway, A dynamic programming approach to stochastic assembly line balancing. Mgmt Sci. 35, 459-471 (1989).
5. A. Nijenhuis and H. S. Wilf, Combinatorial Algorithms. Academic Press, New York (1975).
6. T. L. Morin and R. E. Marsten, Branch and bound strategies for dynamic programming. Opns Res. 24, 611-627 (1976).
7. E. W. Dijkstra, A note on two problems in connexion with graphs. Numer. Math. 1, 269-271 (1959).
8. M. Held, R. M. Karp and R. Shareshian, Assembly line balancing-dynamic programming with precedence constraints. Opns Res. 11, 442-459 (1963).
9. K. R. Baker and L. E. Schrage, Finding an optimal sequence by dynamic programming: an extension to precedence-related tasks. Opns Res. 26, 111-120 (1978).
10. F. B. Talbot, W. V. Gehrlein and J. H. Patterson, A comparative evaluation of heuristic line balancing techniques. Mgmt Sci. 32, 430-454 (1986).
11. F. M. Tonge, Assembly line balancing using probabilistic combinations of heuristics. Mgmt Sci. 11, 727-735 (1965).
12. A. L. Arcus, An analysis of a computer method for sequencing assembly line operations. PhD dissertation, University of California (1963).
13. F. M. Tonge, A Heuristic Program for Assembly Line Balancing. Prentice-Hall, New York (1961).
14. S. H. Zanakis and J. R. Evans, Heuristic optimization: why, when, and how to use it. Interfaces 11, 84-89 (1981).
15. T. R. Hoffman, Assembly line balancing with a precedence matrix. Mgmt Sci. 9, 551-562 (1963).
16. W. B. Helgeson and D. P. Birnie, Assembly line balancing using the ranked positional weight technique. J. Indust. Engng 12, 394-398 (1961).
17. J. F. H. Sawyer, Line Balancing. Machinery and Allied Products Institute, Washington, D.C. (1970).