Computers ind. Engng Vol. 24, No. 3, pp. 391-400, 1993. Printed in Great Britain. 0360-8352/93 $6.00 + 0.00. Pergamon Press Ltd
A KNOWLEDGE BASED JOB-SHOP SCHEDULING SYSTEM WITH CONTROLLED BACKTRACKING
ODESSEUS CHARALAMBOUS and KHALIL S. HINDI
Department of Computation, University of Manchester Institute of Science and Technology, P.O. Box 88, Manchester M60 1QD, U.K.
(Received for publication 7 January 1993)

Abstract--A knowledge based job-shop scheduling system, which employs artificial intelligence techniques to solve general job-shop problems, is presented. The system builds a schedule that falls within a pre-defined makespan and meets job due dates. The emphasis on due dates is a distinctive feature. The system consists of two main components: a knowledge base and a flexible control mechanism. The model has a three-level architecture which captures, in frames, knowledge about the job-shop domain and the characteristics of individual job-shop problems. The control mechanism is based on a specially designed backtracking strategy that limits the search while, at the same time, taking into consideration the crucial fact that early decisions have a much more drastic effect on the quality of a schedule than later ones. Experience with evaluating the presented scheduling system is reported and discussed.
1. INTRODUCTION
In intermittent production systems, the production rate achievable with existing capacity is much higher than the demand rate and machines are time-shared among different items, being switched from one product to another according to some pattern governed by the demand process. Such systems give rise to the job-shop scheduling problem. The job-shop is the set of all machines that are associated with a given set of jobs. Each job consists of a number of operations. The problem is to order the operations to be performed on each machine, subject to routing and shop constraints, such that some measurable function of the order is optimized [14]. Job-shop scheduling is very complex; not only is the problem NP-complete, but even among members of this class it belongs to the worst in practice [11].

Both the importance of the problem and its difficulty aroused interest in artificial intelligence (AI) based job-shop scheduling systems. A number of prototypes of such systems have been developed [6]. One system developed by the authors (KBSS) is distinguished by its ability to address jointly two scheduling concerns: minimizing the makespan and meeting due dates [5]. The system described here shares these same two concerns. However, work on KBSS revealed that important decisions with respect to the quality of a schedule are taken at the early stages of the scheduling activity. The closer a schedule is to its completion, the less important becomes the sequencing of the remaining operations. This is due to two reasons. On the one hand, at the early stages of schedule formation there is greater possibility for left shifts of operations (a left shift of an operation is any decrease in the time at which the operation starts that does not require an increase in the starting time of any other operation). On the other hand, at the late stages, the reduced number of remaining operations reduces the likelihood of machine contention and the creation of bottlenecks.

The knowledge based scheduling system reported on in this paper (KBSS-II) takes into account the above realization while limiting memory requirements, by adopting, as a control mechanism, a backtracking strategy specifically designed for this purpose. The system consists of two components: a knowledge base and an effective control mechanism.

2. THE MODEL
The KBSS-II model is a repository of declarative knowledge capturing the essential characteristics of a typical job shop. It also enables the user to describe a specific job-shop in terms of the primitives provided. The model is built in CRL (Carnegie Representation Language), a frame-based
knowledge representation language incorporated in the Knowledge Craft development tool [12], and consists of three levels: conceptual, internal and external.

2.1. Conceptual level

An entity-relationship formalism is adopted for the modelling of the conceptual level. Thus 'machines', 'jobs' and 'operations' are the basic entity types. A hierarchy can be established by identifying prototype machines and types of machines. Prototypes are incorporated in the model to cater for default reasoning and to avoid duplication of information.

The entity 'job' has the following seven self-explanatory attributes: 'arrival-time', 'job-interval', 'due-date', 'work-remaining', 'priority', 'scheduled-path' and 'starting-operation'. An operation is linked to a job through the relation belongs-to/consists-of. The entity 'machining-operation' defines a subclass of operations which inherit all the properties of a typical operation but also require a machine in order to be performed. The required type of machine is specified by the relation machine-type/machine-type+inv. Once an operation is scheduled, it is allocated to a machine. The same machine can handle more than one operation, but not at the same time.

Precedence relations among operations are defined through temporal intervals based on Allen's work [1-3]. The notion of a temporal interval, as opposed to a time point, is a primitive concept in the model. The entity 'time-interval' supports the attributes 'begin-time', 'duration' and 'end-time'. Any two of these attributes can be used to represent a time interval. Upon request by the user, the system, knowing the values of two of the three attributes, derives the value of the third. This is made possible by the attachment of demons to each attribute. Each operation and job is related to a time-interval specified by the values of the attributes 'opr-interval' and 'job-interval' respectively. A 'time-interval' is linked by the reference-line/referred-by relation to a 'time-line'. A time line allows for the variation of the grain of reasoning.
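As an illustration of this demon-driven derivation, the following Python fragment mimics a 'time-interval' schema whose third temporal attribute is computed on request from the other two. The class and method names are hypothetical stand-ins; the sketch is a plain-Python analogue, not the CRL/Knowledge Craft schemata used in KBSS-II.

```python
# Illustrative Python analogue of a CRL-style 'time-interval' schema.
# The real system attaches demons to the 'begin-time', 'duration' and
# 'end-time' slots; here a single derivation step plays that role.
# All names are hypothetical, not taken from KBSS-II.

class TimeInterval:
    def __init__(self, begin_time=None, duration=None, end_time=None):
        self.begin_time = begin_time
        self.duration = duration
        self.end_time = end_time

    def derive_missing(self):
        """Given any two of begin-time, duration and end-time, fill in the third."""
        known = sum(v is not None for v in (self.begin_time, self.duration, self.end_time))
        if known < 2:
            raise ValueError("at least two temporal attributes must be supplied")
        if self.end_time is None:
            self.end_time = self.begin_time + self.duration
        elif self.duration is None:
            self.duration = self.end_time - self.begin_time
        elif self.begin_time is None:
            self.begin_time = self.end_time - self.duration
        return self


# Example: an operation interval with a known start and duration.
opr_interval = TimeInterval(begin_time=12, duration=5).derive_missing()
assert opr_interval.end_time == 17
```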
The concept of operations precedence encompasses causal relations as well as temporal relations. A relation of the former type is of the form 'operation B should follow immediately after operation A', while a relation of the latter type is of the form 'operation A must be performed before operation B'. In the KBSS-II model, causality is captured by the relation 'previous-operation/next-operation', while temporal relations are captured by 'before/after'. Both are supplemented by the inherited attribute 'opr-interval'.

2.2. Internal level

The internal level represents in a machine database the conceptual schema derived from the analysis of the job-shop domain. The basic representational unit is a frame [13], known in CRL terminology as a schema [12]. The entities and their attributes, identified in the conceptual schema, were directly translated into synonymous frames and slots. The relations and constraints were codified as CRL relations, which are specialized slots defined as instances of the relation frame. Relations, being slots, inherit all the attributes of the frame slot. CRL provides a standard set of relations (e.g. is-a, instance, has-subset, subset-of, etc.) for defining classes of concepts and their instantiations. Aggregate relations may be constructed by combining two or more relations. System-defined relations have been used to express some of the taxonomies of the conceptual level. For each relation, inheritance semantics are specified.

In addition to inheritance properties, the 'scope' and 'transitivity' of a relation are defined. The former is used to specify the nature of the objects among which the relation can hold; the latter describes the nature of the links over which the relation is defined. For example, the transitivity of the subset-of relation is such that any entity which is a subset of another is also a subset of all entities of which the latter is a subset. New relations, such as allocated-to-operation/allocated-machine, machine-type/machine-type+inv, capable-operator/can-run, etc., and their inheritance semantics were also defined.

2.3. External level

The external level consists of instances of entities residing in the conceptual level and is used as a storage medium for job-shop scheduling problems specified by the user and the solutions of these problems generated by the system.
Table 1. The 6 × 6 × 6 test problem

                           Operation number
Job                      1    2    3    4    5    6
1   Machine number       3    1    2    4    6    5
    Operation time       1    3    6    7    3    6
2   Machine number       2    3    5    6    1    4
    Operation time       8    5   10   10   10    4
3   Machine number       3    4    6    1    2    5
    Operation time       5    4    8    9    1    7
4   Machine number       2    1    3    4    5    6
    Operation time       5    5    5    3    8    9
5   Machine number       3    2    5    6    1    4
    Operation time       9    3    5    4    3    1
6   Machine number       2    4    6    1    5    3
    Operation time       3    3    9   10    4    1
The user can query the external level using CRL primitives, graphics, and some macros designed to address the job-shop scheduling domain. More importantly, the user can describe a complete job-shop scheduling problem whilst the conceptual level verifies his/her actions against the rules and constraints laid down in the model.

Alternative-worlds reasoning and knowledge base version management are provided by the context mechanism. Contexts in CRL act as virtual copies of knowledge bases and are structured as trees. Each context may inherit the schemata present in its parent context. The CRL context mechanism defines different versions of the knowledge base by representing only new or changed information among the versions. This permits management of versions among multiple users and, in the case of the backtracking scheduler presented in the next section, it permits the modelling and testing of different situations using duplicate sets of schemata, each within a context.
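The following Python fragment sketches, under simplifying assumptions, how such a context mechanism can be realised: each context stores only the schemata it adds or changes, and lookups fall back to the parent context, so sibling contexts behave as cheap virtual copies of the knowledge base. It is an illustrative analogue, not the CRL implementation, and all names are hypothetical.

```python
# Illustrative sketch of a CRL-style context mechanism (not the actual CRL code).
# A context records only new or changed schemata; everything else is inherited
# from its parent, so alternative scheduling 'worlds' can be explored cheaply.

class Context:
    def __init__(self, parent=None):
        self.parent = parent          # parent context in the context tree
        self.local = {}               # schemata added or modified in this context

    def spawn(self):
        """Create a child context (a virtual copy of this knowledge base version)."""
        return Context(parent=self)

    def put(self, name, schema):
        self.local[name] = schema

    def get(self, name):
        ctx = self
        while ctx is not None:
            if name in ctx.local:
                return ctx.local[name]
            ctx = ctx.parent
        raise KeyError(name)


# Example: test an alternative placement of an operation without
# disturbing the baseline schedule.
baseline = Context()
baseline.put("opr-2-3", {"machine": "m5", "begin-time": 14})
trial = baseline.spawn()
trial.put("opr-2-3", {"machine": "m6", "begin-time": 11})   # changed only here
assert baseline.get("opr-2-3")["machine"] == "m5"
assert trial.get("opr-2-3")["machine"] == "m6"
```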
3. THE CONTROL MECHANISM
The underlying algorithm is designed to solve general job-shop problems, where operator constraints and dynamic features are not considered. However, the system can handle such features as dynamic arrival of jobs and machine breakdowns. Such perturbations are dealt with, in a deterministic manner, by partial re-scheduling from the point of interruption onwards. The system assumes an open job-shop, where no provisions for inventory are made. Technological constraints impose precedence relations among operations. The user can specify machine preferences. These constraints are used to delineate the search space and guide the search process. The objective is to meet the due dates within a pre-specified makespan Fmax.
3.1. The backtracking scheduler (BS)

BS is a scheduler based on a specially designed backtracking strategy. It accepts from the user a target makespan and attempts to find a solution which will meet the due dates within the target makespan.

In ordinary backtracking, search is focused on the lower levels of the search tree, by virtue of the fact that the overwhelming majority of the nodes of the search tree are at these lower levels. However, practical experience gained from attempting to solve a large number of job-shop problems shows that the quality of a schedule is determined, to a large extent, by early decisions during the initial stages of the formation of the schedule. Moreover, the combinatorial nature of the job-shop scheduling problem means that any practical algorithm will have to truncate the search in some way, if solutions are to be arrived at in reasonable computation time. These considerations argue for developing a backtracking strategy with twin objectives.
Table 2. Machine specifications for the 6 × 6 × 8 test problem

Machine instance    m1   m2   m3   m4   m5   m6   m7   m8
Machine type         1    2    3    1    4    5    6    6
Fig. 1. Gantt chart of the schedule of the 6 × 6 × 8 problem.
First, it should lead to considering at most a truncated search tree representing a limited number of alternative schedules, specified by the user. Secondly, it should take, at the same time, account of the significance of early placements by retaining a larger proportion of the decision nodes at the higher levels of the search tree.
Table 3. The 10 × 10 × 10 test problem

                            Operation number
Job                      1    2    3    4    5    6    7    8    9   10
1   Machine number       1    2    3    4    5    6    7    8    9   10
    Operation time      29   78    9   36   49   11   62   56   44   21
2   Machine number       1    3    5   10    4    2    7    6    8    9
    Operation time      43   90   75   11   69   28   46   46   72   30
3   Machine number       2    1    4    3    9    6    8    7   10    5
    Operation time      91   85   39   74   90   10   12   89   45   33
4   Machine number       2    3    1    5    7    9    8    4   10    6
    Operation time      81   95   71   99    9   52   85   98   22   43
5   Machine number       3    1    2    6    4    5    9    8   10    7
    Operation time      14    6   22   61   26   69   21   49   72   53
6   Machine number       3    2    6    4    9   10    1    7    5    8
    Operation time      84    2   52   95   48   72   47   65    6   25
7   Machine number       2    1    4    3    7    6   10    9    8    5
    Operation time      46   37   61   13   32   21   32   89   30   55
8   Machine number       3    1    2    6    5    7    9   10    8    4
    Operation time      31   86   46   74   32   88   19   48   36   79
9   Machine number       1    2    4    6    3   10    7    8    5    9
    Operation time      76   69   76   51   85   11   40   89   26   74
10  Machine number       2    1    3    7    9   10    6    4    5    8
    Operation time      85   13   61    7   64   76   47   52   90   45
Consider a node n that appears in both the complete enumeration tree and its truncated counterpart. Let b(n) be the number of successors of n in the former and c(n) the number of its successors in the latter. If all b(n) values were known, it would be possible to estimate the c(n) values such that they are proportionately smaller and such that the total number of the implied schedules in the truncated tree is as desired. Since the b(n) values form a non-increasing sequence from the start node at level 0 to the nodes at the last level, proportionality between the c(n) and b(n) values will ensure that the c(n) values will also be in non-increasing order. This, in addition to the fact that deleting a node in the search tree leads also to deleting all its successors, ensures that proportionately more nodes are retained at the upper levels of the truncated search tree, leading to more emphasis on early decisions.

Two questions remain. The first is how to calculate the b(n) values. The answer lies in observing that there is no need to calculate these values a priori. As the search tree is developed, the b(n) value for a node can be calculated the first time it is visited simply by counting all possible moves from it. The second question is how to estimate the value of a constant 0 < h < 1 such that c(n) = h × b(n) for all n, when it is desired to consider at most S alternative schedules. Section 3.3 explains how h is derived.

The pseudo-code below presents BS. The counter s(n) keeps track of the number of times node n has been visited. Each node n can be visited at most c(n) times. It is assumed that the c(n) values are known.
Step 1. (a) Define an empty stack called solution-path.
        (b) PUSH start-node onto solution-path.
Step 2. If s(start-node) > c(0), exit with failure; no solution found.
Step 3. n ← POP solution-path.
Step 4. If n is NULL, or s(n) > c(n), or Fmax > target, or due dates are violated, then
        LOOP
            n ← POP solution-path.
            Update node n by deleting from the corresponding schedule the operations
            corresponding to the discarded child node.
            If s(n) < c(n) then exit LOOP.
        Go to LOOP
Step 5. If all operations have been scheduled and Fmax ≤ target ...
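The original listing breaks off after Step 5, so the Python sketch below completes the search loop from the surrounding description rather than from the listing itself. It assumes a Node object that carries its own partial schedule and exposes is_complete, makespan, due_dates_met, schedule, branching_factor and expand methods (expand returning the next alternative child, chosen by the Section 3.2 rules, or None); these names are illustrative, not the authors' implementation.

```python
# Hedged sketch of the controlled-backtracking scheduler (BS).
# Assumptions: each Node holds its own partial schedule, so popping a node
# discards the operations it added; due_dates_met() checks only jobs whose
# operations are already placed; c(n) = max(1, floor(h * b(n))).

import math

def backtracking_scheduler(start_node, target_makespan, h):
    stack = [start_node]                      # the 'solution-path'
    visits = {}                               # s(n): children generated so far
    caps = {}                                 # c(n): allowed children of n

    while stack:
        node = stack[-1]

        # Complete, feasible schedule: success.
        if node.is_complete() and node.makespan() <= target_makespan \
                and node.due_dates_met():
            return node.schedule()

        # Infeasible partial schedule: discard this node and backtrack.
        if node.makespan() > target_makespan or not node.due_dates_met():
            stack.pop()
            continue

        # Lazily compute c(n) = h * b(n) the first time n is reached.
        if node not in caps:
            caps[node] = max(1, math.floor(h * node.branching_factor()))
            visits[node] = 0

        # Exhausted node: backtrack to a node with s(n) < c(n).
        if visits[node] >= caps[node]:
            stack.pop()
            continue

        # Otherwise expand: place one more operation (Section 3.2 rules).
        visits[node] += 1
        child = node.expand()
        if child is not None:
            stack.append(child)

    return None                               # no schedule within the target
```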
3.2. Job and operation priorities
The only remaining question is how to choose the operation to be scheduled next when expanding a node n. In general, when the makespan is considered, better schedules tend to be produced when jobs with heavy unprocessed workloads are selected first, by using, for example, the MWKR (most work remaining) rule [11]. However, of the various measures of performance that have been considered in research on scheduling, the measure that is most important for practical applications is the satisfaction of pre-assigned job due dates [7]. The work in [8] suggests that three factors are important in meeting due dates:

1. A function of job due date, to pace the progress of individual jobs and reduce the variance of the lateness distribution.
2. Consideration of processing time, to reduce congestion and to get jobs through the shop as quickly as possible.
3. Some foresight, to avoid selecting a job from a queue which, when the imminent operation is completed, will move on into a queue which is already congested.

On the basis of the above observations, the priority for each job was determined by the ratio of remaining work over due date, with ties resolved arbitrarily. The remaining work for a job is defined as the sum of the processing times of all its yet unscheduled operations. The processing time of an operation is independent of the machine on which it will be executed, since all alternative machines are assumed to have the same processing rate. Once a job is selected, an operation belonging to it is chosen and scheduled by using the following two-level hierarchy of IF-THEN rules.

First level
1st rule: IF there is a starting operation not already scheduled THEN select it and proceed to the second level.
2nd rule: IF the analysis of causal constraints dictates that an operation has to follow THEN select that operation and proceed to the second level.
3rd rule: IF the analysis of temporal relationships gives a set of operations eligible for scheduling THEN select the member of the set with the longest processing time and proceed to the second level.

Second level
Let d_i be the processing time of operation i selected by the first level and let s_i be its ideal earliest starting time. Also let b_t and e_t be the start point and end point, respectively, of idle interval t.
1st rule: Find the set S of idle intervals on machines of the required type such that b_t ≤ s_i ≤ e_t − d_i.
2nd rule: IF the set S is not empty THEN select the interval for which e_t − b_t − d_i is minimum, i.e. the interval providing the tightest fit; schedule the operation as early as possible and exit.
3rd rule: Select the earliest available machine.
The first level of the hierarchy is responsible for selecting the operation to be scheduled next and calculating its ideal earliest starting time. The second level then finds an appropriate machine and completes the scheduling process. In the third rule of the first level, the longest processing time criterion is used, so that operations with heavy processing requirements are scheduled as early as possible. This is in line with the general strategy of selecting jobs with heavy unprocessed workloads. The first rule of the second level selects the machines suitable for executing the operation chosen by the first level such that it occupies one of the idle intervals created so far. The second rule then seeks to produce an active schedule by performing left shifts whenever possible. The choice of the interval which gives the tightest fit is motivated by a desire to leave as much room as possible for later placements. If no suitable idle gaps are found, then the third rule schedules the selected operation as early as possible.
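A small Python sketch of this dispatching logic follows. It assumes simple dictionary and tuple records for jobs, operations and machine idle intervals; the names (remaining_work, pick_idle_interval, etc.) are illustrative rather than taken from KBSS-II, and only the job-priority ratio and the second-level interval fit are shown.

```python
# Illustrative sketch of the priority rule and the second-level interval fit.
# Data structures are hypothetical simplifications of the KBSS-II frames.

def job_priority(job):
    """Remaining work over due date; larger values are scheduled first."""
    remaining = sum(op["processing_time"] for op in job["operations"]
                    if not op["scheduled"])
    return remaining / job["due_date"]

def pick_idle_interval(idle_intervals, ideal_start, processing_time):
    """Second-level rules: among idle intervals (start, end) on machines of the
    required type, keep those in which the operation fits when started at its
    ideal earliest start, and return the one giving the tightest fit."""
    candidates = [(start, end) for (start, end) in idle_intervals
                  if start <= ideal_start <= end - processing_time]
    if not candidates:
        return None          # fall through to 'earliest available machine'
    return min(candidates, key=lambda iv: iv[1] - iv[0] - processing_time)


# Example: an operation of length 4 whose ideal earliest start is 10.
gaps = [(3, 9), (8, 20), (10, 15)]
assert pick_idle_interval(gaps, ideal_start=10, processing_time=4) == (10, 15)
```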
3.3. Determining the constant of proportionality h

Consider a job-shop problem with m jobs, each consisting of k operations. Assume further that the operations of each job are ordered sequentially, that operation pre-emption is not allowed and that each operation is performed on a different machine (i.e. the number of machines is equal to k). Under these assumptions, it is possible to estimate a constant of proportionality h between b(n), the number of successors of node n in the complete enumeration tree, and c(n), the number of its successors in the truncated search tree, such that the latter tree consists of a predetermined number of schedules S. Obviously, the assumptions made will not hold in the majority of job-shop problems. Nevertheless, the derived formulae can still be used to estimate c(n). It is also worth noting that the search will not necessarily continue until all S schedules are enumerated; instead, it will terminate whenever a schedule satisfying all requirements is produced.

Under the above assumptions, a path from the root of the tree to level l represents a partial schedule consisting of l operations. All complete schedules consist of mk operations and are represented by paths from the root node to the terminal nodes on level l = mk. Let
S_max = number of schedules in the complete tree,
b(l) = average branching factor at level l of the complete tree,
c(l) = average branching factor at level l of the truncated tree,
N(l) = number of nodes at level l.
The average branching factor for a level l of a search tree is the average number of arcs emanating from a node at level l to level l + 1.

Proposition 1: S_max = N(mk) = ∏_{l=0}^{mk−1} b(l).

Proof: From the definition of the average branching factor, N(l + 1) = b(l) N(l). Thus,

N(1) = b(0) N(0)
N(2) = b(1) N(1)
...
N(mk) = b(mk − 1) N(mk − 1).

Multiplying the above equations yields N(mk) = b(0) b(1) ... b(mk − 1) N(0), but N(0) = 1, therefore N(mk) = ∏_{l=0}^{mk−1} b(l). □

For the c(n) values to be proportionately smaller than the b(n) values, let c(n) = h × b(n), from which c(l) = h × b(l). But, in analogy with Proposition 1,

S = ∏_{l=0}^{mk−1} c(l).

Substituting c(l) by h × b(l) yields

S = ∏_{l=0}^{mk−1} h × b(l) = h^{mk} ∏_{l=0}^{mk−1} b(l) = h^{mk} S_max,

therefore h = (S/S_max)^{1/mk}.

The only unknown variable in the above formula for h is S_max, which is given by the following proposition.

Proposition 2: S_max = mk!/(k!)^m.

Proof: If there are no constraints at all, then the number of alternative schedules is equal to the number of possible permutations of mk operations, i.e. mk!. Since there are sequential constraints,
for each job only one of the k! alternative sequences of operations is valid. For all the operations of m jobs, only one of the (k!)^m sequences is valid. Therefore, the total number of alternative, valid sequences of mk operations is mk!/(k!)^m. □
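As a concrete check of the two propositions, the short Python fragment below computes S_max, h and the resulting per-node cap c(n). It is a direct transcription of the formulae above, not part of the original system; the rounding of c(n) to an integer of at least one is an added assumption.

```python
# Compute S_max = (mk)!/(k!)^m, h = (S/S_max)^(1/mk), and c(n) = h * b(n).
from math import factorial

def s_max(m, k):
    """Number of schedules in the complete enumeration tree (Proposition 2)."""
    return factorial(m * k) // factorial(k) ** m

def proportionality_constant(S, m, k):
    """h such that a truncated tree with c(n) = h * b(n) implies about S schedules."""
    return (S / s_max(m, k)) ** (1.0 / (m * k))

def truncated_successors(b_n, h):
    """c(n), rounded down but never below one successor (added assumption)."""
    return max(1, int(h * b_n))


# Example: the 6 x 6 x 6 problem with at most S = 4000 alternative schedules.
h = proportionality_constant(4000, m=6, k=6)
print(s_max(6, 6))      # about 2.7e24 valid operation sequences
print(round(h, 3))      # about 0.26: early nodes keep roughly a quarter of their children
```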
4. SYSTEM EVALUATION
Evaluating Artificial Intelligence based systems has proved to be a difficult and controversial task [10]. Unless there is a quantitative measure of performance, such as the derivation of a chemical structure from spectrograms (e.g. DENDRAL [4]) or the solution of an integral by symbolic integration (e.g. SAINT [15]), measurement of success is at best subjective. Nevertheless, in order to evaluate KBSS-II, extensive tests were carried out. The results of these tests were compared against the performance of KBSS [5]. KBSS-II performed on average better, both in terms of minimizing the makespan and meeting the due dates. The test problems used were generated by introducing variations to the problems given in [9], in order to test various aspects of the system. In most problems, the most important variation introduced was the addition of due date requirements.

The assignment of due dates is closely related to the prediction of individual job flow times. If one could accurately predict the flow time of various jobs under a certain scheduling regime one could, if due dates were under internal control, assign an allowance equal to the flow time so that the completion time of each job would be exactly equal to its due date [7]. However, since flow time depends not only on the characteristics of the individual job and the scheduling regime in use, but also on the nature and status of the other jobs coexistent in the shop during the time that a given job is present, perfect prediction is unattainable for all practical purposes. Among the common methods used for assigning due dates in industry [7] are the following:

• Total work due dates: the allowance for flow time is a fixed multiple of the sum of the processing times.
• Number of operations due dates: the allowance for flow time is proportional to the number of operations.
• Constant allowance due dates: each job receives exactly the same allowance.
• Random allowance due dates: each job is assigned an allowance at random.

For the test problems used in this work, different due dates were assigned to individual jobs using the following formula:
D_j = F + (Σ_{i ∈ I_j} p_ji)/N                (4.1)

where
p_ji = processing time of operation i belonging to job j,
I_j = set of operations belonging to job j,
N = total number of jobs,
F = best makespan from [9].
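A one-line transcription of (4.1) into Python makes the assignment explicit; the variable names are illustrative, the reading of (4.1) as F plus the job's total processing time divided by N follows the reconstruction above, and the value of F used in the example is an assumption for illustration only.

```python
# Due-date assignment of equation (4.1): D_j = F + (sum of p_ji over I_j) / N.
def due_date(job_processing_times, F, N):
    """job_processing_times: processing times of the operations of job j."""
    return F + sum(job_processing_times) / N

# Example: job 1 of the 6 x 6 x 6 problem (Table 1), assuming F = 55 h and N = 6 jobs.
print(due_date([1, 3, 6, 7, 3, 6], F=55, N=6))   # about 59.3 h
```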
The data and the results of two representative problems are now discussed. The first is a variation of the 6 × 6 × 6 problem (Table 1) cited in [9], which consists of 6 jobs, each requiring six operations, with one operation performed on each of the 6 available machines. Operation times were selected randomly and vary from 1 to 10 hours in duration. The machine order was also assigned randomly, with the added restriction that each job has one operation on each machine. The above problem was modified to create a 6 × 6 × 8 problem in order to test, amongst other features, the ability of KBSS-II to schedule alternative machines. Thus the 6 × 6 × 8 problem consists of 8 machines and 6 types of machines. If machine numbers in Table 1 are used to denote types of machines rather than instances of machines, then Table 2 shows that machine type 1 has two instances, m1 and m4, and machine type 6 also has two instances, m7 and m8. Operation processing times and sequences remain the same as in the 6 × 6 × 6 problem. The system was instructed to seek a solution with a makespan less than or equal to 60 h, by
considering at most 4000 alternative schedules. It returned a schedule of 57 h (Fig. 1, where the label x,y denotes job x, operation y), meeting all due dates, after having visited fewer than 3000 nodes. In contrast, the earlier scheduling system, KBSS, produced two schedules. The first had a makespan of 59 h, with 3 jobs tardy by an average of 2.3 h. The second schedule was better in terms of meeting due dates, but at the expense of increased makespan; only 1 job was tardy, by 1 h, but the makespan was 71 h.

The data of the second representative problem are given in Table 3. It involves 10 jobs, 10 operations per job and 10 machines, with one operation performed on each of the 10 machines. Operation times take random values between 1 and 99 h. The problem specifications included different due dates for each job, again employing (4.1). The controlled-backtracking scheduler yielded a solution with a makespan of 1107 h. In contrast, the earlier scheduling system, KBSS, produced two schedules; the first with a makespan of 1175 h and 8 tardy jobs, and the second with the number of late jobs reduced to 5, but with the makespan increased to 1427 h. It is also worth noting that the best schedule found in [9] for the same problem without due dates was 1103 h.

The specification of a target makespan is of crucial importance. If too small a value is specified, the system will spend a great deal of time before discovering that the problem is infeasible. On the other hand, if too large a value is specified, the system may produce an inferior schedule. Fortunately, it is possible to utilize historical information or insight into the given problem to specify a reasonably tight target makespan.

5. CONCLUSION
The job-shop scheduling system described (KBSS-II) proved both efficacious and effective. Much of its power is attributable to its incorporation of a flexible model independent of the control mechanism. Frames provided a natural way to represent and manipulate the conceptual schema, leading to a model that is simple to understand and maintain.

The control mechanism also proved powerful. It is based on a specially designed backtracking strategy which recognizes the importance of early decisions in schedule formation. This is carried out by implicitly searching, in a depth-first fashion, a truncated search tree comprising a pre-specified number of alternative schedules, with the truncation of the complete search tree effected in such a way that the proportion of nodes retained increases the higher the level of the tree. Similar backtracking strategies can be developed for other problems where combinatorial explosion needs to be controlled in such a way as to place more emphasis on early decisions.

Another distinguishing feature of the system described is that it addresses simultaneously two scheduling concerns, namely makespan and job due dates. The latter are rarely taken into account in job-shop scheduling systems, even though they are considered by scheduling practitioners to be the paramount performance criterion.

Acknowledgements--The authors acknowledge with thanks fruitful help from their colleague, Mr George Kalkanis. This work was supported by the ACME Directorate of the Science and Engineering Research Council of the United Kingdom, Grant No. GR/E 82255.
REFERENCES
1. J. F. Allen. An interval-based representation of temporal knowledge. Proc. 7th Int. Joint Conf. on AI, pp. 221-226, Vancouver, Canada, Aug. (1981).
2. J. F. Allen. Maintaining knowledge about temporal intervals. Comm. ACM 26, 832-843 (1983).
3. J. F. Allen. Planning using a temporal world model. Proc. 8th Int. Joint Conf. on AI (IJCAI-83), pp. 741-746. Karlsruhe, Germany (1983).
4. B. G. Buchanan, G. L. Sutherland and E. A. Feigenbaum. Heuristic DENDRAL: a program for generating explanatory hypotheses in organic chemistry. In Machine Intelligence 4 (Edited by B. Meltzer and D. Michie). Elsevier Science, Amsterdam (1969).
5. O. Charalambous and K. S. Hindi. KBSS: a knowledge based job-shop scheduling system. Prod. Plan. Cont. In press (1992).
6. O. Charalambous and K. S. Hindi. A review of AI based job shop scheduling systems. Inform. Decis. Technol. 17, 189-202 (1991).
7. R. W. Conway, W. L. Maxwell and L. W. Miller. Theory of Scheduling. Addison-Wesley, New York (1967).
8. R. W. Conway. An experimental investigation of priority assignment in a job shop. RAND Corporation Memorandum RM-3789-PR, Feb. (1964).
9. H. Fisher and G. L. Thompson. Probabilistic learning combinations of local job-shop scheduling rules. In Industrial Scheduling (Edited by J. F. Muth and G. L. Thompson). Prentice-Hall, NJ (1963).
10. M. S. Fox. Constraint Directed Search: A Case Study of Job-shop Scheduling. Morgan Kaufmann, Hove, England (1987).
11. A. C. Hax and D. Candea. Production and Inventory Management. Prentice-Hall, NJ (1984).
12. Knowledge Craft, Vols 1-3. Carnegie Group Inc., 5 PPG Place, Pittsburgh, PA 15222, U.S.A. (1988).
13. M. Minsky. A framework for representing knowledge. In The Psychology of Computer Vision (Edited by P. Winston). McGraw-Hill, New York (1975).
14. M. S. Salvador. Scheduling and sequencing. In Handbook of Operations Research: Models and Applications (Edited by J. J. Moder and S. E. Elmaghraby). Van Nostrand Reinhold, New York (1978).
15. J. R. Slagle. A heuristic program that solves symbolic integration problems in freshman calculus. J. Assoc. Comput. Mach. 10, 507-520 (1963).