Engineering Applications of Artificial Intelligence 14 (2001) 269–286
Paradoxes in planning

Wout van Wezel*, René J. Jorna

Faculty of Management and Organization, University of Groningen, P.O. Box 800, 9700 AV Groningen, Netherlands

Received 5 May 2000; received in revised form 10 November 2000; accepted 10 December 2000
Abstract

The word 'planner' has two distinct meanings in the literature. The first meaning refers to the profession of a human. Organizational processes must be planned so people know what work they are going to do. An example is a production planner in a factory. The second meaning of planner refers to an actor who must think about his actions before performing them, for example when making a shopping list. Both connotations of planner have their own research fields. The profession of the human planner is mainly analyzed in the context of computer support. Examples of research areas are knowledge acquisition, task modeling, decision support, constraint modeling and solving, and operations research. The second meaning of planning is dealt with, for example, by psychology, semiotics, Artificial Intelligence, and robotics. Apparently, both planning worlds have their own research methodologies, languages, ontologies, and models. In this article, we describe the main schools of both fields and the apparent paradoxes that result from their integration. © 2001 Elsevier Science Ltd. All rights reserved.

Keywords: Planning; Planning support; Artificial Intelligence; Cognitive science
1. Introduction: two types of planning

1.1. Introduction

Consider the class of all classes that do not belong to themselves. Does it belong to itself? This question of Bertrand Russell caused a crisis in mathematics at the beginning of the 20th century (Flew, 1979). Now consider somebody who must make a production plan for an organization. There are several activities that he must perform, such as information collection, capacity determination, sequencing, etc. Of course, the order of those activities must be determined. In other words, the activities to make the plan must be planned themselves. Is this latter type of planning the same as the former type? The answer 'yes' leads to endless recursion, because planning would be a part of itself. We would be forced to plan the planning activities, and therefore to plan the planning of the planning activities, and to plan the planning of the planning of the planning activities. This 'recursive planning paradox' is used to
assume that planning your own activities is something other than planning the activities that others must perform. This assumption, however, raises the question as to how the two types of planning relate.

In the past decades, planning has been subject to research in several scientific areas. Planning always concerns the determination of the future state of something, and in most research, planning is about the determination of a future course of actions to accomplish something. But at a deeper level there are many differences between the kinds of planning and, as a consequence, in the research on them. Not only do the entities whose future state is determined differ (for example, machines, orders, the planner himself, employees, etc.), but also the way the plan is made, and how computers are put to use in this process. In this article, we consider the application of AI-planning techniques for computer support of planners in companies.

In short, we will investigate the following line of reasoning. Humans make plans when they behave, e.g., walk, play chess, drive a car, write an article, read an article, or go out shopping. The way in which they make such a plan is researched in cognitive science. Research in cognitive science has yielded models that describe and simulate how humans make plans and use plans during action. Such models are developed further to become
mathematically rigorous, so that they can be used to drive the functioning of artificial agents such as robots. These advanced models often have little relation to the actual cognitive processes of human planners, but still use cognitive functions, such as searching through a problem space and temporal reasoning capabilities, as a paradigm. These models and techniques are known as Artificial Intelligence (AI) planning techniques. Examples of techniques that implement or descend from the human cognitive architecture are STRIPS (Fikes and Nilsson, 1971), O-Plan (Currie and Tate, 1991), constraint-based planning (Fox, 1983), and case-based planning (Hammond, 1989). All these techniques have many derivatives and extensions to make them computationally efficient or broadly applicable. The models are not only used to simulate cognitive processes or drive the behavior of autonomous agents, but also to tackle planning problems in organizations. It is this last kind of application that we question. Examples are applications of AI-planning techniques in routing and transportation problems (Allen et al., 1996; Pollack, 1996), combat force planning (Myers, 1996), and manufacturing problems (Wilkins, 1990). We will discuss whether or to what extent the transition from cognitive models of human planning to models of organizational planning is valid. Before we outline the article, two examples of different kinds of planning will be described as illustration.

1.2. Production planning

Many production companies employ one or more full-time planners. The task of production planners is to determine what product must be produced by what machine at what time. This task encompasses a lot of activities that must be done, e.g., collect the necessary information, weigh the interests of the sales department against those of the production department, make an initial schedule proposal, meet and discuss with other organizational members, adjust a plan if something goes wrong during production, etc.

There is continuous pressure on production organizations to accept smaller orders with shorter lead-times. There are several reasons why this makes the job of planners difficult. First, there is the simple fact that the planners must plan more orders in the same amount of time. Second, sequence-dependent setup times, sequence-dependent cleaning times, and startup losses make the production of large batches preferable to the production of small batches. This contradicts the wish of the market for small order sizes. This would not necessarily be a problem if orders were known in advance: orders that are alike could then be grouped, so that set-up losses are diminished. But, since customers also want shorter lead times, there is an increasing contradiction between the wishes from the market and the
wishes from the production department. Third, the organization of the planning is usually not geared towards adaptation of the plan. If lead times of orders are shorter than the planning horizon, however, then orders must be incorporated into the existing schedule. But, due to the sequential nature of the planning organization as it appears almost uniformly in industry (first determine capacity, then determine product families, then assign orders, etc.), it proves difficult to adjust an existing plan. Analyses in practice indicate that planners spend much time solving problems that occur unexpectedly, such as rush orders, stock-outs of raw material, machine breakdowns, etc. (Smith, 1995; Van Dam, 1995).

Unfortunately, there is little literature or theory about the planning task in organizations. Therefore, it is difficult to pinpoint the cause of the planners' discontent, to attribute the causes of poor factory performance to the planning, or to analyze and design planning practice. After all, the cause can be the mere impossibility of matching the requirements (e.g., there is not enough capacity available to meet the demands), the clumsiness of the organization of the planning, the inadequacy of the human planner in solving complex problems, the absence of specialized planning support in practice, or a combination of these factors. Without a theory to explain the relation between planning complexity, planning organization, task performance, and planning support, this question is not a trivial one to answer.

1.3. Planning of someone doing errands

In the past, many experiments have been done within cognitive psychology on people doing errands or going out shopping (Hayes-Roth and Hayes-Roth, 1979). As can be imagined, people do not use computer support in doing errands. The main interest of cognitive psychologists is in the cognitive processes and strategies people use in the completion of this (planning) task. The task itself is easy to conceive, but there are many variations. Besides the Hayes-Roth and Hayes-Roth examples, we have the following in mind. Suppose one has to prepare a meal for which the ingredients have to be bought in various shops. We are not talking about one supermarket in which all ingredients can be bought. There are no restrictions or constraints on ending time, and the shops can be visited in random order. Neither is the order in which products of different weights are bought relevant; it is conceivable that one ends up buying the heavier products at the beginning and the lighter ones at the end. This situation is the most frequent one in daily practice.

Hayes-Roth and Hayes-Roth (1979, pp. 277–281) give nice examples of protocols of people doing errands tasks. They did not talk about preparing a dinner, but
about doing little daily tasks. Subjects worked with a task with the following description: "You have just finished working out at the health club. It is 11:00 and you can plan the rest of the day as you like. However, you must pick up your car from Maple Street parking garage by 5:30 and then head home. But you also want to see a movie." Then a list of errands follows, including: buy a toy for your dog at the pet store, buy fresh vegetables at the grocery, etc. We give just two protocols to illustrate people's thinking:

"Oh, real bad. Don't want to buy the groceries now because groceries rot. You're going to be taking them with you all the day. Going to have to put the groceries way towards the end." (protocol 16).

"The plan seems to have worked well enough up until then. We made better time than we had thought. That happens in life sometimes. How did I get here so fast." (protocol 44).

The few hundred protocol statements make clear that an errands task requires much cognitive processing. The example describes a very simple situation; it is easy to construct more complex errands tasks. They may seem artificial, but they are real life. In the Hayes-Roth and Hayes-Roth example a constraint is put on the (ending) time, that is to say, all the errands are required to be done within a couple of hours. One has to shorten the walking distances in one way or another, or estimate the possible waiting times in the shops. The outcome of this deliberation determines the order of the shops. The situation can be restricted in more detail by stating that some shops close at a certain time and therefore have to be visited at the beginning and not at the end.

In relation to doing errands for the preparation of a dinner, several complicating factors can be included. Since a dinner consists of a starter, a main course and a dessert, some tuning is normally done. This means that if some ingredients for the preparation of the starter are not available, the composition of the main course has to be changed, and therefore some order has to be imposed on the shopping list. If a certain kind of fish used in the starter is not available and the main course consists of a certain kind of meat, the ingredients of the main course have to be rearranged. This means that the order of the shops to be visited is determined by the choices of the various courses in the dinner. Various other variants can be thought of, such as combining the constraints of time and ingredients, or dividing the list between two persons.

In practice, situations are not as pure as we stated. Doing errands often means not only getting ingredients for a dinner, but also taking clothes to the dry cleaner's or buying some cigarettes. Although in real life there are always some constraints in doing errands, people mostly experience no real time pressure, only self-imposed pressures. Because doing errands is something most people do at least once a week, it gives a very good
impression of what kinds of strategies and representations people use and what cognitive limitations they experience.

The big difference between this kind of planning and planning and scheduling in organizations is that in the latter situation the constraints are not self-imposed and the roles of planner and performer do not coincide. The big resemblance is that doing errands on the one hand, and planning and scheduling in organizations on the other, require similar cognitive effort. The question, of course, is whether we can learn something about planners in organizations taking into account what we know from the cognitive perspective.

1.4. Scope of the paper

For this research we state the following hypothesis: planning problems that a person deals with before or during a task (such as doing errands, playing chess, or hammering a nail into the wall) differ from planning problems that a professional human planner must solve in an organizational setting (such as production planning, nurse scheduling, or lorry routing problems). We will report our thoughts about this hypothesis in the following order. Section 2 provides a short description of our approach to planning and planning support. This approach is not meant to be conclusive or decisive, but it explains the way in which we discuss some of the elements in the article. In Section 3, we first delve into issues that relate to planning in organizations, after which planning as a cognitive activity is discussed. Section 4 analyzes the differences in aspects that seem relevant in the light of our hypothesis, and Section 5 provides the conclusions.
2. Perspectives on planning and scheduling

Planning and scheduling are subjects of interest in various scientific areas, and hence it is difficult to find or formulate unequivocal demarcations of the problem domain (Jorna et al., 1996). This section briefly discusses some scientific areas that engage in planning and scheduling. We will end with our definition of planning.

From a psychological point of view, Miller et al. (1960) defined a plan as any hierarchical process within an organism that controls a series of operations. Card et al. (1983) extended this definition by stating that planning is a mental activity besides problem solving and perceptual and locomotive skills. People establish beforehand an order or arrangement for how to solve problems. The degree of planning depends on the complexity of the problem and the kind of task. Mental arithmetic hardly requires planning, whereas, for example, a game of chess requires much planning. Hoc (1989) takes a more abstract psychological view and says
that planning means anticipation and schematization. This activity always involves both a bottom-up and a top-down process.

Planning in management theory deals with the allocation of resources, for example money to investment projects, products to machines, or staff members to shifts. Management theories often divide planning into several layers in order to cope with the complexity of planning. Those layers can be distinguished by time horizon, time precision, scarcity of objects, and geographical precision. Anthony (1965), for example, distinguishes strategic planning, tactical control, and operational control. Strategic planning has a long time horizon (more than 5 years) with abstract decisions, tactical control deals with medium-term decisions (1–5 years), and operational control is about short-term concrete decisions. Production management planning and control frameworks not only have separate planning outcomes for different time horizons, but also for different entities such as machines, products, operators, and stock. Most other planning types, for example employee scheduling and route planning, have no theoretical foundations with regard to the organization of the planning itself. Planning in this field is often combined with mathematical modeling in Operations Research (OR) and, more recently, in Artificial Intelligence (AI) (Russell and Norvig, 1995; Zweben and Fox, 1994). Efforts to combine scheduling support systems with the user's perception of the problem have already resulted in various mixed-initiative scheduling approaches (Prietula et al., 1994; Veloso, 1996; Smith and Lassila, 1994; Smith et al., 1996).

All approaches to planning and scheduling somehow deal with arranging scarce resources. Examples of resources are cars, lorries, employees, products, raw materials or machines. More precisely, scheduling always concerns multiple tokens of different object types. Production schedules, for example, arrange products, machines, and time intervals, whereas staff scheduling deals with staff members and shifts. In this article we use the following definition to denote planning/scheduling problems:

Planning or scheduling is attuning different kinds of entities (object tokens) to one another, taking into account different kinds of constraints and working towards minimizing or maximizing various goal functions. (Jorna et al., 1996)

Attuning does not mean harmonizing or optimally aligning. It basically means a kind of matching of entities and activities. It is about relating and accommodating instances of several object types. In the practical sense this means looking at the quantitative and qualitative values of attributes or combinations of attributes. Object types can be, for example, products,
lorries, machines, staff members or time periods. Instances are the concrete entities of these types; for staff members these can be Tom, Mary and Luigi. Constraints are the conditions under which certain combinations are possible. If Tom is not qualified as a nurse, Mary is, and the scheduler needs two qualified nurses in the night shift, then the combination of Tom and Mary is impossible. Of course, such a constraint may be violated, but usually only after a careful search for other acceptable solutions. Goals or goal functions are about the minimization or maximization of certain preferences. For example, it may be an organizational goal to minimize the hours of overtime. A planner should strive to accomplish this to the best of his knowledge. Planning often involves a change from constraint into goal function. Constraints are often contradictory, meaning that prioritizations have to be adjusted. The interpretation of the value, priority or adjustment of constraints and goal functions is done by planners during the planning process.
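To make this definition concrete, the following minimal sketch expresses the nurse-scheduling example above in terms of object tokens, a constraint, and a goal function. It is our own illustration, not a model from the article: the Nurse and Shift types, the qualification rule, and the overtime measure are hypothetical assumptions.

```python
# A minimal sketch of the definition above: attuning object tokens
# (nurses, shifts) under constraints while minimizing a goal function.
# The types, the qualification rule, and the overtime measure are
# hypothetical illustrations, not taken from the article.
from dataclasses import dataclass

@dataclass
class Nurse:                       # object type; instances are tokens
    name: str
    qualified: bool

@dataclass
class Shift:
    label: str
    required_qualified: int        # e.g., two qualified nurses at night

def violates(shift: Shift, crew: list[Nurse]) -> bool:
    """Constraint: the shift needs enough qualified nurses."""
    return sum(n.qualified for n in crew) < shift.required_qualified

def overtime(assignment: dict[str, list[Nurse]]) -> int:
    """Goal function to minimize: crew members beyond a nominal two
    per shift count as (hypothetical) overtime."""
    return sum(max(0, len(crew) - 2) for crew in assignment.values())

tom = Nurse("Tom", qualified=False)
mary = Nurse("Mary", qualified=True)
night = Shift("night", required_qualified=2)
print(violates(night, [tom, mary]))   # True: Tom and Mary won't do
```

In these terms, planning is the search for an assignment of tokens to shifts that satisfies the constraints while driving the goal function down; relaxing violates() into a penalty term would model the constraint-to-goal-function shift mentioned above.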
3. Planning and entities

3.1. Introduction

The two examples of planning in Section 1 show that there are different kinds of planning. Clearly, they are different, but there are also common characteristics. Both kinds of planning deal with the determination of the behavior of entities in the future. We will continue by describing, for both types of planning, the entities whose future behavior is determined (the domain), the way in which the process of determination takes place (the task), and how the planner deals with the actual behavior of the entities that were planned (the plan execution).
3.2. Planning of organizations by organizational members

Planning is a phenomenon that occurs at multiple places in an organization. In its most abstract sense, all activities that involve the determination of the future of the organization deal with planning. This includes strategic considerations that determine 'where the organization must stand' in 10 years, less abstract issues such as growth targets or product innovations, but also very concrete decisions such as who will work at what time next week, or the exact production times and machine allocations of the production for the following week. It is the type of planning about concrete entities that we primarily focus upon.
3.2.1. The domain

Planning in organizations is about the coordination of activities of organizational members and the allocation of resources. The types of activities and resources vary widely over organizations. Still, there is a widely used classification of types of organizational planning: production planning, staff planning, and transportation planning. A common ground for planning problems in organizations is that they basically concern the coordination of supply and demand, whereby (a) the supply consists of scarce capacity and (b) the way in which this capacity is put to use can make a difference with respect to the goals of the organization (e.g., producing at low cost at a production facility, having enough phone operators at a call center, or taking care that all employees work the same number of night shifts). The way in which the coordination takes place (in other words, the planning process) determines to a large extent the plan that is eventually executed. Therefore, we will discuss the various aspects of the planning process.

3.2.2. The planning process

There are three ways to look at the process of plan creation. First, the planning has to be embedded in the rest of the organization. A plan can be considered as an agreement that regulates the actions of organizational members. Therefore, the place where the planning is made has considerable impact on the functioning of the organization. It must be determined who is responsible for the planning department (for example, the production manager, the sales manager, or the head nurse), and what authorities and responsibilities the planning department has with respect to, e.g., the determination of constraints and the execution of the plan. Second, the planning process itself must be organized, for example the task division between planners. Third, an individual must make (his part of) the planning. The three approaches will be discussed in turn.

The place of the planning in the organization depends foremost on the entities that are subject to the planning. If the planned entities require continuous attention, then one or more full-time planners are needed. An example is a production department in a dynamic environment where rush orders and machine breakdowns occur. If, however, the planning only takes place once in a while and the plan is stable, e.g., as is the case with nurse scheduling, then the planning task is not large enough to make it a full-time job.

The responsibility for the planning can take one of three forms. First, the function that is responsible for one of the entities that are planned is also responsible for the planning. For example, a production manager can be responsible for both the production processes and the planning of these production processes. Second, if the
entities that are planned represent opposing interests, then these interests can be expressed in the way the planning is embedded in the organization. For example, the production planning can be a combined responsibility of the production manager and the sales manager. Third, the planning can be a separate function that is not controlled by the owners of the entities that are planned.

There are frameworks for production planning that divide the planning problem into several sub-problems. An example of an MRP-II framework is depicted in Fig. 1. There are two kinds of division criteria to split up the planning problem into smaller parts. First, a plan can be split into sub-plans; for example, the master production schedule consists of a capacity plan and a production plan. Second, a plan can be split into hierarchical levels; for example, the material requirements planning plans the same entities as the master production schedule but adds detail.

The organization of the planning itself is important if there are multiple planners who must make the plan together. Their task division can have a great impact on the quality of the planning. For example, it is good practice to let changes to the plan be made by the planner who originally made it. There is, however, no theory available about this topic.

Ultimately, planning is a task that must be done by someone. Someone must perform all kinds of activities, called sub-tasks, to make the plan. Examples of such sub-tasks are collecting information, counting, organizing information, evaluating, and assigning. Unfortunately, descriptions of the task performance of human planners in organizations are sparse, and a sound theory with which the planner's task performance
Fig. 1. MRP-II framework.
can be explained or designed is missing altogether. Still, we will describe some research that has focused on the task of human planners. In order to make generic statements about the planning task, it is important to know what the task performance depends upon. McKay et al. (1995) discuss a number of measurable attributes that can say something about the quality of a human planner in situations of instability:

– "Accuracy – the ability to predict [...] can be measured, and as the scheduler's ability increases, the horizon should extend;
– Span – how far into the future a scheduler can accurately predict is related to planning and this can be measured – as the scheduler's ability increases, the horizon should extend;
– Decision timing – it is possible to measure the timing of short, mid, and long term decisions when they occur during the decision process. It seems that most schedulers have multiple decision modes that work with varying degrees of aggregation and time horizons (e.g. similar to hierarchical production planning levels). Based on our preliminary observations, it appears that good schedulers do not artificially restrict the different levels to specific times of the day, week, or month, while novice schedulers use regimented boundaries;
– Decision domain – the ability of the scheduler to identify and suggest changes to the processes, procedures, and capacity of the plant can be captured. Coupled with prediction tracking, this last trait can illustrate how much the scheduler understands about the situation and contributes to manufacturing improvement. An experienced scheduler should be expected to participate in the complete system and put their knowledge and skill to good use."

Although these points were not meant to be discussed exhaustively, they provide some directions as to what constitutes the individual characteristics and expertise of planners.

Mietus (1994) analyzed thinking-aloud protocols of three groups of nurse schedulers: experts, novices with practical training, and novices without practical training. She concludes that, due to their lack of domain knowledge, novices without practical training are unable "to perform problem solving adequately" (op. cit., p. 85). According to Mietus, novices look at the schedule as the thing to be solved rather than at what the schedule represents. The goals that the novices tried to achieve were clearly linkable to a domain entity (e.g., honoring a wish of a nurse), whereas experts also used goals that were based on relations between entities in the plan (e.g., continuity in the schedule). Mietus found that
the novices use an opportunistic planning approach, whereas experts appear to use scripts that determine a more or less fixed sequence of actions.

An often-used way to describe how planners make decisions is with heuristics or rules of thumb. Such rules describe what action is taken if certain conditions are met. McKay et al. (1995), for example, studied extensively the decision behavior of a planner at a printed circuit board factory. They found 128 policies and heuristics that the planner used to perform his task. These were separated into routine heuristics for standard situations (e.g., determining the batch size and prioritizing orders) and exceptions (e.g., how to deal with products that have not been made for a long time). The latter category comprised more than a hundred rules. Heuristics are often used to create computer models of decision behavior. Although heuristics alone are not enough for complete task support, they provide a very detailed description of the decision behavior of human planners.

Some authors describe the task at a somewhat more abstract level with flowcharts. Such diagrams often consist of (a) sub-tasks and (b) a strategy that denotes the order in which the sub-tasks are carried out. In this sense, sub-tasks depict heuristics that decompose a task into independently solvable sub-problems. For example, the heuristic 'first schedule the products of category A, then the products of category B' is used in a task model to denote two sub-tasks. A sample task model in Fig. 2 depicts the sub-tasks of nurse scheduling (Mietus, 1994). With this analysis Mietus was able to make a generic model of the nurse-scheduling task. Other task models of planning are provided by Dorn (1993), Sundin (1994), and Wiers (1997).

Fig. 2. Division of the nurse-scheduling task (Mietus, 1994, p. 93).

An interesting aspect is that task models usually deal with the generation of a plan, while in practice planners often spend much time on tasks other than plan generation. Sanderson (1989, p. 661) notes about this that "in industrial environments, schedulers generally perform their scheduling activities in the context of a wide variety of other responsibilities". Several authors report how much time is invested in sub-tasks of planning. Mietus (1994) indicates that clerical work (e.g., data collection) often accounts for 40–80% of the planner's time; Verbraeck (1991) even reports that (for his case) the actual planning task only accounted for 3% of the planner's time. Bakker (1995) surveyed several planners in different situations. In her study she asked planners in 50 production, transportation, and staff planning situations to indicate the amount of time spent on various sub-tasks. This was done by presenting the planners with a list of fixed sub-tasks and asking them for percentages and priorities. The tasks were all performed manually at the time of investigation. Table 1 contains her data about the time needed for the several sub-tasks. The figures in Table 1 indicate that adequate support of task components other than creating an initial plan (plan generation) is important if one wants to create task support.

Table 1
Average percentage of time investment for sub-tasks (Bakker, 1994, p. 19)

Sub-task                   Starting from    Using a previous
                           scratch (%)      schedule as a basis (%)
Clerical work              21.2             42.8
Create initial schedule    55.9             36.1
Adjusting schedule         18.5             38.6
Counting                   10.4             10.0

The fact that several sub-tasks can be distinguished in the planning task means that it must somehow be determined in what order the sub-tasks are performed. In other words, a planner must plan his own task. This is a potential source of confusion, since planning of the planning is the kind of planning that we will discuss in the next section. First, however, we proceed by discussing computer support of organizational planning and the execution of plans.

Computer support of plan creation: the imitate/replace debate. Most research that deals with day-to-day planning deals mainly with automating the quest for good schedules. Automated schedule generation, however, often leaves little room for human interaction and control in the search process. There seems to be a continuous quest for a good balance between machine and human control in scheduling systems. Advocates of
analytical models argue that humans do a poor job at planning. Advocates of a more human-centered perspective, however, state that analytical models cannot deal with the uncertainty and instability of the real world (McKay et al., 1988; Sanderson, 1989). The latter state that the lack of application of scheduling systems in practice is due to the black-box nature of such systems. Although there are many systems available on the market, especially for production planning, the support is not always oriented towards inspecting intermediate outcomes, or towards adjusting and manipulating constraints in such a way that initial planning states are adjusted to the outcomes.

Generation techniques can be categorized into three main philosophies of schedule generation. The first two categories focus on mere generation. The first category of techniques only looks at characteristics of the problem. The second category only looks at the way that humans solve the problem. Techniques in the third category base themselves on support requirements that stem from decision support theory; they try to find a balance between the efficiency that can be reached by using the computer and the fact that the human planner must understand the solution.

First, there are approaches that focus mainly on the domain without analyzing the way in which the problems are solved by the human planner. In such approaches, characteristics of domain entities and their relations are analyzed by a modeling expert (for example, capacity of machines, shift requirements, historical data of working hours, etc.) and an algorithm
is formulated to efficiently find schedules that do not violate constraints. There are several approaches to generating a schedule. Operations Research techniques are available for all domains of plan generation, such as job shop, flow shop, routing, etc. The so-called intelligent scheduling or knowledge-based scheduling approaches from Artificial Intelligence are assumed to have more expressive power than Operations Research techniques with respect to constraints, expertise, manipulation, and adaptability. Examples of knowledge-based schedule generation are constraint-based search, discrete event simulation of heuristics, activity-based scheduling, and fuzzy scheduling (Smith, 1992). These techniques use human intelligence as a metaphor to create computational techniques. Thus, they are not designed with the aim of supporting an intelligent human actor who performs a task; they use human intelligence as a starting point to formulate techniques that are optimized for performance. This optimization step can result in the loss of resemblance between human reasoning and the model in the technique.

Second, approaches can focus on imitating the human problem solving processes in so-called rule bases or expert systems. This is also called the transfer view (Schreiber et al., 1993), because the knowledge is extracted from a human and transferred into a computer program. For this approach, the problem solving approach of the human scheduler must be analyzed. In terms of the human problem solver (Newell and Simon, 1972), this means that the problem space and admissible operators must be traced and implemented. As with the domain-oriented approach, the distribution of tasks between the computer and the user leans mainly towards the computer, but the available computational capacity is not fully used, since the computer is used as a symbolic processor. It is, however, understandable for the human planner why a generated plan looks the way it does, because he would have processed the symbols in more or less the same way. This approach can be used if a planner is satisfied with a reduction of effort without much improvement of the solution. The main disadvantage of this approach is that the system not only inherits the capacity for abstract reasoning that is so typical of humans, but also the myopic fire-fighting tactics that human schedulers practice (Smith, 1992).

Domain-oriented techniques and expert system approaches both focus on computerized schedule generation. The mixed-initiative support approach focuses on improvement of the solution by establishing a coalition between the computer and the user. Here, it is not the domain or the problem solving process that is the main focal point, but the task of the human planner. This implements the common DSS view that both humans and computers should do the tasks they are best at. This is called knowledge-based decision support (KB-DSS). The focus in a KB-DSS is on the
level at which the system and the user communicate. Since such a system will change the current problem space of the human planner, it is not straightforward which decisions should be taken by the user, which decisions should be taken by the computer, and for which decisions the user and the computer should cooperate. Hofstede (1992) gives some prerequisites with which generation techniques must comply if they are to be used in interactive support at the knowledge level. First, the user must be able to interact during operation. Second, the problem representation must consist of objects that are meaningful for the planner, and it must be possible to show the progress of the heuristic to the user. Third, the operators or transitions in the heuristic must refer to actions in the real world. Fourth, the control mechanism must allow the user to alter the current state during execution of the heuristic, and must provide a way for the user to make a trade-off between the quality of the solution and the time spent in generating it. The research of Mietus (1994) shows the distinction between sub-tasks that can be performed by the computer (such as counting) and sub-tasks that need human judgment (such as scheduling a shift). For the latter, the level at which the human planner must be able to control the process is of importance. Below that level, the computer can apply whatever algorithms are most efficient. Above that level, however, intermediate results must be communicated to the human planner, because he/she must be able to interfere in the search process.

We outlined three categories of generative scheduling support. We cannot unequivocally say which approach is best. Much depends on characteristics of the problem domain, the frequency of planning decisions, time pressure on planners, the stability of the environment, etc. There has been experimental research to determine the performance of humans versus computers with respect to the scheduling task. Sanderson (1989, p. 661) states that "many studies have compared human scheduling performance with that of simple priority rules. This usually makes human superiority easy to demonstrate". Of course, schedule generation can be more sophisticated than the use of simple priority rules, but there are more authors who report the superiority of human planners. Experiments of Schartner and Pruett (1991) show that (in their setting) an approach in which the human scheduler makes the decisions yields better results than approaches where the computer makes some or all of the decisions.

3.2.3. Plan execution

The planning in organizations is usually decoupled from the execution of the plan. There are two main reasons why the planner is someone other than the one who executes the plan. First, planning is a difficult job that requires expertise and experience. This is the
organizational concept of task division. Second, a planner must be able to weigh the interests of multiple parties. Therefore, he must have knowledge about things that extend beyond the limits of the individual tasks that are planned.

The separation of planning and execution can cause several problems. First, the planner can have a different view of the entities that he plans than the people who must execute the plan. For example, although the real world is full of uncertainties, planners must make assumptions without the detailed knowledge that, for example, human operators have about the machines they operate. Second, adjustment of the plan after unexpected events is troublesome at the least. This is because events must be communicated, and subsequent changes to the plan must be weighed and communicated back to the ones who execute it.

3.3. Planning of the individual by the individual

In the case of doing errands, but also in the organizational planning situations described in the last section, humans as information processing systems are supposed to do the job. In the next sub-section, we will discuss the aspects of this information processing, that is to say cognition, in doing the planning task in greater detail.

3.3.1. Aspects of cognition

It is obvious that a planner in industry uses his cognitive system to solve the problems of the planning process in the factory. In terms of analysis and support, the big issue is whether what the planner does or has to do in the factory situation differs from what he has to do in everyday-life planning situations, for example in doing errands. What are the differences with planning in industry, especially in terms of the task and of cognitive strategies? It is easy to see that the problems in the factory are more complex and more dynamic and that many lines of coordination are important, but are we for that reason dealing with two different kinds of planning? An affirmative answer to this open question would make it rather inappropriate to transfer the knowledge of cognition we have from the errands or shopping-list task to (support of) planning in the factory situation. To make this clearer, we briefly discuss what is known from the cognitive analysis of planning tasks.

We also have to discuss what kind of conceptual framework is appropriate for the study of cognition. The point is that cognition can be studied from a neurophysiological stance, but also from a cognitive functional stance. Whatever the preferable stance is, it should be kept in mind that behind these issues is the assumption that if one is talking about the support of planning, one has to deal in one way or another with the
human cognitive system that needs to be supported in an adequate way.

Before we look at the cognitive functions of the brain, we first look at physiological aspects that relate to planning. Das et al. (1996) provide an extensive overview of the processes of the human brain that relate to planning. Das et al. distinguish three functional units of cognitive processes: (1) arousal and attention, (2) information reception, coding, and storage, and (3) planning, self-monitoring, and structuring of cognitive activities. These units are sequential: information can only be received after arousal-attention, and only then can it be synthesized and acted upon in the third unit. However, the units are tightly coupled, since processes in the third unit direct resources for attention. The third functional unit is the one that gives humans their intelligence. It is located in the prefrontal areas of the frontal lobes. This area of the brain is intimately connected with the motor cortex, which indicates that planning can directly control actions. Anticipation and temporal structures of behavior are located in this unit, so this is the primary part of the brain for goal-directed behavior of humans. These considerations about neuropsychological aspects of planning are used by researchers in cognitive psychology to build models that can describe and explain human behavior. The neurological structures specify the material grounding of the cognitive functional components.

Before going to the cognitive functional perspective on planning, we will discuss some starting points not well known outside cognitive science. An individual is considered to be an information processing system. In this system, also called a cognitive system, an architecture, a set of representations, and various manipulations on these representations are discerned. The architecture is functional in nature and consists of memory components, perceptual and motor systems, and various kinds of central processors (Posner, 1989). The relation with the neurophysiological components above is evident. The architecture leads to all kinds of constraints. It is, for example, impossible for a planner to hold more than 10 meaningless items in his short-term memory. An architecture without content is empty. In cognitive science the representations (or symbol structures) form the content. They are the basic constituents or elements of our thoughts. They concern images, propositions, semantic nets, scripts and other kinds of mental structures. In the case of doing errands this means that people build up a mental model of the shops they have to visit. Operations on these representations are called activities of symbol manipulation, such as creating, deleting or reordering parts of the representations. A planner might, for example, decide to work through (representations of) constraints in a hierarchical order. During the process he switches to a cognitive strategy
of first solving the constraint that is most significant. In this case the manipulations together form a method or a strategy. The discussion of manipulations can range from low-level operations, such as small calculation tasks, to high-level operations that can be called strategies.

Hayes-Roth and Hayes-Roth (1979) started a discussion about the sequential nature of human planning processes at the cognitive level. They made a distinction between hierarchical and opportunistic planning. Hierarchical planning means that there is a nested number of goal and sub-goal structures, or a hierarchy of representations of a plan. The highest level in the hierarchy may be a simplification or an abstraction, whereas the lowest level is a concrete sequence of actions to solve (a part of) the planning problem. One solves a planning problem by starting at the highest level and then continuing to realize sub-goals until one reaches the final solution. Hayes-Roth and Hayes-Roth relate this to a distinction between the overview aspect and the action aspect of plans, which they call plan-formation and plan-execution, respectively.

In contradistinction to the hierarchical view on plan execution, Hayes-Roth and Hayes-Roth proposed the so-called opportunistic approach to planning. This non-hierarchical planning assumes that a plan is executed with the help of some kind of mental blackboard where pieces of information, relevant cues and possible sub-goals are stored. They claimed and showed that planning happens asynchronously and is determined by the momentary aspects of the problem. No fixed order of operations exists; the plan execution and the steps to be taken grow out of the problem state at hand. When planners solve a planning problem, they may start with the top goal, but very soon they lose track of the goal structure, and then they continue to fulfill the goals that are reachable within reasonable time. The hierarchy very soon vanishes and what remains is some sort of heterarchy. For that reason this planning behavior is called opportunistic. Although the contrast with the hierarchical approach may be large, a strong similarity is also present. In the hierarchical as well as in the opportunistic approach, the fundamental assumption is that planning is problem solving, which can best be described in terms of problem spaces, production rules and goals. That is to say, the basic descriptive structure is the same for both, but real behavior within the problem space unfolds differently.

The discussion about the real nature of what humans do when they plan is not yet settled. In well-structured domains the dominant approach is hierarchical. In ill-structured and highly dynamic domains the strategy mostly resembles the opportunistic approach. The discussion is also closely connected with how we represent problems (in production rules or in schemata) and with the function of memory structures. We shall come back to these issues in the next section.
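The contrast between the two strategies can be made concrete with a small sketch. The toy planner below is our own illustration, not a model from Hayes-Roth and Hayes-Roth: the errands, the refinement table, and the distance-based attractiveness rule are all hypothetical assumptions. Hierarchical planning appears as fixed top-down refinement of goals into sub-goals; opportunistic planning appears as repeatedly taking whatever errand is momentarily most attractive.

```python
# Toy contrast between hierarchical and opportunistic planning.
# The errands, the refinement table, and the distance model are
# hypothetical illustrations, not data from the cited studies.

refinements = {                     # hierarchical: goal -> sub-goals
    "prepare dinner": ["buy starter items", "buy main course items"],
    "buy starter items": ["visit fishmonger"],
    "buy main course items": ["visit butcher", "visit grocery"],
}

def hierarchical(goal: str) -> list[str]:
    """Successive refinement: expand goals top-down, depth-first."""
    subs = refinements.get(goal)
    if subs is None:                # concrete action reached
        return [goal]
    plan: list[str] = []
    for sub in subs:
        plan.extend(hierarchical(sub))
    return plan

# Shops placed on a one-dimensional street; "start" is the origin.
street = {"start": 0, "fishmonger": 2, "butcher": 5, "grocery": 1}

def opportunistic(errands: set[str]) -> list[str]:
    """Blackboard-style control: always take the momentarily most
    attractive (here: nearest) errand, with no fixed order."""
    pending, here, plan = set(errands), "start", []
    while pending:
        nearest = min(pending, key=lambda s: abs(street[s] - street[here]))
        plan.append(f"visit {nearest}")
        here = nearest
        pending.remove(nearest)
    return plan

print(hierarchical("prepare dinner"))
# ['visit fishmonger', 'visit butcher', 'visit grocery']
print(opportunistic({"fishmonger", "butcher", "grocery"}))
# ['visit grocery', 'visit fishmonger', 'visit butcher']
```

The point of the sketch is that both variants produce a sequence over the same concrete actions; they differ only in the control structure that decides which step to commit to next, which is exactly where the hierarchical/opportunistic debate is located.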
3.3.2. Planning and cognition

Production planning and doing errands have in common that both require cognitive effort from the persons who plan. In a classical problem solving description, various states, such as initial, intermediate and goal states, can be formulated in both situations. The planners go through these states by performing all kinds of cognitive operations. The differences between the two situations are also very clear. First, the complexity of production planning is larger. Second, more people are involved in production planning. Third, self-imposing and self-pacing of constraints and goal functions are to a large extent absent in production planning. Fourth, on the other hand, mathematical and software techniques are mostly absent in the errands task; the only things one often uses are pencil and paper.

Picking up the question as to whether the tasks in both situations are the same, we return to the details of the similarities and dissimilarities between the production environment and the shopping list. Nominally, both tasks are called 'planning'. However, are the tasks comparable? The literature shows two extreme positions regarding this issue. The issue circles around the equivalence of planning and problem solving. Newell et al. (1958) describe the planning method as part of a general problem solving technique. Later, in 1972, Newell and Simon renamed the planning method problem abstraction, necessary if the problem was not solvable within its original state space description. Because planning as well as problem solving means searching for routes, i.e., sequences of actions that lead to a solution or a goal state, the explicit distinction between planning and problem solving disappears in the later research papers of Newell and Simon. Planning is just one very interesting example of the general problem solving approach.

Another position has been formulated by Das et al. (1996). They argue against the "planning is a subset of problem solving" approach by saying that a difference exists between problems to prove and problems to find. According to Das et al. (1996, p. 40), 'Planning is a more pervasive, general regulating process than problem solving, with problem solving being a part of a planning process.' Planning includes anticipation and overview and refers to future actions, whereas these components seem to be absent in problem solving.

Das et al. have a point in one aspect of this debate. An enigmatic element in the problem solving approach of Newell and Simon has always been the starting point of the problem solving process. How does a problem solver construct a problem space? Where does the choice for a particular problem space come from? Put in other words, why does a problem solver construct this particular problem space and not another? In terms of Newell and Simon, the question is how a task environment gets its representation in a state space description. It is easy to say that one
has a new problem here, which requires a second-order state space description. Although this might be true in the strict sense of the word, it does not solve the issue. Perhaps something like what Das et al. (1996) called 'overview' or 'having a higher perspective' is necessary here. Therefore, it might be insightful to distinguish planning as second-order problem solving from 'ordinary' problem solving.

For the moment we distinguish two perspectives. The first is that of Newell and Simon, indicating that planning is a kind of problem solving, comparable to, for example, diagnosing or monitoring. The second is that of Das, saying that problem solving is a kind of planning. The first perspective subsumes planning under problem solving, whereas the second implies that planning involves cognitive activities apart from problem solving. Although this discussion may seem futile, its relevance is evident if one talks about task support with the help of artificial intelligence. Determination of the task and of the kind of intelligence that is involved are prerequisites.

Cognitive models of planning show how problem solving, planning, and information processing relate. Das et al. (1996, p. 27) state about this that "it is the plan that controls human information processing and supplies patterns for essential connections between knowledge, evaluation, and action". This very generic description can be extended by the approach of Newell and Simon (1972). They describe planning as a system of heuristics that is used by their general problem solver (GPS) "to construct a proposed solution in general terms before working out the details. This procedure acts as an antidote to the limitation of means-ends analysis in seeing only one step ahead" (op. cit., p. 428). Planning heuristics are used to guide action when a problem is too difficult to solve by means-ends analysis. Newell and Simon assume the following steps in planning (op. cit., p. 429):

(1) Abstracting by omitting certain details of the original objects and operators;
(2) Forming the corresponding problem in the abstract problem space;
(3) When the abstract problem has been solved, using its solution to provide a plan for solving the original problem;
(4) Translating the plan back into the original problem space and executing it.

Complexity is reduced by leaving out details and reasoning by analogy. In this sense, planning is a way of problem solving.

Earlier models of planning presume that planning is always a hierarchical process that proceeds according to successive refinement. Sacerdoti (1975) implemented
such an approach in his computer program NOAH. In this view, planning is performed by recursively decomposing goals into sub-goals until a sub-goal can be reached by elementary actions. This paradigm is contradicted by Hayes-Roth and Hayes-Roth (1979). In their line of reasoning, they first argue "that planning processes operate in a two-dimensional planning space defined on time and abstraction" (op. cit., p. 312). In those terms, successive refinement would always work top-down from high to low abstraction and forward in the time frame of the plan. Thinking-aloud protocols from different subjects performing planning tasks show that this is not always the case. Hayes-Roth and Hayes-Roth found planning actions that they call 'opportunistic planning'. The subjects do not work solely linearly but appear to switch between levels of abstraction and to move both forward and backward in time in successive reasoning steps. Hayes-Roth and Hayes-Roth (1979) propose a theoretical framework for cognitive planning that incorporates this behavior. Some behavior that can be explained by their model is multi-directional processing in addition to top-down processing, incremental planning, and heterarchical plan structures. According to Hayes-Roth and Hayes-Roth, the choice of a planning strategy depends on three variables: the problem characteristics, individual differences, and expertise. This indicates that there is no single best way to plan. Task strategies within a domain depend on individual differences and change over time as experience increases.

Riesbeck and Schank (1989) argue that planning is based on scripts. Instead of thinking up a new plan for each problem, humans try to find a plan that was used for a previously solved, comparable planning problem. The basic planning activity is then more adaptation than construction. In this paradigm, planning is about memory, indexing and learning (Hammond, 1989). These issues are very much interrelated. Plans should be stored in memory in such a way that it becomes easy to find an existing plan on the basis of a comparison of the new goal with already handled goals. There are two senses of learning in the case-based planning paradigm. First, solutions must be remembered so that they can be used for new problems. Second, a failure to execute the plan provides an indication that the knowledge that the planner has of the execution world could be faulty. Thus, script models can be seen as adding learning to the paradigms already discussed.

Together, the three paradigms that were discussed provide a cognitive model for human planning. In this model, planning is about how to find the actions that solve a problem or, more generally, reach a goal. The process of planning is not neatly hierarchical but switches in level of abstraction and in the time frame under consideration. The process itself is about formulating goals, finding similar solved goals, finding existing plans, adapting plans, and storing plans
The study of cognitive planning circles around three issues. The first is whether planning is a form of problem solving or a part of problem solving; a position in this debate has repercussions for the analysis of the cognitive strategies and processes involved. The second issue concerns the representation structures people use. Are the representations production rules, with the cognitive infrastructure that belongs to them, such as conflict resolution and chunking mechanisms? Or are they schemata and scripts, with their corresponding cognitive mechanisms? The third issue concerns the cognitive processing in solving planning problems: do people proceed hierarchically or opportunistically? This issue is related to the degree of structure in the domain and cannot be settled beforehand. Other issues can be related to the ones mentioned here. As can be imagined, it is very difficult to settle these debates once and for all. For that reason, and because of the better availability of computers, issues in planning were transferred to the field of Artificial Intelligence. Here the debate continues in other terms, as we will discuss, but now in relation to the division of tasks between computer support (and algorithms) and human planners.

3.3.3. Planning and Artificial Intelligence

Planning in Artificial Intelligence is closely related to the problem solving approaches described in the previous section. The contribution of Artificial Intelligence is that it introduces computer programs that simulate and test problem solving strategies. Although the distinction between problem solving approaches and Artificial Intelligence is somewhat blurred, contemporary methods in Artificial Intelligence relate less to human problem solving and more to finding technological solutions for intricate problems. Much of the planning research in Artificial Intelligence stems from the wish to let autonomous agents (such as robots) perform tasks without prescribing how the task should be carried out. Most Artificial Intelligence methods, whether they are called algorithms, procedures, or heuristics, are based on state space descriptions. An agent or actor finds itself in a state in which it can perform a limited number of actions. An action changes the state, after which the agent can again perform a number of actions (Meystel, 1987). The agent keeps choosing and performing actions until the state it reaches somehow satisfies its goal. Planning is one way in which the agent can reach its goal (other ways are, for example, trial and error or full search). To make a plan, an agent somehow simulates the actions it will take beforehand. The original link to physical entities has been loosened somewhat, so planning agents are now often just computer programs that find a plan but do not necessarily execute it.
Interestingly, in the Artificial Intelligence planning literature, planning agents are themselves called planners. This, of course, causes a Babel-like confusion when Artificial Intelligence researchers talk with, or are read by, organization scientists. The disadvantage of a state space approach is that it requires a lot of information storage: for each operation in each state, the resulting state must be known. STRIPS (Fikes and Nilsson, 1971) resolves this issue by separating actions explicitly from state descriptions and adding a condition to each action. The model then consists of a world described as a number of characteristics that can be true or false in any specific state (for example, a door is open or closed), and a number of actions that can be applied (for example, opening or closing the door). In each state, all rules must be considered to determine which actions can be taken. An action can be taken if its corresponding condition is true, and the action is described by the characteristics it changes. For example, take the rule: if the door is open (condition), then close it (action); the door becomes closed, which is a change in the world, so that the state of the agent changes. In this modeling paradigm, planning is searching for a sequence of actions that will bring the agent from its current state to the goal state. The way in which a planner finds this sequence is the paramount concern of Artificial Intelligence planning approaches. Rich and Knight (1991) describe five functions that a planner of this kind must be able to perform:

- Choose the best rule to apply next, based on the best available heuristic information.
- Apply the chosen rule to compute the new problem state that arises from its application.
- Detect when a solution has been found.
- Detect dead ends so that they can be abandoned and the system's effort directed in more fruitful directions.
- Detect when an almost correct solution has been found and employ special techniques to make it totally correct.

Planners do not necessarily have to reason with all available information. To connect this to the previous section: models of human problem solving have provided researchers in Artificial Intelligence with starting points for their planners. Examples are the general problem solver (GPS), which constructs a proposed solution in general terms before working out the details, the opportunistic planning paradigm, and script-based planning. Here it becomes clear that models of human problem solving are closely related to Artificial Intelligence, because it is no exception that research in human problem solving is initiated from the wish to build intelligent artificial agents.
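For illustration, the door example above can be written down in a STRIPS-like fashion, with each action given as a condition plus the characteristics it adds and deletes, and planning as a search for an action sequence. The breadth-first search and all identifiers are our own assumptions rather than the original STRIPS implementation; the sketch also exhibits several of the five functions listed above (rule application, solution detection, abandoning already-explored paths).

```python
# A minimal sketch of STRIPS-style planning over the door example in the
# text. States are sets of true facts; each action has a precondition and
# the facts it adds and deletes. The breadth-first search and all names
# are illustrative assumptions, not the original STRIPS implementation.
from collections import deque

ACTIONS = {
    "close door": {"pre": {"door open"},   "add": {"door closed"}, "del": {"door open"}},
    "open door":  {"pre": {"door closed"}, "add": {"door open"},   "del": {"door closed"}},
    "walk through door": {"pre": {"door open", "outside"},
                          "add": {"inside"}, "del": {"outside"}},
}

def plan(initial, goal):
    """Search for an action sequence leading from the initial state to a
    state that satisfies the goal."""
    frontier, seen = deque([(frozenset(initial), [])]), set()
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                       # solution detected
            return steps
        if state in seen:                       # already explored: abandon path
            continue
        seen.add(state)
        for name, act in ACTIONS.items():
            if act["pre"] <= state:             # condition holds in this state
                new_state = (state - act["del"]) | act["add"]
                frontier.append((frozenset(new_state), steps + [name]))
    return None

if __name__ == "__main__":
    print(plan({"door closed", "outside"}, {"inside", "door closed"}))
    # -> ['open door', 'walk through door', 'close door']
```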
There have been numerous extensions and elaborations of the original STRIPS paradigm. A search method that has received much attention is nonlinear planning. A plan is nonlinear if it has multiple sub-plans that are worked on simultaneously instead of sequentially. Another well-known technique is constraint posting. In contrast to the state space model, a state is here not a description of the 'world' but a collection of operators. Such a collection of operators represents a partial or complete plan, and the operators to change from one state to another are the addition or deletion of plan operators.
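A minimal sketch of constraint posting could look as follows: the 'state' is a partial plan, i.e., a set of plan operators plus ordering constraints, refined by adding operators and constraints. All identifiers and the toy errands steps are our own illustrative assumptions.

```python
# A minimal sketch of constraint posting: the "state" is not a world
# description but a partial plan, i.e. a set of steps plus ordering
# constraints, refined by adding steps and constraints. All names and the
# toy domain are illustrative assumptions.
from itertools import permutations

class PartialPlan:
    def __init__(self):
        self.steps = set()          # plan operators added so far
        self.orderings = set()      # pairs (a, b): a must precede b

    def add_step(self, step):
        self.steps.add(step)

    def add_ordering(self, before, after):
        self.orderings.add((before, after))

    def linearizations(self):
        """All total orders of the steps that respect the constraints."""
        for seq in permutations(sorted(self.steps)):
            if all(seq.index(a) < seq.index(b) for a, b in self.orderings):
                yield list(seq)

if __name__ == "__main__":
    p = PartialPlan()
    for s in ("buy bread", "buy fish", "pay parking"):
        p.add_step(s)                       # posting operators
    p.add_ordering("buy bread", "pay parking")
    p.add_ordering("buy fish", "pay parking")
    # bread and fish remain unordered: two sub-plans worked on "simultaneously"
    for lin in p.linearizations():
        print(lin)
```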
The development of techniques within Artificial Intelligence not only concerns planning but is also related to scheduling. If, in the combination of the cognitive starting points and the growing availability of algorithms, one is able to mimic human intelligent performance, it goes without saying that the algorithms might have some value of their own that can be used for optimization problems. Scheduling support is then the name of the game, and AI is used in settings where the emphasis has shifted from an interest in intelligence to an interest in optimal solutions. If one then goes one step further and tries to use these techniques and algorithms in the support of human planners who use their own cognitive systems, the circle is complete, but the cognitive starting point has disappeared. We shall come back to this line of reasoning in the conclusion, but first we end this section by returning to some aspects of the comparison of errands and production planning. The discussion on cognition, planning, and artificial intelligence shows that there is not much uniformity in the approaches and points of view: there are few agreements and many disagreements. There is agreement that the coordinating entity is a natural or artificial intelligent unity. Another agreement is that the system's resources would be overloaded if every possible step had to be attended to; therefore, part of the processing in the system has to be unconscious or automatic. If we return to our initial examples of production planning and doing errands, the differences are dominant. In comparison with the production situation, the domain that is relevant for individual (or 'life') planning is relatively simple. In an errands task, at most three different object types are involved. Doing errands involves time; that is to say, the activities are ordered in a certain periodic sequential format, which is of course related to various starting and ending times. Doing errands normally means that one has to go from location to location, so the second object type that has to be planned is location. The third object type is the products that have to be bought. As indicated earlier, not every combination is possible: if the products are interlinked because of the courses of a dinner being prepared, a change in products for the first course has repercussions for the other courses. This means that planning in the case of errands is relatively simple, also because there is a connection between products and locations (shops). If we continue the comparison with the production planner, the complexity of the task for an individual planner is also low. The possible sub-tasks are either tightly interconnected, as for example in planning to go home by bicycle, or very limited, as in the example of doing errands. In an errands task one usually makes a list of products; this is just a matter of enumeration. Another sub-task is to communicate or negotiate about the composition of the products on the list, which is usually done before the shopping actually starts. A third sub-task is going along the list of shops, which is normally nothing more than executing a consecutive pattern of actions. A fourth sub-task is monitoring the actual process, which includes replanning the order of actions if the product specification has changed in the meantime. Sub-tasks that are necessary in the production situation, such as doing administration, listing an inventory of constraints and goal functions, and doing more or less complex calculations or computations, are irrelevant in the errands task. There is also a skewness in the division of time spent on the sub-tasks: enumeration, communication, and monitoring take less than 10% of the time spent on the errands; almost all the time is spent getting the products from the shops.
4. Analysis of similarities and differences

Until now the discussion has been about the cognitive processes of planners, the kinds of planning tasks they are involved in, and the solutions in terms of procedures and algorithms. From a generic perspective, the planning task itself can be called a synthetic or configuration task. It is well known that these kinds of tasks are very difficult to complete, by humans alone as well as with the support of software. Although the usual idea in software development is that the characteristics of the task matter much more than the characteristics of the task agent, we want to argue that in the case of planning this picture should be more balanced. The reason is a simple one: until now, the planning task could not be handled by software alone. In all cases the influence, that is to say the interpretation, correction, and adjustment capabilities, of humans is indispensable. This holds for transportation planning, for all kinds of production planning, and for staff or manpower planning. We think that a larger part of the complexity is related to the problem solving characteristics of the natural agent. By studying these characteristics in greater detail it may be possible to determine the peculiarities and qualifying properties, which in turn may be implemented or compensated for by (supporting) software.
However, we will first discuss the planning task itself in a little more detail before we deal with the various aspects. From a task perspective, realizing a suitable plan or solving a planning problem requires three nearly decomposable phases. In state space terms, the first phase is the design of a (complex) initial state, of goal state(s), and of admissible operations to change states. The second phase is, given the admissible operations, to search for an (optimal) solution. The search process may be done by exhaustive computation or by adequate evaluation functions combined with calculations. In many cases search does not yield an optimal solution; the most one may get is a satisfying solution, and even that is often not possible. Then the third phase starts, in which one goes back to the initial state and the admissible operations and changes these in such a way that a solution can be found. This way of dealing with the third phase is the formally correct one: if one does not get a solution in the second phase, one adapts the initial state and starts looking for a solution again. This is how the planning process should be done. However, there is another, more usual, way of dealing with the third phase, especially when humans have to make a plan. If the second phase does not give an optimal or satisfactory outcome given the constraints and goal functions, the planner is by then so involved in the planning process that, because he has a glimpse of a solution given the constraints, he takes his idea of a solution as fixed and changes the initial state and operations, that is, the constraints, in such a way that they fit the preconceived solution. In other words, the sequence of phases (1) initial state, (2) search without finding a solution, and (3a) starting again with a new initial state follows the so-called 'closed world' assumption. This is the necessary sequence if algorithms have to be applied. The other, essentially different, order of phases can be named the 'open world' approach. It consists of (1) initial state, (2) search that does not yield an established solution but does fix an idea of one, and (3b) adjustment of the initial state according to this fixed solution. This sequence of activities is what human planners, whether in industry or doing errands, frequently and with great success perform.
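The contrast between the two orderings of the phases can be made explicit in a sketch like the following, where the closed-world loop restarts the search under a revised problem definition, while the open-world shortcut bends the constraints toward a preconceived solution; the representation of problems as candidate solutions plus constraint predicates, and all identifiers, are our own assumptions.

```python
# A minimal sketch contrasting the closed-world and open-world orderings of
# the three phases. A "problem" is a set of candidate solutions plus a set
# of constraint predicates; all names are illustrative assumptions.

def search(candidates, constraints):
    """Phase 2: return a candidate satisfying every constraint, or None."""
    for c in candidates:
        if all(check(c) for check in constraints):
            return c
    return None

def closed_world(candidates, constraints, revisions):
    """Phases (1)-(2)-(3a): if search fails, adopt a revised problem
    definition and search again from scratch."""
    for constraints in [constraints] + revisions:
        solution = search(candidates, constraints)
        if solution is not None:
            return solution
    return None

def open_world(candidates, constraints):
    """Phases (1)-(2)-(3b): take the best near-solution as fixed and drop
    whichever constraints it violates, so that the 'world' fits it."""
    preferred = max(candidates, key=lambda c: sum(ck(c) for ck in constraints))
    kept = [ck for ck in constraints if ck(preferred)]
    return preferred, kept

if __name__ == "__main__":
    candidates = [8, 3, 5]
    constraints = [lambda x: x % 2 == 0, lambda x: x < 5]  # jointly unsatisfiable
    print("closed world:", closed_world(candidates, constraints,
                                        revisions=[[lambda x: x < 5]]))
    sol, kept = open_world(candidates, constraints)
    print("open world:", sol, "constraints kept:", len(kept))
```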
We now have nearly all the ingredients available to make a reasonable comparison between the two perspectives on planning. The comparison is between the individual who makes a plan for himself and the individual who makes a plan in and for the organization. The former concerns the task of doing errands, the latter the production planner in industry. The aspects used for the comparison can be divided into three subgroups: (a) the kind of planning entity, (b) the characteristics and processing of the planning entity, and (c) the domain characteristics. We will first go into the details of the components of each of the aspects and then present the comparison in a matrix (see Table 2).

4.1. Kind of entity

By a planning entity we do not mean the product or material that has to be planned, but the entity that does the planning. In earlier sections we discussed the production planning task and the task of doing errands. Both tasks are done by entities; let us call them agents. Until now we have only discussed human agents who do the planning; in both cases we are dealing with so-called single, natural intelligent agents. Two elaborations can be made regarding these agents. The first concerns artificial intelligent agents as compared to natural intelligent agents.
Table 2. Comparison on three dimensions of planning aspects

                                     Human who plans for himself                  Human who plans in an organization

Entity kind
  Alone/group                        Alone                                        Alone and group
  Natural/artificial                 Natural                                      Natural and artificial

Entity/process characteristics
  Information processing             Internal                                     Internal and external
  Representation                     Internal: hidden and mental                  External: various and coded
  Communication                      Internal: hidden and mental                  Internal and external: mental and coded
  Modeling                           AI-models (temporal, case-based reasoning)   OR-models
  Relation planning, execution,      Intertwined; flexible adaptation after       Decoupled; inflexible with respect to
    and control                      unforeseen events                            adaptation

Domain characteristics
  Problem space                      Ill-defined                                  Strive towards well-defined
  Planned entities                   Sequence of own activities                   Alignment between others' activities,
                                                                                  capacity, orders
  Constraints/goal functions         Self-paced; self-imposed; easily revisable   Externally imposed, non-paced, and
                                                                                  difficult to change
  Research aimed at                  Simulating planning process; operating       Support/completion of the planning
                                     autonomous agent                             process
The second has to do with single versus multiple agents, whether natural or artificial. We may call the latter compound planning situations. In many cases in organizations there is not just one planner who does the job: many planners are active, sometimes everyone doing part of the overall planning task, sometimes through a division of sub-tasks over planners. To make it even more complicated, the most realistic situations are the ones where natural and artificial agents work together; here we could speak of hyper-compound planning situations. If we put these analytical distinctions and observations together, we end up with the following question: what comparisons can be made concerning planning by entities such as natural and artificial agents, whether they are individuals or groups and organizations? How do the five types, natural single, artificial single, natural compound, artificial compound, and hyper-compound agents, differ? In the matrix (Table 2) the hyper-compound agents are omitted.

4.2. (Processing) characteristics of the entity

From a problem solving perspective, the phases in planning have various consequences for the kind of planning entities that do the job. We can ask questions related to the various aspects that make planning by single/compound and by natural/artificial agents different. The aspects that will be discussed in detail and that are used for the comparisons are the following: the information processing mechanism, its architectural components such as memory and attention, its representations, the importance of communication, of meaning and interpretation, and the characteristics of coordination.

4.2.1. Information processing mechanism

An information processing mechanism operationalizes the way information is selected, combined, created, or deleted. The mechanism itself needs a physical or physiological carrier. Various possibilities are available, such as the brain as our neurological apparatus, the layered connection system of a chip in a computer, a human individual in an organization, or a group of interconnected individuals in an organization. The most relevant distinction is that between internal and external mechanisms. By internal we mean that there is no direct access to the system from outside: internally controlled but invisible processes take place in the system. The cognitive system and the chip are both internal, but they differ in the sense that the latter is designed, which means that its operations are verifiable. External mechanisms are information processing mechanisms such as groups of individuals or organizations.

4.2.2. Architectural components

An architecture is a set of components whose arrangement is governed by principles of form or
function. A cognitive architecture consists of memory components, attention processors, sensory and motor components, and various kinds of central processors. The division is by function, and the components are all implemented in neurological structures in the brain. Two other material structures for architectural layout are the chip and the constellation of a group of individuals. The same kinds of components can be discerned for the computer, consisting of memory, sensory and motor components, and central processors. For a group of individuals the architecture is different, because the constituting elements, the individuals, are similar, but their roles and tasks differ. Again, the discussion about the character of the architecture boils down to a discussion about internally or externally defined: the cognitive architecture is internal, whereas chips and groups of people can be dealt with externally.

4.2.3. Representations

In cognitive science the conceptual framework to deal with representations can be found in the approaches of classical symbol systems, connectionism, and situated action (Newell, 1990; Dölling, 1998; Smolensky, 1988). The basic idea is that humans, as information processing systems, have and use knowledge consisting of representations, and that thinking, reasoning, and problem solving consist of manipulations of these representations at a functional level of description. A system that internally symbolizes the environment is said to have representations at its disposal. Representations consist of sets of symbol structures on which operations are defined. Examples of representations are words, pictures, semantic nets, propositions, or temporal strings. A representational system learns by means of chunking mechanisms and symbol transformations (Newell, 1990; Jorna, 1990). A system is said to be autonomous or self-organized if it can have a representation of its own position in the environment; this means that the system has self-representation.

4.2.4. Communication, meaning and interpretation

Communication means the exchange of information between different components. Depending on whether we are talking about internal or external information processing entities, communication imposes restrictions on the kinds of symbols or signs that are used for the exchange. If we relate this to the aforementioned discussion about representations, the various kinds of signs have different consequences. Clearly, sign notations are more powerful, but also more restricted, than sign systems, which in turn are more powerful than mere sign sets. Unambiguous communication requires sign notations, but we know that not all communication between humans is in terms of notations. If computers require sign notations and humans work with sign systems, then, if the two have to communicate, the one
has to adjust to the other. Until recently, most adjustments consisted of humans using notations. Now, interfaces are designed that allow computers to work with less powerful (in terms of semantic requirements) but more flexible sign systems. This means that computers can, to some extent, deal with ambiguity. For mental activity no explicitness (channels, codes, etc.) is necessary; for planning as an external task it is essential.

4.2.5. Coordination

Coordination concerns attuning or aligning various entities that are not self-evident unities. Information processing in a cognitive system is a kind of coordination mechanism (with no direct access): it is internal or mental, the coordinating processor is cognition itself, and no explicit code is necessary. If the code is made explicit and obeys the requirements of a notation, we can design an artificial intelligent agent that in its ultimate simplicity could be a chip. In the case of multiple entities that are not by themselves a unity, various coordination mechanisms can be found, such as a hierarchy, a meta-plan, mutual adjustment, a market structure, and many others (Thompson, 1967; Gazendam, 1993). The important difference compared to the single agent is that these coordination mechanisms are external (with direct access).

4.2.6. Planning, execution and control

Making a plan, executing it, and monitoring its outcomes in reality are valued differently in the errands task and in production planning. In the most extreme case in industry, the tasks are completely decoupled, with, almost always, inflexibility with respect to adaptation as a consequence. For the errands task the possible division into sub-tasks may be interesting, but in reality these sub-tasks can be intertwined, with flexible adaptation after unforeseen events. If the controlling entity is itself a unity, discussions about transfer, communication, the sign systems used for communication, and representations are almost trivial. This does not make the planning task itself simpler; it only prevents the occurrence of ambiguity, interpretation, and meaning variance.

4.3. Domain characteristics

Given our definition of planning as the alignment or attunement of instantiations of various object types, complexity is partly explained by the number of object types involved: the more types, the greater the complexity. An acceptable or optimal solution is defined by the constraints and goal functions. However, as is well known, constraints require prioritization, constraints are often changed into goal functions, and constraints are often contradictory. We can also formulate this in terms of state space descriptions: a well-defined initial state and a
well-defined goal state make it easier to complete a search within reasonable time than ill-defined states do. This can also be put as follows: human planners like to work with open world situations, whereas search and algorithms require closed world assumptions. That is the reason that humans have no problems with the errands task: they are able to adjust the world according to their present requirements. This is not the case for production planners. The object types they have to deal with have to be explicit and quantifiable, and the same holds for the constraints and goal functions. However, although the closed world assumption is relevant for production planning, planners in practice know that they have to improvise and be flexible, because otherwise no workable plan is realized.
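The effect of the number of object types on complexity can be illustrated with a toy computation; the domain sizes below are invented solely to make the point that the raw combinatorics grow as a product over the object types to be attuned.

```python
# A toy illustration of how the number of object types drives the size of
# the search space: each decision picks one instance per object type, so
# the space grows as a product of domain sizes. All numbers are invented.
from math import prod

errands = {"time slots": 4, "locations": 5, "products": 10}          # 3 types
production = {"orders": 50, "machines": 8, "shifts": 21,
              "operators": 30, "tools": 12}                          # 5 types

for name, domains in [("errands", errands), ("production", production)]:
    print(name, "raw combinations per decision:", prod(domains.values()))
```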
5. Conclusions

Artificial Intelligence techniques are powerful tools that can be used in a wide range of applications. However, given the present state of planning support, it might from a cognitive point of view be more appropriate to talk about Intelligent Artifacts than about Artificial Intelligence, because contemporary Artificial Intelligence techniques hardly use models of the human cognitive functions that underlie our intelligence. In this sense, the word 'Intelligence' says more about the designer than about the resulting system. It is this shift in emphasis, and its consequences for the interaction between the planner and the computer, that we have dealt with in this article. The kind of planning that someone does in an organization and the kind of planning that someone does to perform his own task differ especially with regard to the level of description. This was made clear with the recursive planning paradox: one of the activities that must be done in order to accomplish the planning task is to plan the planning activities. Hence, organizational planning differs from individual planning. The debate whether planning is part of problem solving or vice versa then becomes a trivial one, since we can only answer: both. The conclusion of the different levels of discussion is that there are two kinds of planning: 'life' planning and organizational planning. This leads us to the question whether, or to what extent, artifacts that are created on the basis of the one can be used for the other. Let us start with the similarities. For both kinds of planning we can state that a model of the underlying problem can be made in terms of state space descriptions with operators. In fact, this is how most AI planning techniques reason, and also how the cognitive task performance of humans is often represented. But are these similarities enough? After all, there are differences, as we saw in Section 4. The state space of organizational planning will typically
be large, with multiple dimensions to be attuned (e.g., orders, machines, time, human operators). Such a state space (and its underlying dimensions) must be understood and agreed upon by multiple humans. Adjustment of constraints after an illegal state is entered would mean new negotiations within the organization. Planning and execution are separated because they are performed by different actors, and external memory (e.g., a Gantt chart or matrices) is used extensively in the planning process. These aspects all differ from planning as a cognitive activity. There, the number of dimensions that determine the state space is small, since we mostly deal only with activities that are sequenced in time. The state space is a temporary construct that exists for only one person, constraints can easily be revised, and planning and execution can run in parallel because there are no lengthy communication sessions to discuss the consequences of altering the plan. All communication is internal, that is to say, mental. Thus, supporting the task of a planner in an organization needs more than a technique based on how a human being performs the cognitive planning function. What we need, from a task support view, are techniques that take into account both the sub-tasks of planning and the human cognitive functions. Early AI models in the form of rule bases did this: the rules used by the planner can be modeled and implemented, and the outcomes of such an implementation are recognizable and understandable by the human planner whose rules were modeled. Furthermore, the results are understandable for other humans, considering that such rules represent to a certain extent the cognitive limitations of all humans. Newer AI planning techniques certainly implement functions that are found in (cognitive) human planning, such as opportunistic planning, dealing with incomplete information, successive refinement of plans, and reactive scheduling. The question is: are these techniques useful in task support? Our answer is: only in a very limited way. These techniques, like techniques from operations research, often have no intermediate outcomes that are meaningful to the scheduler, and, furthermore, generation techniques are only a (small) part of the task of a human planner. A production planner who works together with an implemented algorithm works in a hyper-compound planning situation in which natural and artificial planners have to (externally) coordinate their planning efforts. For artificial algorithms to be successful in such a situation, they need the following characteristics. First, they should be adapted to the problem space that the planner has of the situation. In other words, an algorithm should be able to communicate in terms of the dimensions that the planner uses to define the problem space, and of the operators that the planner uses to go from state to state. Second, they must allow an 'open
world'. This means that the algorithm must be able, at the planner's indication, to adjust the initial state, e.g., relax constraints, and search again for solutions in a perhaps structurally different problem space. Only then can we speak of truly intelligent planning support.
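A support loop with these two characteristics could look roughly like the following sketch, in which an algorithmic searcher reports constraint violations in the planner's own terms and the planner decides which constraints to relax between search rounds; the interaction protocol and all identifiers are our own assumptions, not a description of an existing system.

```python
# A minimal sketch of "open world" planning support: the algorithm searches,
# reports in the planner's own terms, and lets the planner relax constraints
# before searching again. All names are illustrative assumptions.

def search(candidates, constraints):
    """One closed-world search round over explicit candidate plans."""
    violated = []
    for plan in candidates:
        violated = [name for name, check in constraints.items() if not check(plan)]
        if not violated:
            return plan, []
    return None, violated   # report the last violations in the planner's terms

def support_loop(candidates, constraints, ask_planner):
    """Alternate search rounds with planner-driven constraint relaxation."""
    while True:
        plan, violated = search(candidates, constraints)
        if plan is not None:
            return plan
        to_relax = ask_planner(violated)         # planner decides what gives way
        if to_relax is None:
            return None                          # planner gives up
        del constraints[to_relax]                # adjust the 'initial state'

if __name__ == "__main__":
    candidates = [{"shift": "night", "hours": 9}, {"shift": "day", "hours": 10}]
    constraints = {
        "max 8 hours": lambda p: p["hours"] <= 8,
        "day shift only": lambda p: p["shift"] == "day",
    }
    # Stand-in for the human planner: always relax the first violated constraint.
    print(support_loop(candidates, constraints, lambda v: v[0] if v else None))
```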
References

Allen, J.F., Ferguson, G.M., Schubert, L.K., 1996. Planning in complex worlds via mixed-initiative interaction. In: Tate, A. (Ed.), Advanced Planning Technology. AAAI Press, Menlo Park, CA.
Anthony, R.N., 1965. Planning and Control Systems: A Framework for Analysis. Harvard, Boston.
Bakker, J., 1995. Classificatie en diagnose voor scheduling situaties (Classification and diagnosis for scheduling situations). Internal report, University of Groningen, Groningen.
Card, S.K., Moran, T.P., Newell, A., 1983. The Psychology of Human–Computer Interaction. Lawrence Erlbaum Associates, Hillsdale, NJ.
Currie, K., Tate, A., 1991. O-Plan: the open planning architecture. Artificial Intelligence 52 (1), 49–86.
Das, J.P., Kar, B.C., Parrila, R.K., 1996. Cognitive Planning. Sage, New Delhi.
Dölling, E., 1998. Semiotik und Kognitionswissenschaft. Zeitschrift für Semiotik 20 (1–2), 133–159.
Dorn, J., 1993. Task-oriented design of scheduling applications. In: Dorn, J., Froeschl, K.A. (Eds.), Scheduling of Production Processes. Ellis Horwood, Chichester, England.
Fikes, R.E., Nilsson, N.J., 1971. STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence 2, 189–208.
Flew, A., 1979. A Dictionary of Philosophy. Macmillan Press, London.
Fox, M.S., 1983. Constraint-directed search: a case study of job-shop scheduling. Ph.D. Thesis, Carnegie Mellon University.
Gazendam, H.W.M., 1993. Variety Controls Variety: On the Use of Organizational Theories in Information Management. Wolters-Noordhoff, Groningen.
Hammond, K.J., 1989. CHEF. In: Riesbeck, C.K., Schank, R.C. (Eds.), Inside Case-based Reasoning. Lawrence Erlbaum Associates, Hillsdale, NJ (Chapter 6).
Hayes-Roth, B., Hayes-Roth, F., 1979. A cognitive model of planning. Cognitive Science 3, 275–310.
Hoc, J.-M., 1989. Cognitive Psychology of Planning. Academic Press, San Diego.
Hofstede, G.J., 1992. Modesty in modeling: on the applicability of interactive planning systems, with a case study in pot plant cultivation. Ph.D. Thesis, University of Wageningen, Wageningen, The Netherlands.
Jorna, R.J., 1990. Knowledge Representation and Symbols in the Mind. Stauffenburg Verlag, Tübingen.
Jorna, R.J., Gazendam, H., Heesen, H.C., van Wezel, W., 1996. Plannen en Roosteren: Taakgericht analyseren, ontwerpen en ondersteunen (Planning and Scheduling: Task-Oriented Analysis, Design and Support). Lansa, Leidschendam.
McKay, K.N., Safayeni, F.R., Buzacott, J.A., 1988. Job-shop scheduling theory: what is relevant? Interfaces 18 (4), 84–90.
McKay, K.N., Safayeni, F.R., Buzacott, J.A., 1995. Common sense realities of planning and scheduling in printed circuit board production. International Journal of Production Research 33 (6), 1587–1603.
Meystel, A.M., 1987. Theoretical foundations of planning and navigation for autonomous robots. International Journal of Intelligent Systems 2, 73–128.
Mietus, D.M., 1994. Understanding planning for effective decision support. Ph.D. Thesis, University of Groningen, Groningen.
Miller, G.A., Galanter, E., Pribram, K.H., 1960. Plans and the Structure of Behavior. Holt, Rinehart and Winston, New York.
Myers, K.L., 1996. Advisable planning systems. In: Tate, A. (Ed.), Advanced Planning Technology. AAAI Press, Menlo Park, CA.
Newell, A., 1990. Unified Theories of Cognition. Harvard University Press, Cambridge.
Newell, A., Shaw, J.C., Simon, H.A., 1958. Elements of a theory of human problem solving. Psychological Review 65, 151–166.
Newell, A., Simon, H.A., 1972. Human Problem Solving. Prentice-Hall, Englewood Cliffs, NJ.
Pollack, M.E., 1996. Planning in dynamic environments: the DIPART system. In: Tate, A. (Ed.), Advanced Planning Technology. AAAI Press, Menlo Park, CA.
Posner, M.I. (Ed.), 1989. Foundations of Cognitive Science. MIT Press, Cambridge, MA.
Prietula, M.J., Hsu, W., Ow, P.S., 1994. MacMerl: mixed-initiative scheduling with coincident problem spaces. In: Zweben, M., Fox, M.S. (Eds.), Intelligent Scheduling. Morgan Kaufmann, San Francisco.
Rich, E., Knight, K., 1991. Artificial Intelligence. McGraw-Hill, New York.
Riesbeck, C.K., Schank, R.C., 1989. Inside Case-based Reasoning. Lawrence Erlbaum Associates, Hillsdale, NJ.
Russell, S., Norvig, P., 1995. Artificial Intelligence: A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ.
Sacerdoti, E.D., 1975. The nonlinear nature of plans. In: Proceedings of the Fourth International Joint Conference on Artificial Intelligence, pp. 206–214.
Sanderson, P.M., 1989. The human planning and scheduling role in advanced manufacturing systems: an emerging human factors domain. Human Factors 31 (6), 635–666.
Schartner, A., Pruett, J.M., 1991. Interactive job shop scheduling: an experiment. Decision Sciences 22, 1024–1046.
Schreiber, G., Wielinga, B., Breuker, J., 1993. KADS: A Principled Approach to Knowledge-based System Development. Academic Press, London.
Smith, S.F., 1992. Knowledge-based production management: approaches, results and prospects. Production Planning & Control 3 (4), 350–380.
Smith, S.F., 1995. Reactive scheduling systems. In: Brown, D.E., Scherer, W.T. (Eds.), Intelligent Scheduling Systems. Kluwer Academic Publishers, Boston.
Smith, S.F., Lassila, O., 1994. Toward the development of flexible mixed-initiative scheduling tools. In: Proceedings ARPA-Rome Laboratory Planning Initiative Workshop, Tucson, AZ.
Smith, S.F., Lassila, O., Becker, M., 1996. Configurable, mixed-initiative systems for planning and scheduling. In: Tate, A. (Ed.), Advanced Planning Technology. AAAI Press, Menlo Park, CA.
Smolensky, P., 1988. On the proper treatment of connectionism. Behavioral and Brain Sciences 11, 1–74.
Sundin, U., 1994. Assignment and scheduling. In: Breuker, J., van de Velde, W. (Eds.), CommonKADS Library for Expertise Modelling: Reusable Problem Solving Components. IOS Press, Amsterdam.
Thompson, J.D., 1967. Organizations in Action. McGraw-Hill, New York.
Van Dam, P., 1995. Scheduling packaging lines in the process industry. Ph.D. Thesis, University of Groningen, Groningen.
Veloso, M.M., 1996. Towards mixed-initiative rationale-supported planning. In: Tate, A. (Ed.), Advanced Planning Technology. AAAI Press, Menlo Park, CA.
Verbraeck, A., 1991. Developing an adaptive scheduling support environment. Ph.D. Thesis, University of Delft, Delft.
Wiers, V.C.S., 1997. Decision support systems for production scheduling tasks, Part I of a case study: analysis and task redesign. Production Planning & Control 8 (7), 711–721.
Wilkins, D.E., 1990. Can AI planners solve practical problems? Computational Intelligence 6 (4), 232–246.
Zweben, M., Fox, M.S., 1994. Intelligent Scheduling. Morgan Kaufmann, San Francisco.