Analogical reasoning in design processes

NICHOLAS V FINDLER

Department of Computer Science, State University of New York at Buffalo, 4226 Ridge Lea Road, Amherst, NY 14226 USA

We consider the design process as a goal-oriented activity for solving ill-structured problems. To avoid the combinatorial explosion of design alternatives, a control structure embracing heuristic ideas has to be established. One of primary importance is based on AR (analogical reasoning), ubiquitous in human problem-solving in general but hardly used at all in programming systems. First, we investigate AR detached from specific tasks and formulate some of its general principles. We then present a set of plausible working hypotheses and outline the contributive and hierarchical models. Further, advanced processes are discussed which can extend the scope of the above models. Some problems of implementation are followed by the description of practical results obtained in the exploration of a few simple tasks. Suggestions for further work and a summary conclude the paper.

The domain of design problem solving is ill-structured because:

• The measure of goal attainment is multidimensional and fuzzy
• A detailed knowledge of facts outside the universe of problem definition is required for the solution
• Alternative courses of action often cannot be enumerated and evaluated in a systematic manner, for lack of general guidelines.

(See Simon¹ for a detailed discussion of ill-structured problems.) A careful analysis is needed of the interplay between the two major contributing aspects of design:

• creativity and imagination (the 'art' in design)
• recognizing and satisfying rules and constraints (the 'science' in design)

No doubt, both rely on experience and if we want to obtain assistance from computers in the design activity, we have to formulate, in procedural terms, what kind of experience is used and how. Our ability to carry out design tasks is the result of continual learning. In the course of this, a knowledge base is constructed with reference to 'situations'. These describe associations between individual and combined design criteria (task features) on one hand, and solutions on the other. Learning makes sense because situations resemble each other and some information obtained in one situation is of use in others.


The numerous mechanisms of the learning process enable us not only to rely on a large stock of situations but also to extract their essential characteristics. By means of these characteristics, we can then select from our total experience a small number of situations which are the most similar to a new situation we are faced with.

Similarity is a very general and loosely defined concept. Different objectives may invoke different types of similarity. The relevance of resemblance is determined by the context of the problem given. Experience provides the foundation for discovering and recognizing similarity.

Each situation can be characterized by a combination of features. (The term 'feature' is used to represent one or several chunked properties. Constraints, and distance and adjacency requirements, centrally important in design problems, are treated as properties. Whereas properties are atomic and directly measurable, features can in general be observed as present or absent only. Features are important because they reduce the information-processing load².) At the simplest level, resemblance can be measured by the weighted sum of the common features two situations share. (Not all features of a situation contribute equally to the selection of some appropriate action for coping with the situation.) Further, there are numerous higher order similarities people discover and make use of. For example, in plane geometry, point and line (theorems of duality), circle and line (a line being a circle of infinite radius), and so on, are components of resemblance between certain problems. They serve in calling for similar theorems in proof procedures or construction tasks. Again, experience is used in discovering such higher order similarities.

Although the importance of similarities between problems has been recognized by both psychologists and computer scientists since early times (see, for example, Minsky's seminal work³), relatively little progress has been made in using the powerful ideas involved in AR. We have carried out some detailed but preliminary studies on the role of AR in problem-solving by computers⁴,⁵. Both theoretical and practical issues have direct implications for design processes as well. The ultimate objective is a general-purpose computer-based system that would:

• Accept problems (design tasks) in a proper format which could be changed to an effective internal representation
• Extract the relevant features of the above
• Recognize similarity between a new problem to be solved and one or several problems with known solutions
• Make use of such similarity in discovering a solution to the new problem 'efficiently'
• Augment its knowledge base by adding new problem-solution pairs, in which the solution is either found by the system or provided by the user.
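As a minimal rendering of the weighted-feature resemblance measure introduced earlier, consider the following sketch; the feature names and weights are invented for illustration and are not from the paper:

```python
def similarity(features_a, features_b, weight):
    """Weighted count of the features two situations share.

    features_a, features_b: sets of feature labels describing two situations.
    weight: maps a feature to its learned relevance; not all features
    contribute equally to selecting an appropriate action.
    """
    shared = features_a & features_b
    return sum(weight.get(f, 0.0) for f in shared)

# Hypothetical example: two small design situations.
s1 = {"right-angle", "parallel-walls", "fixed-perimeter"}
s2 = {"right-angle", "parallel-walls", "minimum-area"}
w = {"right-angle": 1.0, "parallel-walls": 2.0, "fixed-perimeter": 0.5}
print(similarity(s1, s2, w))  # 3.0: two shared features, weighted
```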

WORKING HYPOTHESES OF ANALOGICAL REASONING

To make progress toward the above objective, one has to make certain plausible assumptions which establish the rationale of AR:

• Each design problem is describable as an (ordered) collection of certain, possibly overlapping, fundamental features.

We have put the word 'ordered' in parentheses to indicate that it is a desirable goal, because it reduces the search space, but it may not always be possible to reach a solution. The decision as to what constitutes a feature is left to the user, although we can envisage a system that starts working in different problem domains using trial and error and, as experience is accumulated, selects more and more appropriate features.

• Solutions are associated with respective problems in a well defined, deterministic manner.

This assumption is stronger than it sounds. Causality, inherently underlying all scientific investigations, does not imply that one can measure all relevant variables, with sufficient precision, to establish reliable explicatory and predictive relationships. Here we have assumed that the features of the problems are identifiable and are strongly enough (but not necessarily uniquely) correlated with the solutions that the latter can be directly derived from the former. Constraint satisfaction in design problems means that the search must first yield feasible solutions; a separate program component is responsible for finding an optimum solution from among them. This fact sometimes makes people interpret design problem-solving as 'nondeterministic', a somewhat misleading connotation.

• In the task domains of interest to us, similar problems have similar solutions.

Similarity, of course, must not be defined in a circular manner (ie 'problems are considered similar if their solutions are similar') but must be measured along certain dimensions that depend on a priori features of problems on one hand and of solutions on the other.

• When two problems have similar solutions, the features present in one problem but not in the other are likely to be of lesser importance. In turn, features shared by problems which have similar solutions are likely to be important.

The last hypothesis can be strengthened quantitatively, in the sense that the more problems having similar solutions share a feature, the more important that feature is likely to be. Also, the more features shared by two problems, the more similar their solutions will be.

The procedure based on the above hypotheses will have a flavour of approximation, for two reasons. First, the knowledge base on which the search for similarity operates is necessarily limited. Second, for nontrivial problems, the solution steps will have gaps in between, which have to be filled by some heuristically guided trial-and-error method, or control must be transferred to another problem-solving component of the program at that stage.

CONTRIBUTIVE AND HIERARCHICAL MODELS OF AR

We wish to study the problem of what information should be extracted from raw experience consisting of descriptions of problems and solutions, and how this information can then be used in determining solutions to new problems. One may consider problem-solving as the construction of a mapping function either directly between problems and solutions, or indirectly between problems whose solutions are similar. Let us call these functions of type I and type II, respectively.


Type I functions

A function of type I uses the features of a problem as independent variables, and its value is the solution to the problem. This function is constructed gradually by recording the number of associations, over many problems, between particular features and solutions. In general, the solution will be represented by an ordered sequence of steps (for example, applications of transformation operators, or building up larger and larger subsystems). This way, it is not necessary to accept the whole solution as a starting point in the attempt to solve a new problem: tentative solutions will be composed only of solution segments of varying length. The frequency of prior usage ('score') of solution segments associated with a given problem feature provides an important heuristic guide: the solution segments are ordered and offered for testing according to the score values. The testing for adequacy in a solution may be done by another system component using, for example, logical or analytical techniques.

We wish to point out how similar this approach is to the development of empirical sciences. In medicine, for example, the long-term process of forming causal relations between symptoms and diagnoses, and the ordering of components of cure to diagnoses, seems to have taken place along such lines. The learning process in our model is also analogous to the real-life event. Every time a new problem is solved, its features are separated and identified, and the knowledge base is updated accordingly.

We call the above paradigm the contributive model because we consider the relevant features of the problem to have contributed to the selection of solution (components). The contributions have a cumulative effect: the more problems with a given feature have used a certain solution component, the more likely new problems with the same feature will use that solution component. The feature in question demands the use of that solution component and competes with other features offering their contributions. It is also usually necessary to decompose a complex problem into conjunctive and disjunctive subproblems, to make the search process more efficient and the consequent learning process of more use. The decomposition is, of course, task-dependent and is the responsibility of another component of the problem-solving system.

In many task domains, certain features of problems have a dominant role; their absence or presence may determine a large part of the solution, regardless of the contributions of other features. In these domains, there is a well defined hierarchical relationship among the features to be considered in order to identify the solution correctly. The contributive model would prove to be extremely inefficient in such cases; we have to find an additional source of power for such complicated search processes.
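Before turning to that additional source of power, the contributive bookkeeping itself can be made concrete with a minimal sketch; all feature and segment names are invented, and the ordering of segments within a solution is abstracted away:

```python
from collections import defaultdict

class ContributiveModel:
    """A minimal sketch of the contributive model (names invented).

    Each solved problem contributes counts linking its features to the
    solution segments that worked; candidate segments for a new problem
    are ranked by these accumulated scores and tested in that order."""

    def __init__(self):
        # score[feature][segment] = number of past problems in which
        # 'segment' was part of the solution of a problem having 'feature'
        self.score = defaultdict(lambda: defaultdict(int))

    def learn(self, features, solution_segments):
        """Update the knowledge base from one solved problem."""
        for f in features:
            for seg in solution_segments:
                self.score[f][seg] += 1

    def propose(self, features):
        """Rank known solution segments by their cumulative score over the
        features of the new problem; highest scores are offered first."""
        total = defaultdict(int)
        for f in features:
            for seg, n in self.score[f].items():
                total[seg] += n
        return sorted(total, key=total.get, reverse=True)

m = ContributiveModel()
m.learn({"triangle", "equal-seg"}, ["drop-altitude", "congruence"])
m.learn({"triangle", "parallel"}, ["congruence"])
print(m.propose({"triangle"}))  # ['congruence', 'drop-altitude']
```

Another system component would then test these candidates for adequacy, as described above.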

Type II functions

Functions of type II represent mappings between problems whose solutions are 'similar'. Intuitively, one could say that two similar problems will be close to each other in the problem space defined by a judiciously selected set of features. One can also expect the respective solutions to be close in a corresponding solution space. If we can find a mapping function between a problem with a known solution and one without a solution, it is plausible to try to apply the same mapping function to the known solution to obtain a tentative solution to the new problem.

Let the 'distance' in problem space be the weighted sum of the number of features shared by the two problems. The weighting should express the relevance of the features to the solution. The whole system of weights of relevance can be well expressed by a hierarchical structure, hence the name hierarchical model. Without any prior knowledge or help from the user, the program has to assume all possible permutations among the features of a new problem, only one of which will be correct. The ordering of features in a linear string for a given problem represents a part of the hierarchy of features characteristic of the whole problem domain. (Note that the contributive model becomes a special case of the hierarchical one if, in the experience-gathering phase, combinations rather than permutations of features are recorded.) The system will return that sequence of solution steps which has been associated in the knowledge base the highest number of times with the longest string of matching features, starting with the assumedly most important feature in the leftmost position.

If the knowledge base is 'sufficiently' large, individual features and ordered feature sequences are uniquely associated with solution steps and solution-step sequences, so the direct application of the mapping function of type II is possible. Otherwise, the same problem-solving philosophy prevails concerning the list of potential solutions supplied by the hierarchical model as by the contributive model: the lists are ordered according to the plausibility level defined by the score of prior usage (the number of times the same solution step is recommended by different feature entries). Tentative solutions are completed and tested by other analytical or logical program components.

The learning process works in two directions. First, irrelevant permutations of problem features become gradually deleted. Second, new entries consisting of ordered features and solution steps are added to the knowledge base.
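A minimal sketch of this retrieval rule follows, assuming the knowledge base stores ordered feature strings as tuples with prior-usage counts; the layout and all names are invented:

```python
from collections import defaultdict

class HierarchicalModel:
    """A sketch of the hierarchical model: retrieval favours the longest
    matching prefix of the new problem's (assumed) feature ordering,
    breaking ties by prior-usage count."""

    def __init__(self):
        # kb[ordered-feature-tuple][solution-step-tuple] = usage count
        self.kb = defaultdict(lambda: defaultdict(int))

    def learn(self, ordered_features, solution_steps):
        self.kb[tuple(ordered_features)][tuple(solution_steps)] += 1

    def retrieve(self, ordered_features):
        """Try ever-shorter prefixes, starting from the assumedly most
        important (leftmost) feature; within the longest prefix that has
        entries, order solutions by prior-usage count."""
        fs = tuple(ordered_features)
        for k in range(len(fs), 0, -1):
            entry = self.kb.get(fs[:k])
            if entry:
                return sorted(entry, key=entry.get, reverse=True)
        return []

h = HierarchicalModel()
h.learn(["right-angle", "circle"], ["thales", "bisect"])
h.learn(["right-angle"], ["pythagoras"])
print(h.retrieve(["right-angle", "square"]))  # [('pythagoras',)]
```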


Implementing the models

The following ideas point to more sophisticated but still relatively easily implementable processes. For most, if not all, nontrivial problem domains, it is possible to group properties together according to several different principles. Although the asymptotic behaviour of the problem-solving system should not greatly depend on how the features are composed of individual problem properties (memory and time requirements, of course, can vary widely), some grouping techniques could result in a much faster acquisition of knowledge than others. Therefore, the program should construct several knowledge bases for a given problem domain and see which one proves the most efficient and most effective.

Another idea of pragmatic value is to start out with the contributive model because it presents lesser processing demands. If some features seem to emerge as decisive factors in selecting solution steps, these should be separated and placed in the hierarchical model.

A very important high-level learning process is pruning the knowledge base. Various inductive and deductive inferences can be generated which can reduce the domain of search for solutions. For example, the separation of influential and uninfluential features would be guided by the numbers of prior usage. There are, however, two types of irrelevant features, as found in our experiments to be described later, which also need to be identified. One type has a fairly uniform contributive effect, which can be easily detached. The other type causes random and fluctuating contributions to the scores, particularly at the early stage of knowledge acquisition. We have established⁴,⁵ that such irrelevant features, in contrast with influential ones, have the following characteristic: if one forms the ratio of the scores of two solution steps associated with such a feature, the ratio assumes widely different values as experience accumulates, whereas it stays fairly constant for influential features. Furthermore, one can discard those entries in the knowledge base of the hierarchical model in which some permutation of features has suggested wrong solutions more than a certain number of times. This threshold, of course, decreases as the knowledge base increases. Combining features into higher order ones (further grouping), when justified, should also streamline the knowledge base.

We have noted before that humans discover and utilize, mostly on the basis of previous experience, higher order similarities as well, such as theorems of duality, and structural, semantic, functional and thematic similarities. The incorporation of these concepts in problem-solving programs is a major challenge. Evans' program⁶, for example, dealt with spatial relations between subfigures as well as with types of subfigures. Winston's work⁷ discovered structural similarities between components of simple three-dimensional bodies. Charniak's system⁸ made use of semantic and thematic similarities. The recognition of functional similarity could mean a breakthrough in game-playing programs⁹⁻¹¹. Our contention is that the two fundamental models of AR can deal with such high-order similarities as long as sophisticated feature-extraction programs can be set to work in cooperation with the AR component.

Finally, we note that the paradigm of AR is capable of discovering efficient problem-solving strategies. The flexibility and modularity of the system components responsible for knowledge acquisition, feature extraction, feature grouping, and composition of tentative solutions make it possible to experiment with different strategies for given problem domains and to come up with one that appears to be optimal according to some set of criteria.

PROBLEMS OF IMPLEMENTATION

The following six program components constitute a general-purpose problem-solving system using AR:

• An executive program which manages and schedules the various program parts. It receives and transmits information, tests the validity and destination of data, coordinates possibly conflicting demands, and makes the final judgement on the basis of opinions contributed by a panel of 'judges'.
• A subgoal-forming program which tests whether the goal can be accomplished in one step (for example, the single application of a transformation operator) and, if not, generates a goal tree consisting of conjunctive and disjunctive subgoals.
• A logic-arranging program that sets up forward or backward reasoning processes operating on the goal tree. It presents the subgoals recursively to the second program.
• The AR mechanism itself, which collects and processes the experience with problems and solutions, and presents a list of potential solutions and solution components to the last program.
• The definition of the task domain: a set of possible properties, a set of rules for grouping the latter into features and interpreting their semantics, and a set of axioms, theorems and one-step solutions (operators).
• Some analytical or logical procedures that can test solution components, fill in missing links and arrive at complete solutions. The solution accepted will, in general, be a 'satisfactory' one, to use Simon's well known term, rather than an optimum one.

Considering the likely high demands on machine time and memory, certain compromises and simplifications have to be made as compared with the ideal system outlined above and in the previous sections. These will be discussed in the following.

• A sensible middle way must be found between starting with no previous experience and starting with a tailor-made, appropriately preprocessed, expert-level knowledge base. The inherent power of AR in receiving, processing and storing information has to be shown in action; this is the most economic approach to intelligent systems. The user should, therefore, provide a certain amount of knowledge in terms of feature versus solution-component associations, judiciously chosen prior-usage frequencies, methods and criteria of grouping and eliminating features, and the like.
• It may be more efficient to establish for a single problem domain several knowledge bases, each covering information relevant to only a few problems. This would make search processes significantly shorter. How to select the problems which contribute to individual, possibly overlapping, knowledge bases is a question that could in principle be answered by a special-purpose computer-learning process but, in practice, this may be too difficult and unnecessary.
• An online interactive system could ask the user for guidance concerning any and all of the above questions, rather than leaving them to complicated search processes. This type of collaboration between man and machine, each contributing what he or it can do best, is an optimum arrangement.
• The next subgoal to be solved can be selected by a heuristically guided 'best-first' principle instead of some systematic depth-first or breadth-first technique. This means that the subgoal most recommended by the history of problem-solving so far is attacked next. The features of this subgoal (ordered or otherwise, depending on whether the hierarchical or the contributive model is used) are associated with the most frequently employed solution components. This subgoal is, therefore, the one the problem-solving system has the highest 'confidence' in, as far as solubility is concerned, since the system has the best tools for doing that job. (A small sketch of this selection rule follows the list.)
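The sketch below ranks subgoals by an invented confidence score standing in for the prior-usage support described above; in a real system these values would come from the knowledge base:

```python
import heapq

def best_first_order(subgoals, confidence):
    """Yield subgoals in decreasing order of the system's 'confidence',
    ie the accumulated knowledge-base support for their features.

    subgoals: list of subgoal labels (invented here)
    confidence: dict mapping a subgoal to its accumulated score"""
    # heapq is a min-heap, so negate scores to pop the best first
    heap = [(-confidence.get(g, 0), g) for g in subgoals]
    heapq.heapify(heap)
    while heap:
        _, goal = heapq.heappop(heap)
        yield goal

goals = ["prove-EQUAL-SEG", "prove-PARALLEL", "construct-midpoint"]
scores = {"prove-EQUAL-SEG": 7, "prove-PARALLEL": 2, "construct-midpoint": 5}
print(list(best_first_order(goals, scores)))
# ['prove-EQUAL-SEG', 'construct-midpoint', 'prove-PARALLEL']
```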

PROGRAMMED EXPLORATIONS

Preliminary experiments

Before we tried to apply the principles of AR to real-life complex problems, we had wanted to get a feel of how the two models proposed can be set up for relatively simple tasks. A brief account of these experiments is given as follows.

First, the idea of the contributive model was used to simulate some piecewise smooth functions. Ranges of the independent variables can be considered features, and values would represent solutions. The function is not given to the system but is gradually found out. One functional form was, for example,

f(a,b) = A if 0 ≤ a ≤ 4, 0 ≤ b ≤ 4
       = B if 0 ≤ a ≤ 4, −4 ≤ b < 0
       = C if −4 ≤ a < 0, 0 ≤ b ≤ 4
       = D if −4 ≤ a < 0, −4 ≤ b < 0
The values of a and b were generated at random. Within the first 500 tries, the first guess was correct 66 per cent of the time, but the first two guesses contained the correct solution 94 per cent of the time. The corresponding figures for the second 500 tries improved to 71 per cent and 95 per cent, respectively. A similar effect of the learning process was observed with other functional forms.

A concept-formation task was used for the hierarchical model. Each object was described by three attributes, which could all assume four values: colour (red, blue, yellow, green); shape (cube, wedge, round, star); and size (large, medium, small, tiny). There were two sets of concepts. The first set contained five concepts: red and large; blue, wedge and medium; small; green and tiny; everything else. The second set contained four concepts, each being one colour. Features were defined as individual values of different attributes, or combined values of up to three different attributes. The solution is the concept itself; objects are instances of concepts. Not only is the effect of learning clearly discernible in the experiments, but it was shown that, given enough instances of different concepts and proper updating procedures for the knowledge base, perfect performance is obtained with different training sequences. (One can even see immediately what represents an optimum training sequence with given instances of concepts, but this is too obvious to include in this paper.)
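Returning to the first experiment, the following reconstruction illustrates the mechanism under stated assumptions: the original program's exact feature ranges and updating rules are not given, so the sketch simply uses the sign-quadrants of (a, b) as features and applies the contributive scoring:

```python
import random
from collections import defaultdict

def true_f(a, b):
    """The hidden function: one value per sign-quadrant of (a, b)."""
    if a >= 0:
        return "A" if b >= 0 else "B"
    return "C" if b >= 0 else "D"

score = defaultdict(lambda: defaultdict(int))  # feature -> value -> count
hits, trials = 0, 500
for _ in range(trials):
    a, b = random.uniform(-4, 4), random.uniform(-4, 4)
    features = [("a>=0", a >= 0), ("b>=0", b >= 0)]
    # rank the known values by the summed contributive score of the
    # features present in this problem
    totals = defaultdict(int)
    for f in features:
        for value, n in score[f].items():
            totals[value] += n
    guesses = sorted(totals, key=totals.get, reverse=True)
    answer = true_f(a, b)
    if guesses and guesses[0] == answer:
        hits += 1
    for f in features:  # learn: the correct value is revealed after each try
        score[f][answer] += 1

print(f"first-guess accuracy over {trials} tries: {hits / trials:.0%}")
```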

Problems in theorem-proving and construction in plane geometry

We wanted to show that the processes of AR are a useful and general enough component in a design problem-solving system. We have decided to use theorem-proving and construction tasks in plane geometry. These two activities, we believe, underlie the design process in several task domains. If we can enhance our understanding of the premises and goal structures on which decisions are based in the above areas, and develop a fairly general cognitive strategy which guides the search for solutions, we would make some significant steps toward automatic design technology. Furthermore, theorem-proving and construction tasks in plane geometry are sufficiently disjoint for generality considerations, but have the advantage of identical machine representation of problem constituents. Also, the properties and the features formed from them are practically the same for the two problem domains.


It was not, we emphasize, our objective to compete with the numerous impressive accomplishments in the areas of problem-solving. Rather, these experiments serve as fairly simple proof that AR works with nontrivial problems.

We can operate the system either in batch mode or in interactive mode, using an IDIIOM graphics computer connected to a central machine. In the latter case, diagrams of the problem can be displayed, the system can ask for the user's advice if necessary, and the user can interrupt a poor sequence of actions and substitute his own choice for it. The language used was the SLIP/AMPPL-II package¹². To handle the rich information content embedded in the deceptively simple representation of geometric problems, we had to enable the system to make semantic transformations easily, by regrouping properties into different sets of features when necessary. Much implicit information is generated by the system, but structural relations are, of course, often lost in the process of forming features. The various shortcomings and strengths of the SSI (simplified system implemented) will be discussed as we outline some of its characteristic features below.

Components of the implemented system

SSI contains only three main components, as opposed to the six listed in connection with a general implementation. The first part organizes and coordinates the logic of problem solving; it is also responsible for subproblem generation. There is now no program which combines backward and forward reasoning: for reasons of simplicity, we have chosen to employ backward reasoning only. The second part carries out AR and offers its recommendations to the first one. Finally, a database completely separate from the rest of the system contains the description of the task domain and the procedures to interpret the semantics of the definitions used. A list of all possible features also appears here, covering fundamental elements (point, line, etc), relations (parallel, perpendicular, etc) and composite features (triangles, quadrilaterals, etc). However, SSI will not automatically combine features into higher order ones.

A network of nodes and edges is set up, representing elements and relations, respectively. Every geometric element is decomposed recursively as far as possible. For example, an angle is represented as a vertex and two sides; a side is a line segment, which in turn is represented as the set of all (given or generated) points through which it passes. Relations are also characterized as being symmetric, transitive (eg PARALLEL, EQUAL-SEG), or inverses of each other (eg MIDPOINT, which may refer to a set of pairs of equidistant points, and MIDP-OF, which refers to a single point in the middle of two given points).

The definition of an entity starts out with the keyword DEFINE, followed by the type-name of the object and by the manner in which it can be referenced; for example, the names of points on a straight line. The fourth element of the definition serves to resolve possible ambiguities in referencing the object in question. This facility cuts down search time by telling the system not to repeat particular processes involving the same items named differently due to a different order (permutation) of references. It also lets the system know, for example, which point is the vertex of an angle out of several on the defining line segments. The fifth element of the definition is a list of constituents of the composite object being defined. Finally, the last element is a list of all applicable relation descriptors. These may refer to a numerical value, a potential constituent (for example, a line segment halving an angle), equality to other entities, parallelness, perpendicularity, being of equal distance from two points or lines, being the tangent of a circle, etc.

A special case can be defined using the keyword DEFINE-SC. The body of the definition refers to the superset (for example, rectangles or parallelograms are special cases of quadrilaterals) and contains the distinguishing numerical values and relation descriptors. Those characteristics that are true for the members of the superset need not be repeated here (for example, the existence of diagonals in rectangles). We also note the special keywords MANY (an unlimited number of elements may appear in a set, such as points being generated by intersections with or distances on a line segment), EMPTY-N (there being some points not yet bound to any designated name in between given points), and SYNONYM (enabling the user to refer to entities and properties by different names if so desired).

Problems are stated in a natural way, in terms of GIVEN/THEN clauses and a PROVE clause. Axioms and known theorems (legal operators) are part of the GIVEN/THEN clauses. Theorems proven true as intermediate results are also recorded as new GIVEN/THEN clauses.
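The following sketch illustrates both representations just described: a small element/relation network whose relation types carry the symmetric/transitive/inverse properties, and a hypothetical rendering of a DEFINE entry and a GIVEN/PROVE problem. The paper specifies the content of these structures but not their concrete syntax, so all field names and the query interface below are invented:

```python
from collections import defaultdict

# --- Element/relation network ---
SYMMETRIC = {"PARALLEL", "EQUAL-SEG"}
TRANSITIVE = {"PARALLEL", "EQUAL-SEG"}
INVERSE = {"MIDPOINT": "MIDP-OF", "MIDP-OF": "MIDPOINT"}

adj = defaultdict(set)  # (relation, element) -> related elements

def assert_rel(rel, x, y):
    """Record rel(x, y) together with its symmetric and inverse images."""
    adj[(rel, x)].add(y)
    if rel in SYMMETRIC:
        adj[(rel, y)].add(x)
    if rel in INVERSE:
        adj[(INVERSE[rel], y)].add(x)

def holds(rel, x, y):
    """Does rel(x, y) follow? Transitive relations are chased as chains."""
    seen, frontier = set(), [x]
    while frontier:
        u = frontier.pop()
        if u in seen:
            continue
        seen.add(u)
        if y in adj[(rel, u)]:
            return True
        if rel in TRANSITIVE:
            frontier.extend(adj[(rel, u)])
    return False

assert_rel("PARALLEL", "AB", "CD")
assert_rel("PARALLEL", "CD", "EF")
print(holds("PARALLEL", "AB", "EF"))  # True, via the chain AB-CD-EF

# --- A hypothetical rendering of a DEFINE entry and a problem ---
define_angle = {
    "keyword": "DEFINE",
    "type_name": "ANGLE",
    "referenced_by": ("B", "A", "C"),             # three point names
    "disambiguation": "middle name is the vertex",
    "constituents": ["VERTEX A", "SIDE AB", "SIDE AC"],
    "relation_descriptors": ["EQUAL-ANGLE", "HALVED-BY", "RIGHT-ANGLE"],
}
problem = {
    "GIVEN": ["TRIANGLE A B C", "EQUAL-SEG A B A C"],
    "PROVE": ["EQUAL-ANGLE A B C A C B"],
}
```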

Structure of the knowledge base

The entire knowledge base is stored as a list structure (see Figure 1). At the top level, one sublist contains the 'innate' knowledge represented by the operators supplied, and the other the problems solved and assimilated. Each of these has in turn two similarly composed sublists; the first receives information with reference to goals of proving the correctness of elements, and the second receives information with reference to goals of proving the correctness of relations. Let us consider, say, the second one. The sublist consists of alphabetically ordered relations, after each of which there is a pointer to a sublist containing feature-solution associations for the goal expressed by the relation in question. These associations are divided into three alphabetically ordered groups: the first covers all elements in the problem, the second all elements in the GIVEN clauses, and the third all relations in the GIVEN clauses. The three sublists, one for each group, are identically structured. Let us consider, say, the second one. The list containing the elements of group 2 has a sublist ordered to each element. On this sublist, there appear pairs of cells: the first contains a pointer to a sequence of operators that has solved the goal in question in problems having the feature referenced by the element; the second cell contains the number of problems in which this has happened.

Figure 1. Structure of the knowledge base (schema: the header of the knowledge-base list structure, with sublists for 'innate' knowledge from the operators supplied and for problems solved; within each, feature-solution associations for element goals and relation goals, divided into three groups; for each feature, the operator sequences that solved the goal, paired with their prior-usage frequencies)

SSI actually uses a mixture of the two models described in the part on the theoretical aspects of AR. Recall that the contributive model is based on the notion that the occurrence of a feature suggests the possible use of its associated solution component. The hierarchical model, on the other hand, treats each feature as a necessary condition leading to the use of a particular solution component. In SSI, a modification was made to the contributive model so that the multiple occurrences of certain features in a given problem necessitate that much, and only that much, ordered (hierarchical) contribution from the associated solution components as recorded with problems solved in the past. Furthermore, the (assumed) different levels of importance of the three groups of feature-solution associations noted above are expressed by the ad hoc multiplicative factors of 1, 2 and 3, respectively. The prior-usage frequencies of the solution components are first multiplied by the relevant factors and then added to the total score. (This was the reason for separating the members of the three groups.)

The features serve as a precondition for using the associated (sequence of) operator(s) to attain a goal. The accumulated knowledge represents a system of preconditions, an aspect of many real-life phenomena. Because of the lost structural information in our simplified system, however, the preconditions are not as restrictive as they could and should be. In view of the limited amount of experience collected by SSI, we introduced another rule found useful empirically. It can be put simply as follows. A problem does not always have all the features needed to satisfy the preconditions of applying an operator. The absence of a feature in the contributive model means no contribution by the feature in question. We want, however, to take into account the absence of a potentially necessary feature. Thus, when a feature needed for the successful application of an operator is absent in the problem at hand, its actual prior-usage count is subtracted from the score of applicability of this operator. (Note that the group multiplicative factor is not used in this case.)

We have also included in the system a few 'obvious' rules to reduce search time, such as 'two straight lines that meet cannot be parallel', 'perpendicular lines form a right angle', and so on. Before a subgoal is processed for operator evaluation and selection, it is put through a test with respect to such rules. Such control acts like a heuristic filter, similar to the diagram in Gelernter's work¹³,¹⁴.

We have used the SSI to solve several problems in both plane geometry theorem-proving and construction. A detailed description of this can be found elsewhere⁴,⁵.
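The scoring rule just described can be rendered as a short sketch; the group factors and the absence penalty follow the text, while the data layout and all names are invented:

```python
def operator_score(problem_features, usage_for_op):
    """A sketch of SSI's applicability score for one operator.

    usage_for_op: list of (feature, group, count) triples recording how
    often 'feature' supported this operator in past solutions; group is
    1, 2 or 3, for the three groups of feature-solution associations."""
    GROUP_FACTOR = {1: 1, 2: 2, 3: 3}  # the ad hoc weights of the groups
    score = 0
    for feature, group, count in usage_for_op:
        if feature in problem_features:
            score += GROUP_FACTOR[group] * count
        else:
            # a potentially necessary feature is absent: subtract its raw
            # prior-usage count, without the group factor
            score -= count
    return score

usage = [("ANGLE", 1, 4), ("PARALLEL", 3, 2), ("MIDPOINT", 2, 3)]
print(operator_score({"ANGLE", "PARALLEL"}, usage))  # 1*4 + 3*2 - 3 = 7
```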

DIRECTIONS OF FURTHER WORK

We consider both the present implementation and the problems chosen only as the first steps toward more powerful and more versatile systems oriented towards design problem-solving. The task-dependent part of the program (in our case, the plane geometry component) should be able to partition the problems and the solutions in various different ways, to select the best method. It should also make use of the diagrammatic information as a heuristic filtering mechanism for rejecting obviously wrong avenues to the solution. Tentative or 'global' evaluations could be based on a forward-reasoning component, which would cut down the search domain considerably.

The level of interaction between the user and the system via the graphics computer should be raised. Testing the input for consistency and completeness, and drawing diagrams resulting from different choices (look-ahead), are obvious responsibilities that can be delegated to a more responsive system.

A major challenge would come from trying to incorporate high-level similarities (structural, functional, semantic, thematic, etc) in the general framework of the models of AR proposed here. Our contention is, as stated before, that the main effort required for this accomplishment would be in the area of dynamically modifiable feature identification. This endeavour is, of course, rather task-dependent; we expect some task domains to be relatively amenable to such development, while others could present great difficulties.

Planning must be studied as a tentative method of design problem-solving, involving the creation of an abstract problem space and its development as a guide for detailed action. Controlled and optional backtracking should automatically take care of any side effects caused by the actions eliminated. The ability to formulate and solve design problems recursively would be a significant asset.

Finally, we note an interesting topic for detailed investigation: how is performance in problem-solving affected by the amount of experience collected, the training sequence chosen, and the number and quality of constraints the solution has to satisfy? Answers to these questions may be obtained only when the size of the knowledge base used can be increased to realistic levels, approaching that of a human problem-solver. At that stage, one could also perform some 'systems tuning', a term rather often used but rarely performed in its true sense.

CONCLUSIONS

We have shown in this paper how certain programmable procedures can transform raw experience in problem-solving into the knowledge base of an intelligent system. In fact, the relationship between problems and solutions develops gradually; it is constructed step by step by a learning mechanism of the system. As experience is accumulated, the search for a solution, guided by AR, becomes shorter and shorter.

Two fundamental and task-independent models of AR, the contributive and the hierarchical, have been described in detail, as well as certain additional techniques which can extend the power and the scope of the AR approach. Although we have mainly been concerned with similarity due to basic features shared by two situations, higher level similarities can at times be relevant, and can be recognized by an experienced information-processing system if it is capable of identifying the appropriate features active in such cases. Therefore, the conceptual framework of AR presented here should be applicable to all such situations without any need for modification.

We have solved a few simple problems in some preliminary explorations, using both the contributive and the hierarchical model. We have then implemented a simplified system of AR and shown how it works with theorem-proving and construction tasks in plane geometry. In contrast with many problem-solving programs, very little a priori knowledge is required about the tasks, the level of performance improves with more experience, and the accumulated knowledge does not congest the computer memory.

We hope to have shed some light on how humans are aided in the exploration of the design space by AR, and how this powerful heuristic can be embedded in computerized systems. The cognitive strategy put forward results in a control structure that could relieve human designers of much of the routine and uncreative work.

REFERENCES

1 Simon, H A 'The structure of ill-structured problems' Artific. Intell. Vol 4 (1973) pp 181-201

2 Miller, G A 'The magical number seven, plus or minus two: some limits on our capacity for processing information' Psychol. Rev. Vol 63 (1956) pp 81-97

3 Minsky, M 'Steps toward artificial intelligence' Proc. IRE Vol 49 (1961) pp 8-30. Reprinted in Feigenbaum, E A and Feldman, J (eds) Computers and thought McGraw-Hill, New York, USA (1963)

4 Chan, D T Analogical reasoning, learning and problem solving PhD thesis, State University of New York at Buffalo, NY, USA (1976)

5 Chan, D T and Findler, N V 'Toward analogical reasoning in problem solving by computers' J. Cybern. Vol 9 (1979) pp 369-397

6 Evans, T G A heuristic program to solve geometric analogy problems PhD thesis, MIT, Cambridge, MA, USA. Reprinted in Minsky, M (ed) Semantic information processing MIT Press, Cambridge, MA, USA (1968)

7 Winston, P 'Learning structural descriptions from examples' in Winston, P (ed) The psychology of computer vision McGraw-Hill, New York, USA (1975)

8 Charniak, E Toward a model of children's story comprehension AI-TR-266, MIT Artificial Intelligence Laboratory, Cambridge, MA, USA (1972)

9 Findler, N V 'Some new approaches to machine learning' IEEE Trans. Syst. Sci. Cybern. Vol SSC-5 (1969) pp 173-182

10 Findler, N V 'Studies in machine cognition using the game of poker' Commun. ACM Vol 20 (1977) pp 230-245

11 Zobrist, A L and Carlson, F R, Jr 'An advice-taking chess computer' Sci. Am. Vol 228 (1973) pp 92-105

12 Findler, N V, Pfaltz, J L and Bernstein, H J Four high-level extensions of FORTRAN IV: SLIP, AMPPL-II, TREETRAN and SYMBOLANG Spartan Books, New York, USA (1972)

13 Gelernter, H 'Realization of a geometric theorem proving machine' Proc. Int. Conf. Inf. Process., UNESCO House, Paris (1959) pp 273-282. Reprinted in Feigenbaum, E A and Feldman, J (eds) Computers and thought McGraw-Hill, New York, USA (1963)

14 Gelernter, H, Hansen, J R and Loveland, D W 'Empirical explorations of the geometry theorem proving machine' Proc. WJCC Vol 17 (1960) pp 143-147. Reprinted in Feigenbaum, E A and Feldman, J (eds) Computers and thought McGraw-Hill, New York, USA (1963)
