Concurrent applicative implementations of nondeterministic algorithms

Comput. Lang. Vol. 8, No. 2, pp. 61-68, 1983
0096-0551/83 $3.00 + 0.00
Copyright © 1983 Pergamon Press Ltd
Printed in Great Britain. All rights reserved

CONCURRENT APPLICATIVE IMPLEMENTATIONS OF NONDETERMINISTIC ALGORITHMS

RICHARD SALTER
Mathematics Department, Oberlin College, Oberlin, OH 44074, U.S.A.

(Received 5 November 1980; revision received 18 March 1983)

Abstract--In this paper we introduce a methodology for utilizing concurrency in place of backtracking in the implementation of nondeterministic algorithms. This is achieved in an applicative setting through the use of the Friedman-Wise multiprogramming primitive frons, and a paradigm which views the action of nondeterministic algorithms as one of data structure construction. The element-by-element nondeterminism arising from a linearized search is replaced by a control structure which is oriented towards constructing sets of partial computations. This point of view is facilitated by the use of suspensions, which allow control disciplines to be embodied in the form of conceptual data structures that in reality manifest themselves only for purposes of control. We apply this methodology to the class of problems usually solved through the use of simple backtracking (e.g. "eight queens"), and to a problem presented by Lindstrom to illustrate the use of coroutine-controlled backtracking, to produce backtrack-free solutions. Our solution to the latter illustrates the coroutine capability of suspended structures, but also demonstrates a need for further investigations into resolving problems of process communication in applicative languages.

Nondeterministic algorithms   Concurrency   Applicative languages   Multiprocessing   Multisets   Suspended structures

1. INTRODUCTION

Nondeterministic algorithms [1] are abstract models used to formalize the search procedure usually implemented deterministically by backtracking. The importance of such search procedures has led to the incorporation of backtrack control structures in various high level languages, with applications to such areas as combinatorics and pattern matching. One of the major practical problems of this approach is that a simple depth-first search procedure often spends a great deal of time pursuing numerous lengthy paths leading down blind alleys, simply because of the initial ordering of the paths.

One possible approach to this problem is to replace the single-process backtrack search with multiprocessed concurrent probes of the tree of alternatives. Such an approach would ideally be able to produce the "fastest" solution in the time required to produce only that solution. Recent advances in hardware technology have made the use of real concurrency feasible, while corresponding software research has made it accessible to high level languages. At the same time, by using abstract models of backtrack algorithms [1-3], it is possible to clearly define the requirements of the new implementation without having to rely on any previous implementations.

In this paper, we will present a method for utilizing concurrency to implement nondeterministic algorithms, and we will illustrate this method with several examples. The model of concurrency which we will use was developed in Refs [4-6] as an extension to LISP [7]. We will present the basic ideas of this model and show how it permits an orderly transition from backtracking to concurrent search. The simplicity of this transition is due to the natural integration of concurrency into the data structure, and so we are freed from concern with the details of the multiprocessing environment.

In the next section we develop the methodology for distributing the nondeterministic search, and present a backtrack-free solution for a simple depth-first search problem. The resulting perception of trees leads us to apply our technique to an example involving explicit trees as data structures in Section 3. In the final section we present our conclusions and discuss future work.

Supported in part by the National Science Foundation under grant number MCS 80-04130, and the Office of Naval Research under grant numbers N00014-80-C-0752 and N00014-83-K-0036. This work was begun while the author was on the faculty of Drexel University, Philadelphia, Pennsylvania.


2. COMBINING NONDETERMINISTIC ALGORITHMS AND CONCURRENCY

As discussed in Refs [1], [2] and others, nondeterministic algorithms involve the use of a multiple-valued choice function, and the labeling of termination states as either success or failure. Only successful terminations are regarded as computations. The quest for such terminations incorporates some systematic search of a tree of partial computations by using the choice function at each branch point. If this function, due to the constraints of the problem and the history of previous attempts, is unable to extend a given partial computation, the node is labeled as a failure and deleted from further consideration. When backtracking is used, the program linearizes the search by attempting to extend one branch of the tree as far as possible. A failure causes the restoration of the most immediate decision point, and an attempt at an alternate continuation. The simplest such implementation is equivalent to a depth-first search. It is often possible, however, to further constrain the choice of possible extensions through information gained in the search, and to statistically decrease the required time by ordering the nodes at each level [8].

Without loss of generality, we can assume that the solutions produced by nondeterministic algorithms are sequences [8]. The set of all possible sequences representing partial computations which may arise during the search for a solution in a nondeterministic algorithm corresponds to the set of paths followed by the control structure performing the search. As an applicative program, a nondeterministic algorithm can then be regarded as an elaborate sequence builder; i.e. a function that accepts as arguments the parameters of the desired sequence, and returns as its result some sequence constructed according to those parameters. Since one of the parameters can be sequence length, the process of constructing a legal sequence of length n can be thought of as recursively extending by one element a legal sequence of length n-1. The basic constructor used to realize this is "nondeterministic" in the sense that the particular implementation determines the resulting extended sequence (if any) that is produced for each set of input arguments. We shall refer to this constructor, in its various forms, as consx. With the nondeterminism localized in the single-step constructor consx, the attempt level [3] nondeterministic function (which we call build) maintains the skeletal structure of a simple recursive sequence builder.

Let us illustrate this approach with an algorithm, modified from one communicated in [9]. This algorithm can be used to solve the standard backtracking problems (e.g. "eight queens" [1] or "good sequences" [10]) in which the solution has a fixed length n, and each element is taken from the set {1, 2, ..., k}, for some n and k. build creates the entire sequence at once by recursively setting up n composite applications of consx. Thus invoking build (4, 3) will essentially result in the invocation of consx (3, consx (3, consx (3, consx (3, NIL)))). consx successively attempts to extend its second argument by 3, 2, or 1, in the hope that one of these will be successful.
If no value can be legally placed there, consx will try again after calling backtrack to alter the current solution sequence. backtrack accomplishes this by calling consx with the next candidate for first element.

In the code that follows, functions and variables are represented by lower case and constants by upper case symbols. Constants can either be atoms (numbers or special atoms such as NIL) or sequences (indicated with angle brackets, e.g. <1 2 3>). Sequences may be operated on by first, which produces the initial element, and rest, which returns the sequence obtained by omitting the initial element. The empty sequence is indicated by NIL. first (NIL) and rest (NIL) are undefined. The binary function cons is used to construct sequences; for example, the sequence <1 2 3> is the result of the application cons (1, cons (2, cons (3, NIL))). The predicate function null tests its argument for the value NIL. We use # as a special symbol to indicate that no solution exists (i.e. it is passed along untouched by each instance of consx). The function legal? performs the problem-dependent legality test on the candidate partial sequence. Problems solved using this method must admit legality tests on partial solutions that can rule out impossible initial segments, and which also identify solutions when applied to full-length sequences.


build (n, k) ≡ if (n = 0) then NIL
               else consx (k, build (n-1, k))

consx (try, seq) ≡ if (seq = #) then #
                   else if (try = 0) then consx (k, backtrack (seq))
                   else if legal? (try, seq) then cons (try, seq)
                   else consx (try-1, seq)

backtrack (bdseq) ≡ if null (bdseq) then #
                    else consx (first (bdseq)-1, rest (bdseq))

Fig. 1. Backtracking.
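For concreteness, here is a minimal executable sketch of Fig. 1 in Python (not the paper's notation). Sequences are modeled by tuples with the newest element in front, the failure token # by None, and legal? is passed in as an explicit parameter; the no_repeat predicate at the end is only a hypothetical stand-in for a real legality test such as the "good sequences" test discussed below.

# A hedged Python transcription of Fig. 1; the names FAIL and no_repeat are
# illustrative stand-ins, not taken from the paper.
FAIL = None

def build(n, k, legal):
    """Build one legal sequence of length n over {1, ..., k}, or FAIL."""
    if n == 0:
        return ()
    return consx(k, build(n - 1, k, legal), k, legal)

def consx(try_, seq, k, legal):
    """Try to extend seq by try_, try_-1, ..., 1; on exhaustion, backtrack."""
    if seq is FAIL:
        return FAIL
    if try_ == 0:
        return consx(k, backtrack(seq, k, legal), k, legal)
    if legal((try_,) + seq):
        return (try_,) + seq              # cons(try, seq): newest element in front
    return consx(try_ - 1, seq, k, legal)

def backtrack(bdseq, k, legal):
    """Replace the most recent choice by its next candidate."""
    if not bdseq:
        return FAIL
    return consx(bdseq[0] - 1, bdseq[1:], k, legal)

# Toy legality test (a stand-in): the two newest elements must differ; only
# the new adjacency needs checking because the rest of seq was already legal.
no_repeat = lambda seq: len(seq) < 2 or seq[0] != seq[1]
print(build(4, 3, no_repeat))             # -> (2, 3, 2, 3)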

Therefore, in addition to n and k, we can consider legal? to be a parameter of build. The code for this example appears in Fig. 1.

The element-by-element nondeterminism arising from consx in this example is a result of the linearization of the search process. If we abandon this linearization, we can make do with a much simpler deterministic consx which does not backtrack, and consequently does not always produce a solution. These new structures are manipulated by a more elaborate nondeterministic control structure, which constructs a tree of solutions using consx, sifting out failures as soon as they are apparent. Thus we handle entire paths in a manner similar to the way in which single elements were handled previously. The advantage is that this new structure is non-sequential, so that we can distribute the effort of computing these paths to concurrent processes.

For example, suppose consx were restricted to either extending a subsequence by consing on its first argument or, if it were determined that this would produce an illegal sequence, returning the atom FAIL. Let us now consider the "good sequences" problem, in which we seek a sequence containing no identical adjacent segments. Thus, the sequence <2 1 2 3 1 2 3> would be rejected because of the adjacent 1 2 3 segments. After 2 steps, we would have the structure in Fig. 2. In the next iteration we would only have to be concerned with those leaves not labeled with FAIL. The search can clearly be distributed to an array of concurrent processors. In the event that we are only interested in producing a single solution, this method represents an improvement only if we can stop the computation as soon as one is produced and return that value. If we are interested in more than one solution, we must be able to return each value as it is computed and have the capability of resuming the search.

Another important concern is the representation of individual paths. We now require that an explicit data structure be built to replace the implicit control structure used previously. We must create a "virtual" structure which would not require a great outlay of space, but which could be treated by the program as an existing search tree. These requirements can be met in applicative languages through the use of suspended structures and lazy evaluation. Specifically, we will utilize the ideas of Friedman and Wise [6]. In LISP, as in most imperative languages utilizing call-by-value, parameters are passed into functions completely evaluated. This means that if a parameter is itself a function call that would result in the invocation of a long computation, that computation must be completely undertaken on entry whether its result is to be used or not. Many languages (e.g. Pascal) also support call-by-name, whereby unnecessary computation is avoided. The introduction of suspensions in an applicative language is similar in that parameters passed into functions are not evaluated until their values are required; however, unlike call-by-name, the value replaces the suspension once it has been computed. In particular, the parameters to cons are suspended, so that the construction of data objects is delayed until these objects are probed, and then only that part of the object that is actually required is realized.
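The following is a small, hedged Python sketch of this call-by-need discipline: a memoized thunk whose value replaces the pending computation once forced, and a cons whose fields stay suspended until probed. It is only an analogy to the Friedman-Wise suspensions; the class and function names are my own.

# A hedged sketch of suspensions (call-by-need); not the paper's implementation.
class Suspension:
    """A delayed computation; forcing it the first time caches the value,
    so later probes do not repeat the work (unlike call-by-name)."""
    _UNSET = object()

    def __init__(self, thunk):
        self._thunk = thunk
        self._value = Suspension._UNSET

    def force(self):
        if self._value is Suspension._UNSET:
            self._value = self._thunk()
            self._thunk = None            # the value replaces the suspension
        return self._value

def cons(car_thunk, cdr_thunk):
    """A cons cell whose two fields remain suspended until probed."""
    return (Suspension(car_thunk), Suspension(cdr_thunk))

# Probing only the first field realizes it; the expensive second field is
# never computed, and repeating the probe reuses the cached value.
cell = cons(lambda: 1 + 2, lambda: sum(range(10**7)))
print(cell[0].force(), cell[0].force())   # -> 3 3; the large sum never runs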

NIL
  FAIL   <3 2>  <3 1>
  <2 3>  FAIL   <2 1>
  <1 3>  <1 2>  FAIL

Fig. 2. Non-backtracking consx.


Note that use of suspensions permits the representation of infinite data structures. For further discussion see [4]. The combined use of suspended data and lazy evaluation (parameters evaluated at the last possible moment) permits the luxury of an entity that could be treated as an existing, completely realized data structure yet would require no great initial outlay of space or time. The tree of sequences required by our algorithm clearly benefits from this.

Unfortunately, the representation of this tree remains insufficient, since we are still required to use a nest of suspended sequences, the elements of which are accessed in a fixed order. In order to be able to write a nondeterministic choice function which can randomly access any branch from a given node, we must have equal access to each branch. If the branches are sequenced, this means they are accessible only through a composite of first and rest. Such a composite will, however, bring about the complete convergence of all branches in a left-to-right fashion, once again resulting in a totally ordered backtrack search.

We avoid this problem by making use of a second new concept introduced in [6], the multiset. This structure resembles a suspended sequence in that it is composed of constituent elements, but it differs by not imposing any a priori order on these elements. Multisets are built analogously to sequences using the new constructor frons, and remain suspended until probed with the selector functions first and rest. If we access the multiset m for the first time by invoking, say, first (m), then any element of m might be returned. We can, in fact, regard the elements as processing concurrently until one converges. At this time all other computations are suspended in their current states and the convergent value is promoted to the front of the structure. Thus all subsequent calls to first (m) will produce this same element. A sequential structure can be gradually built out of m by accessing with the composites first (m), first (rest (m)), first (rest (rest (m))), etc. until m is exhausted, and the resultant structure, assuming all elements converge, is identical to a sequence. The actual order in which the elements appear is immaterial, since the programmer, in using a multiset, is guaranteed only that if the multiset contains a convergent computation, a call to first will also converge and subsequent accesses will produce the same order. A complete development of these ideas can be found in Refs [4-6].

Our objective is therefore to construct a multiset of all solutions to a given nondeterministic problem. We start with the multiset consisting of only the empty sequence. Each iteration of the algorithm consists of constructing a new multiset of legal sequences by attaching a new element to the legal sequences of the previous iteration with the deterministic version of consx described above. The new consx is capable only of producing either a longer legal sequence or marking failure. The actual computation is suspended until the structure is probed; i.e. until we actually request a solution. The nondeterministic choice function used in backtrack algorithms is replaced by a function which applies the selector first until a non-failing element is found. With a single-valued random selector replacing a multivalued one, we are assuming (at the "attempt level" [3]) that only successful candidates will be selected. The main role of consx is to signal failing sequences.
By selecting and terminating these as soon as possible, we are able to sieve out the undesirable sequences from the set. At any given time, the program is only working with those partial solutions for which it has been determined that the possibility of a successful termination still exists.

In our language we now assume that the arguments to cons are no longer evaluated, and that the resultant structures are sequences of suspended elements. In addition, we add the new constructor frons to build multisets. The selectors first and rest are extended to force the convergence of their arguments, either sequences or multisets. For sequences, this means that the initial element is forced until it returns a value; for multisets, all elements are processed until one returns a value, which then becomes the initial element of the emerging sequence. Finally, we add the function m-union, which takes two multisets m1 and m2 and forms their "multi-union"; i.e. the multiset composed of all elements of m1 and m2. (This is the analog of the append function for sequences.)
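To make the intended behaviour of frons, first and rest concrete, here is a hedged Python approximation in which a multiset holds suspended computations and first races them in threads, promoting whichever converges first. It deliberately simplifies the Friedman-Wise semantics: losing computations keep running in the background rather than being suspended in place, and the names mirror the paper's vocabulary without claiming to implement it.

# A hedged approximation of frons/first/rest/m-union using threads; requires
# Python 3.9+ for cancel_futures.  Not the Friedman-Wise implementation.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

class Multiset:
    """A bag of suspended computations (zero-argument callables); no order
    exists until first promotes whichever element converges first."""
    def __init__(self, thunks):
        self._thunks = list(thunks)
        self._head = None                 # (value, rest-multiset) once promoted

    def is_null(self):
        return self._head is None and not self._thunks

def frons(thunk, m):
    """Analogue of cons: add one more suspended element to multiset m."""
    return Multiset([thunk] + m._thunks)

def m_union(m1, m2):
    """Multi-union: a multiset holding the elements of both arguments."""
    return Multiset(m1._thunks + m2._thunks)

def first(m):
    """Race the pending elements; memoize and return the first to converge."""
    if m._head is None:
        if not m._thunks:
            raise ValueError("first of an empty multiset")
        pool = ThreadPoolExecutor(max_workers=len(m._thunks))
        futures = {pool.submit(t): i for i, t in enumerate(m._thunks)}
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        winner = done.pop()
        pool.shutdown(wait=False, cancel_futures=True)
        i = futures[winner]
        m._head = (winner.result(), Multiset(m._thunks[:i] + m._thunks[i + 1:]))
    return m._head[0]

def rest(m):
    """The multiset left over once a first element has been promoted."""
    first(m)
    return m._head[1]

# The faster computation is promoted; repeated probes agree with one another.
import time
m = frons(lambda: (time.sleep(0.2), "slow")[1], frons(lambda: "fast", Multiset([])))
print(first(m), first(m))                 # -> fast fast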

choose (m) ≡ if null (m) then NIL
             else if (first (m) = FAIL) then choose (rest (m))
             else m

Fig. 3. Nondeterministic choice function.


build (n, k) ≡ if (n = 0) then NIL
               else make-level (k, build (n-1, k))

make-level (num, m) ≡ if (num = 0) then NIL
                      else m-union (attach (num, m),
                                    make-level (num-1, m))

attach (num, m) ≡ if null (m) then NIL
                  else frons (consx (num, first (choose (m))),
                              attach (num, rest (choose (m))))

consx (n, l) ≡ if legal? (n, l) then cons (n, l)
               else FAIL

Fig. 4. Non-backtracking.

We first consider the function choose. This function will be applied to the set of partial sequences of each fixed length, some of which will fail in the next attempted extension. We desire only those sequences which survive the extension process. Since we choose to represent terminated elements with the atom FAIL, we can implement choose by continuing to probe the set until the first time we encounter any element other than this atom (or the set is exhausted, in which case we consider the choice itself as having failed by returning NIL). This function is pictured in Fig. 3.

Our top level function is once again called build; however, we now require that it construct the multiset of n-element admissible sequences by attaching with consx a new level to the multiset of (n-1)-element admissible sequences. This attaching is done with the help function make-level, which collects the individual multisets of sequence extensions using m-union. The individual sequence-extension multisets are obtained by attaching a single given element to all non-failing elements of the original multiset. This is accomplished with the help function attach. Within this elaborate setting, the code for consx is simple and nonrecursive.

We represent the sequence of multisets generated by a call of build (3, 3), where legal? tests for identical adjacent subsequences, in Fig. 5. The multisets of levels 1 through 3 are created by make-level from the successful sequences of the previous level. Each line of levels 2 and 3 represents the output of one call to attach, with the lines assembled by make-level using m-union. The elements themselves (either FAIL or an extended sequence) are produced by consx. No failure is continued into the next level. The first call to make-level calls attach three times to form the singleton multisets {<3>}, {<2>} and {<1>}, and then, using m-union, produces the multiset {<3>, <2>, <1>} (level 1 below). In the next call to make-level, each call to attach forms a multiset of extensions with one of 3, 2, or 1. Thus, for example, attach (3, {<3>, <2>, <1>}) considers the sequences <3 3>, <3 2>, and <3 1>. attach, however, uses consx to perform the extension, and so the atom FAIL replaces the sequence <3 3>. The result, therefore, is the multiset {FAIL, <3 2>, <3 1>}. Combining this result with the other applications of attach gives us level 2. Now in constructing the third level, choose sieves out the FAILs, and only the currently successful sequences become candidates for further attachment.

Level 0: NIL
Level 1: {<3>, <2>, <1>}
Level 2: {FAIL,  <3 2>, <3 1>,
          <2 3>, FAIL,  <2 1>,
          <1 3>, <1 2>, FAIL}
Level 3: {FAIL, FAIL, <3 2 3>, <3 2 1>, <3 1 3>, <3 1 2>,
          <2 3 2>, <2 3 1>, FAIL, FAIL, <2 1 3>, <2 1 2>,
          <1 3 2>, <1 3 1>, <1 2 3>, <1 2 1>, FAIL, FAIL}

Fig. 5. build (3, 3) for the "good sequences" problem.
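The walkthrough of build (3, 3) can be checked mechanically with a hedged, purely sequential Python rendering of Fig. 4 in which ordinary lists stand in for multisets; the indeterminate ordering and the concurrency are lost, but the data-flow and the contents of each level match Fig. 5.

# A hedged sequential stand-in for Fig. 4: lists replace multisets, the choose
# step is folded into a comprehension, and nothing here is suspended or concurrent.
FAIL = "FAIL"

def good(seq):
    """legal?: reject sequences containing two identical adjacent segments."""
    return not any(seq[i:i + l] == seq[i + l:i + 2 * l]
                   for i in range(len(seq))
                   for l in range(1, (len(seq) - i) // 2 + 1))

def consx(n, seq):
    """Extend seq by n if the result is legal, otherwise mark failure."""
    return (n,) + seq if good((n,) + seq) else FAIL

def attach(num, m):
    """Attach num to every non-failing sequence of m."""
    return [consx(num, seq) for seq in m if seq is not FAIL]

def make_level(k, m):
    """m-union of the extensions of m by k, k-1, ..., 1."""
    return [ext for num in range(k, 0, -1) for ext in attach(num, m)]

def build(n, k):
    """All length-n candidate sequences: level n of Fig. 5."""
    return [()] if n == 0 else make_level(k, build(n - 1, k))

for level in range(4):
    print(level, build(level, 3))
# level 2 prints FAIL, (3, 2), (3, 1), (2, 3), FAIL, (2, 1), ... as in Fig. 5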


We see that the call build (3, 3) constructs the multiset of all 3-element solutions to the problem. However, due to the use of lazy evaluation, no actual computation has yet taken place. A call of the form first (choose (build (3, 3))) will produce a single solution, and a list of all solutions can be obtained with choose-all (build (3, 3)), where choose-all is defined as follows:

choose-all (s) ≡ if null (s) then NIL
                 else cons (first (choose (s)), choose-all (rest (choose (s))))

The algorithm described above constructs the entire (suspended) solution tree using consx as a discriminating link before any actual searching takes place. In analyzing the time efficiency of this algorithm, we are faced with the problem that various implementations of first have widely different time requirements, one of which could be the minimum time required by any constituent of the argument. Discounting overhead, then, the algorithm requires as much time as the construction of the one solution.

We also note that although duplicate invocations of choose (for example, in choose-all and in particular in the various calls to attach) give the appearance of repeated computations, the semantics of first (see Ref. [4]) allow for the parallel probing of the single structure. The promotion of a convergent computation by a given invocation of first occurs in a critical region [12] and brings about an actual change in the data structure. All subsequent probes will return this altered structure. Thus, even though attach (3, m), attach (2, m) and attach (1, m) are all operating in parallel (with choose) on the same multiset, the effort of each choose instance is detected by each of the others, and the same object is produced by all of them. Neither computational effort nor allocation of space needs to be duplicated, due to the ability of concurrent processes to access shared structures.

3. EXAMPLE--PARALLEL NONDETERMINISTIC SEARCHES

As an additional example, we consider the problem presented in Ref. [11] in which two trees with non-negative integer node weights are given, and we are to determine which of the trees has the root-to-leaf path with the smallest node weight sum. The proposed solution sets up two independent backtrack modules as coroutines, and alternates control between them. Each time control returns, the criterion for extending the search along any particular branch becomes more strict, since it is based on the value just acquired from the alternate tree. The trees are thus more severely pruned as the search progresses "to the right". It would be advantageous to search shorter branches earlier, since these would most likely yield the smallest pathweights (assuming a random distribution of node weights) and because pruning is most effective when applied to longer branches. Since we have no control over the tree structure, it is not possible to linearly order the search to guarantee that this will occur. Applying our earlier concurrent technique to pre-existing tree structures could improve the overall efficiency of the search, but would automatically allow each nondeterministic process to be operated as a coroutine without any additional control structures [13].

For each given tree, we once again wish to construct a multiset of paths through the corresponding search control tree, which in this case is identical in structure to the data tree.
We could regard these paths as sequences of nodes, but in this case it is more useful to consider the sequential updating of the accumulated weight of the partial path. Such partial sequences are terminated as soon as their weight exceeds the current optimum. Those sequences not exceeding this value at the time of the final comparison will converge, and some convergent computations, caught in the process of convergence following the final comparison at the time of an update, may exceed the most recent optimum. These exceptions can be handled by a top-level selector similar to choose. Since the optimal value is dynamically dependent on the state of the search of the other tree, it is communicated to the search algorithm through the use of a global variable. This is analogous to the solution presented in Ref. [11], but is not entirely satisfactory in this applicative setting.

The only assumption made about the representation of the tree is that each node returns the appropriate values when accessed by the functions weight and desc, the latter producing a list of descendents.


The maximum number of descendents of each node does not have to be fixed in advance.

In the code in Fig. 6, build, consx and make-level are once again the names of the principal functions of the tree search algorithm. We build a multiset of paths by descending through each level of the tree, using consx to determine the subsets of paths which are to be cut off. By using NIL to designate a terminated subset, it becomes unnecessary to require an explicit choose, since m-union assembles these subsets at each level and will automatically eliminate the failures. The two arguments used by build play the same role as before: the first represents a path, but is in fact an accumulated weight, and the second, the set of descendents, represents the possible extensions. For a nonterminal node, make-level is used to assemble the multisets of extensions. The recursive calls are filtered through consx, which checks the extension criteria before proceeding.

In treewalk we create two such structures with build, and insert these as arguments to alternate, which alternates control between them (we discriminate by indexing each call according to which tree is not processing). Path weights are extracted from the multisets with the function select, which acts as the necessary final filter described above. When a new optimal pathweight is found, it is used to update the free variable best, and control is transferred to the other multiset. Processing continues from the point at which it stopped following the promotion of the last convergent value from that multiset, but with the path termination criterion updated. As soon as a multiset is exhausted, the tree represented by the other multiset is the solution. In the code of Fig. 6, the function list (x) creates a singleton sequence containing the value of x, and maxint represents a very large integer.

build (wt, trees) ≡ if null (trees) then list (wt)
                    else make-level (wt, trees)

make-level (wt, trees) ≡ if null (trees) then NIL
                         else m-union (consx (wt, first (trees)),
                                       make-level (wt, rest (trees)))

consx (wt, tree) ≡ if (weight (tree) + wt < best)
                   then build (weight (tree) + wt, desc (tree))
                   else NIL

select (m) ≡ if null (m) then NIL
             else if (first (m) > best) then select (rest (m))
             else m

alternate (m1, m2, index, best) ≡ let m3 = select (m1)
                                  in if null (m3) then index
                                     else alternate (m2, rest (m3), 3-index, first (m3))

treewalk (tr1, tr2) ≡ alternate (build (0, list (tr1)), build (0, list (tr2)), 2, maxint)

Fig. 6. Alternating parallel treewalk.
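As a hedged Python sketch of the alternation expressed by Fig. 6, the following replaces the suspended multisets with ordinary generators, so there is no real concurrency and no final select is needed (each pruning test reads the shared best value at the moment it runs). The (weight, children) tree representation and the helper names are assumptions of the sketch, not the paper's.

# A hedged generator-based stand-in for Fig. 6.  A tree is (weight, [child, ...]);
# the dict `state` plays the role of the free variable best.

def paths(tree, acc, state):
    """Lazily yield accumulated weights of root-to-leaf paths, cutting off any
    partial path whose weight already reaches the current optimum (as consx does)."""
    weight, children = tree
    total = acc + weight
    if total >= state["best"]:
        return
    if not children:
        yield total                       # a complete path survived the filter
    else:
        for child in children:
            yield from paths(child, total, state)

def treewalk(tr1, tr2):
    """Alternate between the two searches; whenever one yields a better path
    weight, record it as the new optimum and hand control to the other search."""
    state = {"best": float("inf")}        # maxint in the paper
    searches = {1: paths(tr1, 0, state), 2: paths(tr2, 0, state)}
    index = 2                             # the tree that is *not* currently running
    while True:
        try:
            state["best"] = next(searches[3 - index])
        except StopIteration:
            return index                  # the exhausted search loses; `index` wins
        index = 3 - index

# Hypothetical trees: the first holds a path of weight 3, the second's best is 4.
tree1 = (1, [(2, []), (5, [(1, [])])])
tree2 = (2, [(2, []), (3, [])])
print(treewalk(tree1, tree2))             # -> 1 (the first tree holds the lighter path)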

4. CONCLUSION

We have presented two examples which illustrate a methodology for implementing nondeterministic search algorithms in a multiprocessing environment. In each case, the high level of control achieved in these implementations is a result of building the algorithm around the construction of data structures. The applicative multiprogramming tools which we used were explicitly designed for this very purpose [13], and so our results stand as further evidence of the validity of this paradigm.

In preparing these examples, it became clear that there were a number of possible alternatives to the programs which we ultimately presented. For example, we found that calls to choose could be eliminated from the first example by structuring it similarly to the second, or that choose could be introduced to the second program by using the methods of the first.



We found that the goal of building a multiset of paths was always an invariant of whatever technique was used to structure the data-flow, and it is this idea that we stress as the underlying concept permitting the transformation from linear to concurrent search algorithms. By realizing this concept, we can conduct the search along non-interfering datapaths, yet maintain the spatial efficiency of shared structures in the implementation of these paths.

The most important problem yet to be satisfactorily solved is the means of communicating dynamic termination criteria to various processes once they have been invoked, as these invocations are in fact closed to all but the global environment. The method of communication through a free variable used in the parallel tree search does work in this instance, but is not a particularly satisfying solution, since it ties the processes to a single global state. It would not be useful in, say, a concurrent alpha-beta minimax implementation (which we are currently considering), since each process's perception of current alpha-beta values is tempered by its own environment. What we are seeing is a conflict between the stateless perception of applicative structures as existing, either in a manifest or suspended form, over all time, and the need to incorporate state-dependent data into our programs.

One possible means of satisfying this need could be found by introducing a new nondeterministic, state-dependent primitive which would discriminate between suspended and manifest structures. Use of such a primitive would have to be coupled with a facility for mutually recursive suspended structures, similar to the mutually recursive function definitions facilitated by labels in SCHEME [14]. We would then be able to make direct inquiries from within one process for state-dependent (partial) results from a second by probing that process as a data structure. By introducing this or some other method for interprocess communication, we would have the necessary flexibility for implementing the sophisticated techniques required by complex nondeterministic algorithms in this concurrent style.

REFERENCES

1. R. W. Floyd, Nondeterministic algorithms. J. Ass. comput. Mach. 14, 636-644 (1967).
2. J. Cohen, Nondeterministic algorithms. Ass. comput. Mach. Comput. Surv. 11, 79-94 (1979).
3. C. Montangero, G. Pacini and F. Turini, Two-level control structure for nondeterministic programming. Communs Ass. comput. Mach. 20, 725-730 (1977).
4. D. P. Friedman and D. S. Wise, Applicative multiprogramming. Technical Report No. 72, Indiana University Computer Science Department, revised (1979).
5. D. P. Friedman and D. S. Wise, An approach to fair applicative multiprogramming. Proceedings of the International Symposium on Semantics of Concurrent Computations (edited by Kahn and Milner), pp. 203-225. Springer, Berlin (1979).
6. D. P. Friedman and D. S. Wise, An indeterminate constructor for applicative multiprogramming. Seventh Annual Symposium on Principles of Programming Languages, pp. 245-250 (1980).
7. J. McCarthy, A basis for a mathematical theory of computation. In Computer Programming and Formal Systems (edited by P. Braffort and D. Hirschberg), pp. 33-70. North-Holland, Amsterdam (1963).
8. J. R. Bitner and E. M. Reingold, Backtrack programming techniques. Communs Ass. comput. Mach. 18, 651-656 (1975).
9. D. P. Friedman, private communication (1979).
10. E. W. Dijkstra, Notes on structured programming. In Structured Programming (edited by Dahl, Dijkstra and Hoare). Academic Press, New York (1972).
11. G. Lindstrom, Backtracking in a generalized control setting. Ass. comput. Mach. Trans. Program. Lang. Syst. 1, 8-26 (1979).
12. E. W. Dijkstra, Cooperating sequential processes. In Programming Languages (edited by F. Genuys). Academic Press, New York (1968).
13. D. P. Friedman and D. S. Wise, Unbounded computational structures. Software--Practice and Experience 8, 407-416 (1978).
14. G. L. Steele Jr and G. J. Sussman, The revised report on SCHEME, a dialect of LISP. MIT AI Memo 452 (1978).

About the Author--RICHARD M. SALTER received the A.B. degree in Mathematics from Oberlin College in 1973, and the M.A. and Ph.D. degrees in Mathematics from Indiana University at Bloomington in 1975 and 1978, respectively. He is an Associate Professor of Mathematics and Computer Science at Oberlin College. His research interests include applicative programming languages, A.I. programming languages, multiprocessing, and continuous modeling. He is a member of the Association for Computing Machinery, the American Mathematical Society, and Sigma Xi.