Comput. Lang. Vol. 17, No. 3, pp. 157-167, 1992

INHERITANCE OF THEORY IN THE ADABTPL LANGUAGE

TIM SHEARD(1) and DAVID STEMPLE(2)

(1) Department of Computer Science and Engineering, Oregon Graduate Institute, Beaverton, OR 97006, U.S.A.
(2) Department of Computer and Information Science, University of Massachusetts, Amherst, MA 01003, U.S.A.

(Received 17 May 1991)

Abstract--The ADABTPL language is a functional specification language which attempts to support a programming paradigm that is natural and flexible, which supports concise programs, and which also promotes proofs of system properties. ADABTPL supports this paradigm by tying the compiler to a powerful induction-based theorem prover in the style of Boyer and Moore. The theorem prover uses a tailored theory about particular programs that captures more of the application semantics than is usual in program verification. Such theory is generated during compilation by instantiating selected parts of generic theorems about second order functions. The instantiated theory is more particular to the program in which it will be used and thus decreases the amount of search that is needed to prove program-specific properties. This approach constitutes an instance of theory inheritance, and can also be used to make sound program transformations. Our work indicates, against intuition, that the introduction of higher order mechanisms does not necessarily complicate formal reasoning about programs, but instead provides a powerful abstraction mechanism for simplifying mechanical reasoning.

Keywords: Higher order logic, Proof of program properties, Meta-theorems, Types, Type-safe programs, Program transformation, Theory inheritance, Predicate-based subtypes

1. INTRODUCTION

ADABTPL (Abstract DAta Base Type Programming Language), an experimental database programming language being developed at the University of Massachusetts at Amherst, is a high level specification and programming language which supports the following features.

• Definition of new types using type equations. The use of recursive types allows the creation of data structures of indefinite size, such as lists, binary trees, etc.
• Definition of parameterized types and generic functions that can manipulate instances of parameterized types. This allows a flexible, concise notation for specifying certain kinds of abstraction [1].
• Definition of derived types, where a derived type is any existing type which is restricted by an arbitrary predicate.
• Pattern-based manipulation that considerably simplifies the description and manipulation of complex constructs.
• A verify statement which is used to indicate properties that the compiler must verify at compile-time. These are used both as proof obligations that the compiler must maintain and as hints to the compiler on how to attempt future proofs.
• A powerful induction-based theorem prover which is intimately tied to the compiler and type checking system. This supports a level of static type checking unavailable in most other languages, as well as a mechanism for the user to specify semantic-based program transformations.

In this section we will introduce these features and show how they can support a programming paradigm that is flexible, concise and natural, and which also promotes proofs of system properties. We start with an overview of the features we will deal with in this paper, which include types, functions, theorems, and meta-theorems.


1.1. Introduction to ADABTPL types and functions

Recursive types are those types whose instances have subcomponents of the same type as themselves. For example, binary trees are a recursive type, since the left and right subtrees of a binary tree are themselves trees. Recursive types are usually defined as the least fixpoint of recursive type equations [2]. Parametric types are types having components that are parameterized, i.e. can take on any type. For example, lists are parametric types since one can have lists of integers, lists of strings, etc.

ADABTPL types are defined by equations using primitive types (string, integer, boolean, etc.), parameterization, recursion, aggregation (struct), disjunction (union), derivation (where), function types (->), and constants (singleton). We will base our examples on the structures whose type definitions follow. These definitions will also illustrate the syntax of ADABTPL type definition.

1.1.1. Recursive data types. Person is a simple (neither recursive nor parametric) type describing a structure type with components name and age and a constructor function make_person. Components of structures are referenced with the dot notation, i.e. x.name refers to the name component of the variable x with type person.

person = struct make_person(name:string, age:integer);

Pair is a parametric type with two type parameters, a and b. It has as its values pairs of objects of any two types.

pair(a,b) = struct pairup(firstp:a, secondp:b);

Note that in an actual pair instance, the type variables a and b are bound to some concrete type, e.g. pair(string, integer). Natural is a recursive type describing the natural numbers. It is a union type. Any instance of a union type has as its structure exactly one of the alternatives of the union.

natural = union ( zerop: singleton zero;
                  positive: struct add1(sub1: natural) );

In the definition above, zero and add1 are constructors for the zerop and positive alternatives of the natural type. The zero value is the single value contained in the type constructed by the singleton type constructor (of the same name). The singleton type constructor takes a function name and produces a type containing exactly one element. The element is constructed by the niladic function defined just as the person and pairup constructors in the previous struct types. In the same way, the symbol add1 denotes a one-argument function in the signature of the type built by the struct type constructor, and sub1, a tuple selector function, denotes the predecessor function (and is partial over natural). The symbols zerop and positive stand for recognizer functions defined by the semantics of union construction. Recognizers can be used in boolean conditions to make choices based on the type of a value of a union type. The values of union types are most often referenced in a case expression that allows separate actions for each alternative of the union. As is illustrated below, in these cases the particular form of a union value is determined by matching against patterns defined by the constructor functions from the alternative types, thus avoiding the use of the recognizer functions.

List and bintree are recursive, parametric abstract structures describing LISP-like lists and binary search trees.

list(m) = union(null: singleton nil,
                consp: infix struct cons(car:m ! cdr:list(m)));


The infix before struct defines the ! symbol to be an infix synonym for cons.
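For readers more familiar with a conventional functional language, a rough Haskell analogue of these type equations may help (a sketch only; ADABTPL's struct, union and singleton constructors, and the generated recognizer and selector functions, have no exact Haskell counterpart, and the names below are hypothetical). The bintree definition that follows is analogous.

-- Hypothetical Haskell analogues of the ADABTPL type equations above.
data Person = MakePerson { name :: String, age :: Integer }

data Pair a b = Pairup { firstp :: a, secondp :: b }

data Natural = Zero | Add1 { sub1 :: Natural }          -- zerop/positive become constructor tests

data List m = Nil | Cons { car :: m, cdr :: List m }    -- ADABTPL's infix ! corresponds to Cons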

bintree(m) = union (nulltree: singleton empty,
                    fulltree: struct tree(left:bintree(m), value:m, right:bintree(m)));

Struct and union type constructors introduce axioms into the theorem prover's knowledge base as well as functions and constants into the object code. For example, the following theorems are introduced by the list definition.

null(cons(a,l)) <->> false
null(nil) <->> true
consp(cons(a,l)) <->> true
consp(nil) <->> false
car(cons(a,l)) <->> a
cdr(cons(a,l)) <->> l

The operator <->> indicates an equality that is used in a left-to-right manner to rewrite expressions during theorem proving. Combinations of primitive types, parameterization, struct, union, singleton and recursion grounded on a non-recursive or singleton type define the recursive data types [2]. The axioms generated satisfy the conditions defined by Hoare for good axiomatizations of data structures and have been shown to be effective in supporting mechanical reasoning about a broad range of computations [3, 4].

1.1.2. Derived types. Derived types construct subtypes by restricting a type to those values that meet a certain predicate. ADABTPL indicates derived types using the where constructor. For example,

where(x:integer, (x > 0) && (x < 100))

indicates the integers in the range 1-99. The following is the type of lists of ordered integers.

where(l:list(integer), ordered(l))

The ordered predicate is defined in the next section.

1.2. Function definition in ADABTPL

The following is the definition of the ordered predicate for lists of integers. It illustrates a simple pattern-based definition.

function ordered(l:list(integer)):boolean;

case l { nil -> true;
         head!nil -> true;
         head!(second!m) -> (head <= second) && ordered(second!m)
};
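As a point of comparison, here is a rough Haskell sketch of the same predicate (hypothetical code; ADABTPL's infix constructor ! corresponds to Haskell's (:) here):

-- Minimal sketch of the ordered predicate on lists of integers.
ordered :: [Integer] -> Bool
ordered []                 = True
ordered [_]                = True
ordered (x : rest@(y : _)) = x <= y && ordered rest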

The case expression is a pattern directed conditional expression. The operation of the case expression is to sequentially compare the patterns of the case body (the expressions before the ->) to the case arguments (l in ordered) until a case is found where all the elements of the pattern match the case arguments. Then the expression after the -> is returned as the value of the case. If a pattern match fails, the case moves on to the next (pattern, action) pair. If no pattern succeeds the value of the case is undefined. Patterns are constructed of constants (like nil) which match themselves, calls to constructor functions (like !) which match values constructed by the same constructor, and variables (like head) which match anything [5]. Pattern matching is a binding operation. If a pattern match succeeds then the variables in the pattern are bound to the corresponding matching values of the case arguments. These variables can be used in the action part of the successful case. Pattern matching can be compiled into efficient code and no unification or pattern matching need be executed at run-time [6].

Functions in ADABTPL are strongly typed and can be polymorphic if they have formal parameters involving type variables. Consider the list reduction function below, which reduces a list to a single value, and a particular call which sums all the elements of l, of type list(integer).

function(alpha) listreduce(subl:list(alpha),
                           selectf:[alpha]->beta,
                           acc:[beta,beta]->beta,
                           nullvalue:beta):beta;
case subl { nil -> nullvalue;
            cons(p,m) -> acc(selectf(p), listreduce(m,selectf,acc,nullvalue))
};
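A rough Haskell rendering of listreduce, together with the summing call discussed below, might look as follows (a sketch under the assumption that the ADABTPL type variables alpha and beta map directly onto Haskell type variables; listreduce and sumList are hypothetical names):

-- Sketch of listreduce: map each element with selectf, combine with acc,
-- starting from nullvalue.
listreduce :: [a] -> (a -> b) -> (b -> b -> b) -> b -> b
listreduce []    _       _   nullvalue = nullvalue
listreduce (p:m) selectf acc nullvalue =
  acc (selectf p) (listreduce m selectf acc nullvalue)

-- The call that sums a list of integers, mirroring
-- listreduce(l, [x]->x, [x,y]->x+y, 0) in the text.
sumList :: [Integer] -> Integer
sumList l = listreduce l id (+) 0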

The following clarifies some points of polymorphic ADABTPL function definition.

• The prologue function(alpha) indicates that the following function is polymorphic and will work for any type alpha. In the example above listreduce will accumulate a value from a list of any type given an appropriate selection function, selectf, accumulating function, acc, and bottom value, nullvalue. The type alpha is inferred by the type checker from the actual arguments supplied in a call of the function, not given as in some languages.
• The type of selectf is written selectf:[alpha]->beta and denotes the type of functions from type alpha to type beta.

The following is a call to listreduce which computes the sum of the elements of a list of integers.

listreduce(l, [x]->x, [x,y]->x+y, 0)

The function constructor [x]->exp denotes the anonymous function (lambda expression) with argument x and body exp. Anonymous functions can have any number of arguments and are statically scoped.

Derived types can be of great use in specifying functions that are not total. For example, the integer division function has type:

[integer, where(x:integer, x <> 0)] -> integer

and the car function defined by the list type equation above has the type:

[where(x:list(alpha), consp(x))] -> alpha

where alpha is a free type variable and can take on any type. Of course, for such declarations to be of any use the compiler needs to enforce the restrictions indicated by derived types. This is, in general, an undecidable problem, but many cases are tractable and can afford a high level of quality assurance without undue cost. This is where the theorem prover enters the picture.
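ADABTPL discharges such domain restrictions statically with its theorem prover. In a language without that machinery, the nearest idiom is a smart constructor that establishes the predicate once, as in this hedged Haskell sketch (NonZero, nonZero and safeDiv are hypothetical names, not part of ADABTPL):

-- A value of type NonZero carries the evidence that the predicate x /= 0 held
-- when it was built; safeDiv is then total over its restricted domain.
newtype NonZero = NonZero Integer

nonZero :: Integer -> Maybe NonZero
nonZero 0 = Nothing
nonZero x = Just (NonZero x)

safeDiv :: Integer -> NonZero -> Integer
safeDiv n (NonZero d) = n `div` d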

2. TYPE CHECKING THE USE OF DERIVED TYPES

All functions in ADABTPL are total functions. If the compiler cannot prove the totality of a function it is not compiled. Of course there are many useful partial functions which the user would like to use. Such functions are modeled in ADABTPL as total functions over limited domains using derived types. It is imperative that the compiler make sure that functions are only used on their legal domains. For example, consider the function maxlist below, which returns the maximum element of a list of integers. This function is only defined over non-null lists, as there is no maximum element in a list of zero elements. We could define such a function as follows:

function maxlist(l:where(x:list(integer),consp(x))):integer;

case l { a!nil -> a;
         a!m -> max(a, maxlist(m))

};
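A rough Haskell analogue of restricting maxlist's domain is to demand a non-empty list in the type, as sketched below using Data.List.NonEmpty. This only illustrates the idea of modelling partial functions as total functions over a restricted domain; the ADABTPL mechanism instead proves the consp(x) predicate at each call site, as described next.

import Data.List.NonEmpty (NonEmpty ((:|)))

-- maxlist is total because the empty list is excluded by the argument type.
maxlist :: NonEmpty Integer -> Integer
maxlist (a :| [])      = a
maxlist (a :| (b : m)) = max a (maxlist (b :| m))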

Recall that consp is the recognizer function for non-null lists. The where clause on the domain of maxlist ensures that the function is total. Without the where clause it would be possible to cause a run-time error, since neither of the patterns in the case statement matches nil, the empty list. The compiler ensures that all uses of the function maxlist adhere to this protocol by using the standard type checking process. All actual parameters to maxlist must be a subtype of the type:

where(x:list(integer), consp(x))

The type checker computes a static environment as it "walks" the expression tree constituting the body of a function. Type checking a function call consists of verifying that the types of all actual arguments are subtypes of the types of the corresponding formal arguments. If a formal argument has a derived type then the predicate of the derived type must be proved for the actual argument under the static environment at the point of call. For example, the first call to maxlist in the function below type checks correctly but the second fails.

function test(l:list(integer));
case l { a!m -> f(a) + maxlist(l);
         nil -> 1 + maxlist(l)
};

The static environment of the first call includes the fact that the parameter l matches the pattern a!m, which can be used to prove that l is not null. Unfortunately (or fortunately, if you worry about your program failing at run-time) that is not the case for the second call, and the compilation will fail.

Restricting the domains of functions guarantees the totality of all ADABTPL functions by never allowing a function to compile if it has a call to another function whose preconditions cannot be proven. If a function has a parameter with a where clause the compiler must enforce the restriction on all calls. A call where the restriction cannot be proven from available context (information about the actual parameters, and semantic information about what must be true when execution flow reaches a certain point) causes a semantic error, and the function is not compiled. Since our theorem prover is sound but not complete, there exist legitimate function calls which will be classified as semantic errors. If a semantic error is flagged the user has two courses of action. First, the use of the function that caused the error should be reconsidered. The program could have unconsidered execution paths. If the user is convinced the precondition should be provable, hints can be given to the compiler via the verify statement. For a simple example consider:

verify member(a,l) => (consp(l) <->> true);
function test(a:integer, l:list(integer));
  if member(a,l) then 3 + maxlist(l) else 0

This suggests to the theorem prover that member(a,l) implies consp(l), and that the theorem prover should prove and remember this fact. Then in type checking the test function this theorem is used to verify that the actual parameter l meets the condition in the derived type of the formal parameter to maxlist.

The user may explicitly call the theorem prover by using the verify command. The use above causes the theorem prover to prove the rewrite rule member(a,l) => (consp(l) <->> true) and enter it into its global knowledge base. Verify clauses direct the compiler to use the rewrite rule in the type checking process, as a program transformation, or as a hint to proving some more complicated theorem that will need verification later in the compilation process. Verify expressions can also be used to give local scope to theorems. The form of such a verify is:

verify THEOREM1, THEOREM2, ... in EXPRESSION

This causes the theorems to be available only in the scope of EXPRESSION. The result of a successful verify is to make its theorem available in the scope in which the verify appears. This is important for controlling the mechanical reasoning, a subject that is beyond the scope of this paper [4].

3. HIGHER ORDER FUNCTIONS AND THEORY INHERITANCE

A function type with a free type variable is a second order type. A function with a second order domain is a second order function. Second order functions are a powerful means of abstraction. (We only deal with second order domains, not ranges.) Consider the selreduce function below. It tests each element of a list with predicate p. If the predicate succeeds it applies a transformation function f and accumulates an answer using the accumulator function acc (selreduce is a selection, a mapping and a reduction combined).

function(t1,t2,t3) selreduce(l: list(t1),
                             p: [t1]->boolean,
                             f: [t1]->t2,
                             acc: [t2,t3]->t3,
                             basev: t3):t3;
case l { nil -> basev;
         head!tail : p(head) -> acc(f(head), selreduce(tail,p,f,acc,basev));
         head!tail -> selreduce(tail,p,f,acc,basev)
};

This second order function is an abstraction which can be used to define many other functions. Consider:

• Universal quantification over lists. Forall returns true if all elements in l meet the predicate p, and false otherwise.

function(alpha) forall(l:list(alpha), p:[alpha]->boolean):boolean;
  selreduce(l, [x]->true, p, [x,y]->x && y, true);

• len returns the length of the list l.

function(alpha) len(l:list(alpha)):integer;
  selreduce(l, [x]->true, [x]->1, [x,y]->x + y, 0);

• member returns true if some element of l is eqf to a, and false otherwise.

function(alpha) member(a:alpha, l:list(alpha), eqf:[alpha,alpha]->boolean):boolean;
  selreduce(l, [x]->eqf(a,x), [x]->true, [x,y]->x || y, false);

• filter returns a list of those elements from l meeting predicate p.

function(alpha) filter(l:list(alpha), p:[alpha]->boolean):list(alpha);
  selreduce(l, p, [x]->x, [a,l]->a!l, nil);

• append returns the concatenation of the lists x and y.

function(a) append(x,y:list(a)):list(a);
  selreduce(x, [z]->true, [z]->z, [e,l]->e!l, y);
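A hedged Haskell sketch of selreduce and a few of its instances may make the pattern concrete (hypothetical names such as forall', len' and member'; the argument order follows the ADABTPL definition above):

-- selreduce: select with p, map with f, reduce with acc starting from basev.
selreduce :: [t1] -> (t1 -> Bool) -> (t1 -> t2) -> (t2 -> t3 -> t3) -> t3 -> t3
selreduce []     _ _ _   basev = basev
selreduce (h:xs) p f acc basev
  | p h       = acc (f h) (selreduce xs p f acc basev)
  | otherwise = selreduce xs p f acc basev

forall' :: [a] -> (a -> Bool) -> Bool
forall' l p = selreduce l (const True) p (&&) True

len' :: [a] -> Int
len' l = selreduce l (const True) (const 1) (+) 0

member' :: a -> [a] -> (a -> a -> Bool) -> Bool
member' a l eqf = selreduce l (eqf a) (const True) (||) False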


Many other functions can be defined this way. The point is not that the user cannot define these functions in some direct manner but rather that if the theorem prover knows special properties about the selreduce function, these newly defined functions, being instances of the selreduce abstraction, inherit these properties. The compiler effects this theory inheritance during the compilation of function definitions that consist of a call to a second order function. We call a theorem about second order functions a meta-theorem since it stands for a large class of structurally similar theorems. If the compiler can prove a meta-theorem it can generate theorems about the instantiated functions defined in terms of the second order function. A simple meta-theorem about the second order function selreduce follows.

selreduce(a!s,p,f,acc,base) <->> if p(a) then acc(f(a), selreduce(s,p,f,acc,base))
                                 else selreduce(s,p,f,acc,base)

When the forall, len, member, filter, and append functions are defined and compiled this theorem becomes instantiated for them, and the following is added to the theorem prover's knowledge base. (Boolean or and and are denoted by || and &&.)

member(e,a!l,eqf) <->> eqf(e,a) || member(e,l,eqf)
len(a!l) <->> 1 + len(l)
forall(a!l,p) <->> p(a) && forall(l,p)
filter(a!l,p) <->> if p(a) then a ! filter(l,p) else filter(l,p)
append(a!x,y) <->> a ! append(x,y)
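These inherited equations are easy to spot-check outside the system. The following hedged QuickCheck sketch tests the len and forall instances against Haskell analogues of the definitions above (an informal random test, not the mechanical proof the compiler performs; selreduce, len' and forall' are the hypothetical names from the earlier sketch):

import Test.QuickCheck

-- selreduce and two of its instances, as in the earlier sketch.
selreduce :: [t1] -> (t1 -> Bool) -> (t1 -> t2) -> (t2 -> t3 -> t3) -> t3 -> t3
selreduce []     _ _ _   basev = basev
selreduce (h:xs) p f acc basev
  | p h       = acc (f h) (selreduce xs p f acc basev)
  | otherwise = selreduce xs p f acc basev

len' :: [Int] -> Int
len' l = selreduce l (const True) (const 1) (+) 0

forall' :: [Int] -> (Int -> Bool) -> Bool
forall' l p = selreduce l (const True) p (&&) True

-- Inherited theorems: len(a!l) <->> 1 + len(l) and forall(a!l,p) <->> p(a) && forall(l,p).
prop_len :: Int -> [Int] -> Bool
prop_len a l = len' (a:l) == 1 + len' l

prop_forall :: Int -> [Int] -> Bool
prop_forall a l = forall' (a:l) even == (even a && forall' l even)

main :: IO ()
main = quickCheck prop_len >> quickCheck prop_forall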

The inherited theorems above are very simple, and should not be hard for a user to pose to the theorem prover. But in our system many functions are defined in terms of selreduce, and this simple theorem is added for all of them without any interaction with the user. This constitutes a case of automatic theory inheritance. Meta-theorems are a means of encapsulating knowledge, which becomes available when users define functions in terms of second order functions. Such encapsulation is very important as a complexity controlling mechanism. The semantics of our programs would be exactly the same if these theorems were not inherited. However, proofs of properties would be more complex. For example, without inheritance, any proof about length becomes a proof about selreduce. This is definitely not a good strategy. The more complicated the function, the harder the proof, for the simple reason that the extra detail just provides a larger solution space for the proof mechanism to search. Proofs about selreduce per se are not necessarily difficult, but proofs about selreduce instantiated to length include details which produce a larger search space. At compile-time many of these details are removed in the proof of the meta-theorem due to the treatment of the instantiating functions as abstractions. Thus, the meta-theorem knowledge is inherited as a theorem (already proven), rather than as a proof-time obligation.

Second order functions like selreduce do not necessarily complicate formal reasoning about functions, but can instead provide a powerful abstraction mechanism for simplifying proofs. Exploitation of higher order theory requires the combination of proofs of higher order functions such as the selreduce theorem above, and a powerful inheritance mechanism. It is possible to perform theory inheritance with significantly shallower reasoning than would be required to prove the inherited theorems. In fact the proof of the higher order theorem is often simpler than proving any of the inherited theorems. Mathematicians have known this for a long time, and thus often choose to prove a more general theorem than the one they originally started with. In addition to providing a mechanism to prove theorems with less effort, a meta-theorem also provides a central repository for a large class of related theorems. This abstraction can save the user from having to individually specify a large percentage of the theorems that the theorem prover knows about.

3.1. Second order theorem proving

In this section we will discuss a technique for extending Boyer-Moore theorem proving techniques to second order theorems, in particular, theorems containing variables universally quantified over typed function variables. In general, such second order reasoning is intractable, but our approach focuses on higher order functions like selreduce and invariants that depend on fairly simple properties of their input functions. Such second order functions have been shown to be useful abstractions, and placing theorems about them in repositories accessible to the compiler is a way of controlling the complexity of reasoning about particular uses of the functions. The example in the previous section used a theorem which did not include any properties on the function variables other than their types. In general, we deal with meta-theorems of the following form:

METAHYPOTHESIS => (HYPOTHESIS => EXPR1 <->> EXPR2)

where METAHYPOTHESIS is a predicate universally quantified over inputs to function variables, and the right-hand side predicate, which we call the body, only uses the function variables as arguments to higher order functions. For example, the following states a theorem in this form about selreduce.

(acc(m,acc(n,z)) <->> acc(n,acc(m,z))) && (acc(m,acc(m,z)) <->> acc(m,z))
  =>
(member(a,l,eq) => (selreduce(a!l,f,p,acc,b) <->> selreduce(l,f,p,acc,b)))

In this, METAHYPOTHESIS is considered to be universally quantified over m, n and z. In order to prove such theorems, we take advantage of the ability of the Boyer-Moore theorem prover to ignore the details of function definitions when a user has decided that this is a good tactic. In such cases (examples of information hiding in theorem proving), the prover uses only properties of the function that have been proven as theorems. This approach can be used to fool the theorem prover into treating the function variables as concrete functions with hidden details. This is accomplished by placing the METAHYPOTHESIS in the list of proven theorems during the attempt to prove the body of the theorem. If the theorem is proven, the METAHYPOTHESIS is retained as a proof obligation to be verified whenever the property needs to be proven about a particular use of selreduce. For example, the following is a theorem about contains.

member(a,l,eq) => (contains(a!l,b) <->> contains(l,b))

holds by virtue of the meta-theorem above and the fact that contains is defined by

contains(a,b) = selreduce(a, [x]->true, [x]->member(x,b,eq), [x,y]->x && y, true)

While this theorem is not overly difficult to prove (it does require a proof by induction), in this environment it is generated as a result of very shallow reasoning (a trivial proof of the commutativity and absorptive properties of &&) after the meta-theorem is proven. This explanation of the higher order extension of Boyer-Moore theorem proving is brief. The effective use of such theory requires a combination of eager and lazy theory inheritance. See [4] for an account of database programming language verification using these techniques.

4. PROGRAM TRANSFORMATION

It is possible to use program transformation rules to derive better implementations. This is a well studied area, but the use of a theorem prover tied to the compiler allows users to suggest and define their own transformations. For a simple example, consider the following transformation of our definition of member into a more efficient version.

member(a,l,feq) is transformed by unfolding its definition into a call to selreduce, the second order function used to define it.

selreduce(l, [x]->funcall(feq,a,x), [x]->true, [x,y]->x || y, false)


By unfolding selreduce we get

case l { nil -> false;
         head ! tail : feq(a,head) -> true || selreduce(tail, [x]->feq(a,x), [x]->true, [x,y]->x || y, false);
         head ! tail -> selreduce(tail, [x]->feq(a,x), [x]->true, [x,y]->x || y, false)

}

Applying some simplification to the term true || selreduce(tail, ...) we get

case l { nil -> false;
         head ! tail : feq(a,head) -> true;
         head ! tail -> selreduce(tail, [x]->feq(a,x), [x]->true, [x,y]->x || y, false)

}

Finally, folding the call to selreduce back into a call to member we obtain a new and better definition for member.

case l { nil -> false;
         head ! tail : feq(a,head) -> true;
         head ! tail -> member(a,tail,feq)

}

This example follows the standard unfold, simplify, fold strategy described in [7] and is applicable to all the selreduce functions we have defined in this paper. Meta-theorems open up many new possibilities for this paradigm. They allow the user to abstract large classes of transformations into single abstractions. Then, through the use of properly scoped verify clauses and of the SIMPLIFY compiler directive, the user can control the transformation process. Consider the alternative reduction function below.

function(t1,t2,t3) selreduce2(l: list(t1),
                              p: [t1]->boolean,
                              f: [t1]->t2,
                              acc: [t2,t3]->t3,
                              basev: t3):t3;
case l { nil -> basev;
         head!tail : p(head) -> selreduce2(tail, p, f, acc, acc(f(head), basev));
         head!tail -> selreduce2(tail, p, f, acc, basev)

};
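A rough Haskell rendering of this accumulator-passing variant, together with the len2 instance derived at the end of this section, might look as follows (a sketch; selreduce2, len2 and length' are hypothetical names):

-- Tail-recursive variant: the partial result is threaded through basev.
selreduce2 :: [t1] -> (t1 -> Bool) -> (t1 -> t2) -> (t2 -> t3 -> t3) -> t3 -> t3
selreduce2 []     _ _ _   basev = basev
selreduce2 (h:xs) p f acc basev
  | p h       = selreduce2 xs p f acc (acc (f h) basev)
  | otherwise = selreduce2 xs p f acc basev

-- len2 corresponds to the tail-recursive length derived in the text.
len2 :: [a] -> Int -> Int
len2 []     base = base
len2 (_:xs) base = len2 xs (1 + base)

length' :: [a] -> Int
length' l = len2 l 0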

Selreduce2 is similar to the selreduce function of the previous section, except that it carries a partial accumulation of its results down the recursive call as a new base object. Selreduce2 is tail recursive, and the compiler might want to transform calls to selreduce (or definitions of new functions in terms of it) into calls to selreduce2. When is this transformation meaning preserving? Exactly when the accumulator is both commutative and associative. These conditions will be used as the meta-hypothesis in the meta-theorem:

verify (acc(x,acc(y,z)) <->> acc(acc(x,y),z)) && (acc(x,y) <->> acc(y,x))
  =>
  selreduce(l, p, f, acc, base) <->> selreduce2(l, p, f, acc, base)

This instructs the compiler to attempt the proof, and if successful, use it when proving program properties. Note that simple type considerations preclude the use of this theorem on some instantiations of selreduce. For example, filter and append, which use cons as acc, cannot inherit from this meta-theorem because it is only applicable when acc has the same type for both inputs.


The first part of the theorem places the commutative and associative restrictions on the accumulator function, and the second part states the equality of the two reduction functions in this case. This meta-theorem is proved by the compiler and is cataloged for further use. When a function is defined the user can direct the compiler to use all its knowledge to simplify some expression using the SIMPLIFY compiler directive. Properly scoped verify statements can specialize knowledge to particular program components.

function(alpha) len(l:list(alpha)):integer;
  SIMPLIFY(selreduce(l, [x]->true, [x]->1, [x,y]->x + y, 0));

The definition above directs the compiler to attempt to use the theorem prover to simplify the selreduce expression. Because of the selreduce-to-selreduce2 meta-theorem above, the prover attempts to prove that the accumulator [x,y]->x + y is associative and commutative. Since it is, the body is transformed into a call to selreduce2. Then the standard unfold, simplify, fold techniques yield a tail recursive version that can be implemented in a straightforward manner as a loop. For example, the compiler translates the length function using the body of selreduce2 to:

function(alpha) length(l: list(alpha)):number;
  len2(l, 0);

where the definition of len2 comes from the tail recursive selreduce2, which carries its partial result down in the base parameter.

function(alpha) len2(l: list(alpha), base:integer):integer;
case l { nil -> base;
         head!tail -> len2(tail, 1 + base)
};

which is easily implemented as a loop. Many transformations like this are easily specified in ADABTPL. The paradigm of specifying, proving, then applying transformations is a reliable one. Systems where the transformations are applied without proof of their validity are prone to error.

5. SUMMARY

ADABTPL is a functional specification language designed to be concise, formal, natural and flexible. A major goal is the support of mechanical reasoning about properties of programs specified in ADABTPL. In pursuing this goal, ADABTPL types are defined by axioms in a stereotyped manner following the suggestions of Hoare. Type construction means adding axioms to a specification. Type definitions can contain predicates that shrink the allowable values of the types, thus producing predicate-based subtypes or what we term derived types. Proving that all uses of values of derived types obey their predicates extends type checking into incomplete inference, but adds to the quality assurance achievable by type checking.

In order to make this extended type checking and other proofs about ADABTPL specifications effective in realistic cases, a method of theory inheritance has been implemented. In this approach, theorems about second order functions are proven and specific versions of them are inherited whenever the second order functions are used to define less general functions. Theory inheritance can be used to reduce the search space involved in theorem proving. Another technique involves proving helper theorems that are to be used only in some program scope. This is accomplished with ADABTPL's verify clause. Limiting the scope of a theorem's use can significantly reduce the resources necessary to prove a particular proposition.

The same Boyer-Moore style inference engine used in the theorem prover can also be directed by the user to assist in symbolic program transformation and manipulation. This can lead to a new programming paradigm where solutions are written in a style of programs = specification + transformation in a manner which is natural to existing programmers. High level abstractions embodied in second order functions and meta-theorems promote concise programs and actually assist in the efficient proofs of system properties.


REFERENCES

1. Cardelli, L. and Wegner, P. On understanding types, data abstraction, and polymorphism. ACM Comput. Surv. 17: 471-522; 1985.
2. Hoare, C. A. R. Recursive data structures. Int. J. Comput. Inform. Sci. 4: 105-132; 1975.
3. Boyer, R. S. and Moore, J. S. A Computational Logic. New York: Academic Press; 1979.
4. Sheard, T. and Stemple, D. Automatic verification of database transaction safety. ACM Trans. Database Syst. 14: 322-368; 1989.
5. Wikström, A. Functional Programming Using Standard ML (Series editor Hoare, C. A. R.), Prentice-Hall International Series in Computer Science. Englewood Cliffs, N.J.: Prentice-Hall; 1987.
6. Peyton Jones, S. L. The Implementation of Functional Programming Languages (Series editor Hoare, C. A. R.), Prentice-Hall International Series in Computer Science. Englewood Cliffs, N.J.: Prentice-Hall; 1987.
7. Burstall, R. M. and Darlington, J. A transformation system for developing recursive programs. J. Ass. Comput. Mach. 24: 44-67; 1977.

About the Author--TIM SHEARD earned his Bachelor's degree from Harvard University in 1977, his Master's degree from the University of Vermont in 1979 and his Ph.D. in Computer and Information Science at the University of Massachusetts in 1985. He is currently an Assistant Professor of Computer Science at the Oregon Graduate Institute. His research interests include database systems, programming languages, functional programming, reflection and other systems which perform formal manipulation of programs as data, such as automatic theorem provers, program transformation systems, and partial evaluators. Dr Sheard's most recent work involves the design and implementation of the TRPL language system, which supports linguistic reflection, a mechanism which retains the safety provided by a statically typed language but, by using reflection, removes some of the narrowing restrictions (normally associated with strongly typed languages) on which computations can be specified.

About the Author--DAVID STEMPLE earned his Bachelor's and Master's degrees in Mathematics at West Virginia University in 1959 and 1960, and his Ph.D. in Computer and Information Science at the University of Massachusetts in 1977. He has been involved since 1960 in the development of compilers, operating systems and database management systems in industry and at the University of Massachusetts, where he worked in the academic computing center from 1966 to 1980. For the last 10 years he has been an Associate Professor in the Computer and Information Science Department at the University of Massachusetts at Amherst. His research focuses on the problems of specifying and developing complex database systems.
