Teaching Logic by Computer: A Hacker's Guide

Logic Colloquium '86, F.R. Drake and J.K. Truss (Editors)
© Elsevier Science Publishers B.V. (North-Holland), 1988

Peter Gibbins, University of Bristol

Students of logic get a firmer idea of 'proof' if they learn how to prove theorems, valid arguments, in a formal system of logic. Such is our assumption. Thus the demand from philosophers, and from some mathematicians, for labour-saving devices to assist in the automatable aspects of teaching logic, of which checking a student's attempts at proving wffs or sequents is the most obvious example. Consider also software engineering, an activity which, with its disputes and ideologies, is unfortunately closer to philosophy than one might think. There is a less obvious, but probably more urgent, need for software tools which can automatically check the software engineer's reasoning about his formal specifications of software, descriptions of which are ideally developed, and formally reasoned about, prior to their implementation in a real programming language. A formal specification language - the best developed example currently being that of the Vienna Development Method [see Jones (1986)] - is syntactically sugared logic and set theory, sugared semantically with abstract data structures. The current VDM employs a 3-valued logic of partial functions, along with an associated non-standard natural deduction system, with which optimists think that software designers will reason. A proof checker can be expected to come to the rescue.

Of course, while software can assist the teaching and application of logic, logic itself has come to pervade computer science, partly because programming is in the process of advancing from the status of the hacker's Black Art to a form of engineering which has its own mathematical basis, in logic. Indeed, logic itself has become a programming language, or so one programming ideology - one school among software engineers - asserts. Philosophers too have become interested in logic programming, and in PROLOG in particular. The circle seems to be fully closed. So we begin with the following question, to which, surprisingly, some philosophers seem satisfied with an affirmative answer:

Is Teaching PROLOG Teaching Logic?

Before answering this negatively, we first ask: what's so great about PROLOG? We begin with a simple and hence unrealistic problem: write a function which returns the sum of the first n integers (for n >= 0). Using an imperative (or procedural) programming language like PASCAL - programs written in which consist of a sequence of commands (or statements), constructs whose meaning consists (ultimately, and speaking very crudely) in the updating of a store - one might, with luck, write a correct iterative version of 'sum-to-N', having wasted only a little time hitting on a good combination of boolean-valued loop expression, order of assignment statements in the loop body, and initialisation of local variables 'sum' and 'count', one of which is found in:

function sum_to_N(number : integer) : integer;
{ number >= 0 }
var sum, count : integer;
begin
  sum := 0; count := 0;         {initialisation}
  while count <> number do
  begin                         {loop body}
    count := count + 1;
    sum := sum + count
  end;                          {loop body}
  sum_to_N := sum               {right !!}
end; {of sum_to_N}


Self-respecting computer scientists prefer PASCAL to most other imperative programming languages in common use. They particularly prefer it to FORTRAN and BASIC and COBOL, the last and crudest of these being the favourite of the data processing professional. There is more than mere snobbery in this preference of the academic computer scientist. PASCAL is superior to these older languages in a great many ways, one of which consists in the fact that PASCAL supports recursion. Thus, we could have declared the function

function sum_to_N(number : integer) : integer;
{ number >= 0 }
begin
  if (number = 0) then
    sum_to_N := 0
  else
    sum_to_N := number + sum_to_N(number - 1)   {obviously (?) correct}
end; {of sum_to_N}

which (I want to say) is at least more obviously correct than the iterative version.

PASCAL, like almost every usable programming language, embodies both a declarative part - that which (roughly speaking) denotes a value - and a procedural part - that which (again roughly speaking) denotes an effect on the state of the computer store. The built-in integer functions of PASCAL are part of the declarative part, the assignment statement part of the procedural part. The recursive implementation of 'sum-to-N' plays down the procedural aspect of PASCAL, the only assignment being to the name of the function. That, rather than the simple fact that it employs recursion, is why the recursive version is more perspicuous than the iterative version. It is also slower, more costly of computer memory, and will crash for values of 'number' (generally quite small values) which cause stack overflow.

Imperative programming languages are by far the most commonly used, especially in the sheltered world outside academia. The alternatives to imperative languages are the functional languages and the relational (or logic programming) languages. Languages of these latter types play down (if not out) their procedural aspects. The chief characteristic of imperative languages is that the assignment statement - updating in its simplest form - plays a fundamental role. (I should say that some languages resist this simple classification - Tempura, the temporal logic language, which is both a logic programming language and exploits assignment, is an example. I should also say that across this axis cuts another, broadly speaking orthogonal, axis separating those languages which are purely sequential from those which support parallel evaluation or execution.) A recursive declaration of 'sum-to-N', though perhaps not the most suggestive in PASCAL, would be the most natural in a functional language like LISP. Notice first that the 'if...then...else...' of the PASCAL implementation is represented explicitly by the primitive LISP function 'cond', and secondly that LISP has built-in arithmetic, the atoms '0', '1', '2', ... denoting the corresponding integers.

/* sum-to-N in (Franz-) Lisp, a functional language */

(defun sum-to-N (n)
  (cond ((eq n 0) 0)
        (t (plus n (sum-to-N (minus n 1))))))

A LISP program typically consists of a library of such functions which are called (perhaps mutually) from a main function applied to some argument(s). In the spectrum which runs from imperative to logic programming languages - a spectrum which roughly coincides with the 'low-level' to 'high-level' hierarchy - the functional and relational languages are bunched close together. A PROLOG program consists of a database of facts and rules rather than functions. If evaluation of functions applied to arguments is the central activity of a LISP interpreter, that of a PROLOG interpreter is the making of inferences from a database. And so a PROLOG interpreter incorporates an inference engine. Pure PROLOG, a subset of the real PROLOG you will find implemented on your local VAX, works by pattern-matching. Its inference engine is a resolution theorem-prover. A PROLOG data-base is a sequence of clauses. A clause is a disjunction of literals. A literal is an atom (a positive literal), or the negation of an atom (a negative literal).


But PROLOG permits only certain clauses - Horn clauses - in a data-base: that is, a clause in a PROLOG data-base can have at most one positive literal. Clauses which have one positive and no negative literals are facts. Clauses which consist of negative literals figure as queries, used 'outside' the data-base. Clauses which have one positive literal and one or more negative literals, that is clauses of the form P v -Q v ... v -R, are rules and can be re-written P :- Q, ..., R, where the ':-' is read as '...if...', and each of the commas as '...and...'.
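To make these categories concrete, here is a toy data-base (my own illustration, not the author's):

/* two facts: each a single positive literal */

animal(fido).
barks(fido).

/* a rule: the clause  dog(X) v -animal(X) v -barks(X),  re-written */

dog(X) :- animal(X), barks(X).

A query, consisting of negative literals only, is then posed 'outside' the data-base:

?- dog(fido).

and succeeds by resolving against the rule and the two facts.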

Pure PROLOG - PROLOG without the extra-logical features which damage its interpretation as a logic programming language - naturally lacks arithmetic. But one can represent arithmetic using the constant '0' and the functor 'succ'.

/* sum-to-N(N1,N) in Pure PROLOG */

sum_to_N(0,0).                      /* fact */

sum_to_N(succ(X),Z) :-              /* rule */
    sum_to_N(X,Z1),
    plus(succ(X),Z1,Z).

plus(X,0,X).                        /* fact */

plus(X,succ(Y),succ(Z)) :-          /* rule */
    plus(X,Y,Z).

The query

?- sum_to_N(succ(..(succ(0))..), Sum).      [10 succ's in all]

succeeds with

Sum = succ(succ(succ(....(0)....)))         [55 succ's in all]

Of course, real PROLOG incorporates arithmetic and therefore transcends pure logic. The 'is' of the second clause evaluates the expression which follows it and is thus not a part of PROLOG's pattern-matcher.

/* sum-to-N(N1,N) in PROLOG with arithmetic evaluation */

sum_to_N(0,0).

sum_to_N(N,S) :-
    sum_to_N(N1,S1),
    N is N1 + 1,                    /* 'is' evaluates */
    S is S1 + N.

The query

?- sum_to_N(10,X).

succeeds with

X = 55.
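Incidentally, the recursive clause above finds the answer by enumerating sum pairs upward from sum_to_N(0,0) until the first argument matches. A version which recurses downward on the first argument - my own sketch, not the author's - gives each 'is' an already-instantiated right-hand side and avoids the enumeration:

/* sum-to-N, recursing downward on the first argument (a sketch) */

sum_to_n(0,0).

sum_to_n(N,S) :-
    N > 0,
    N1 is N - 1,
    sum_to_n(N1,S1),
    S is S1 + N.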

Notice that PROLOG, like LISP, must implement some form of 'if...then...else...'. In PROLOG it is the order of clauses in a data-base which implements the 'if...then...else...' which operates between clauses (see the sketch at the end of this paragraph). The advantages to the software engineer of PROLOG are clear. A PROLOG program will typically be an order of magnitude smaller than an equivalent PASCAL program. It is sometimes said (but rather dubiously) that a programmer's productivity - in lines of code per unit time - is the same whatever language he writes in. If this is even approximately true, programmer productivity will be much higher in higher-level languages. A program written in a more approximately declarative and less completely procedural language will be correspondingly easier to understand, maintain and modify. Of course, a programming methodology has grown up for imperative languages - developed by Dijkstra and Hoare among others - which demands that the software engineer develop his programs via proofs of correctness. The software engineer looks for loop-invariants, establishes (weakest) pre-conditions for his desired post-conditions, and proves termination. But both the functional and the relational languages, by suppressing the imperative aspects of programs, seem (at least) to offer a short cut to program correctness.
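The promised sketch of clause order as 'if...then...else...' (my own illustration, not the author's): one common idiom also uses the cut '!' to commit to the first clause, so that the second clause behaves as the 'else' branch.

/* if X >= 0 then Y = X else Y = -X */

abs_val(X,X) :- X >= 0, !.
abs_val(X,Y) :- Y is -X.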


Viewed as an unusual programming language which has many of the usual procedural features in addition to many others, PROLOG is a perfectly good programming language. But one must ask: is PROLOG really a logic programming language? And should teaching it be a substitute for teaching old-fashioned logic? Like any programming language worth the name, PROLOG has the power to compute the computable functions. But one may criticise its expressive power and adequacy as a logic programming language. In expressive power PROLOG is clearly highly restricted. First, PROLOG programs - PROLOG data-bases - permit only definite Horn clauses. Horn clause (first-order) logic is a small, though undecidable, fragment of first-order logic. More importantly, conditionals with disjunctive consequents simply cannot be expressed in a PROLOG data-base. It is surprising that these severe limitations count for little in so many 'logic programming' applications of PROLOG.

PROLOG has a very non-standard disjunction. Explicit disjunctions ('...;...') are allowed in data-bases, but only in the antecedents of conditionals (that is, in the body of rules), where they constitute a kind of syntactic sugar. Thus

p :- q, (r;s), t.

really means

p :- q, r, t.
p :- q, s, t.

etc., though we shall encounter a case in which the former is more efficient than its equivalent, the two clauses without the ';'. Explicit disjunctions are allowed as queries, though this can lead to some odd behaviour. Thus the query 'p;q' may not succeed even when the query 'q' does, as for example with the data-base

p :- p.
q.

'q;p' does succeed however, an example which shows that classically logically equivalent data-bases may be quite different PROLOG programs (data-bases). This, and many other oddities, are due to the fact that PROLOG incorporates a depth-first search for the empty clause, via resolution and unification, and that the order of clauses in a data-base and the order of literals in each clause are significant. Depth-first searches, though often the most efficient, can go careering down unprofitable tree branches. Indeed no algorithm based on depth-first search can avoid all such unnecessary failures to terminate. PROLOG has no negation at all, except the 'metalinguistic' built-in functor 'not', which behaves according to the program

/* negation as failure: not(X) succeeds iff the query X fails */

not(X) :- call(X), !, fail.
not(X).

so that 'not(p)' will be entailed by any data-base which does not entail 'p'. There is, finally, the contingent fact that, on grounds of improved efficiency, all (?) current implementations of PROLOG omit the 'occur check' from the unification algorithm, so that the query

?- X = f(X).

given the empty data-base, 'succeeds' with

X = f(f(f(f(......

None of this, except possibly the last point, embodies a criticism of PROLOG as a procedural programming language. My own experience with it suggests that most programmers use PROLOG as a terse, high-level, logic-like procedural programming language, a language which offers a very rough approximation to the probably impossible ideal of the purely declarative logic programming language. It is a powerful and unstructured language with which one must be careful. It is certainly no practical substitute for logic, even for the non-logician. As a procedural language, PROLOG is an excellent language in which to prototype programs, particularly those which manipulate logics.


Writing Proof Checkers: Parsing in PROLOG

A proof-checker for the propositional and/or predicate calculi, a program which accepts proofs written by the user in a natural deduction system such as Lemmon's [1965] or Newton-Smith's [1985] and diagnoses errors in an illuminating way, is typically a program of a few thousand lines of PASCAL code. Apart from making the user interface friendly, the problem is an exercise in writing parsers and pattern-matchers. The textual representation of a proof in such a natural deduction system can be thought of as a sequence of triples, each of which is a string of input characters, the elements of each triple being a premise-string, a wff, and a justification. Each element in a triple is a string in one of three simple languages. Each language may be defined by a grammar which is either regular or context-free (or, in some versions of the language of predicate calculus wffs, an attribute grammar). Each grammar suggests a parser which both accepts strings of the language it defines, and builds an appropriate representation of the string. Take Newton-Smith's version of the propositional calculus PC as an example. Each of Newton-Smith's premise-strings is either 'Prem' - which may be thought of either as introducing wffs as premises or, perhaps better, as introducing an axiom sequent (which is of the form P |- P) - or is a list of line-numbers, each line-number corresponding to the wff on the right-hand side of the axiom sequent introduced at that line-number. A type 3 grammar which defines this language, in BNF ('Backus Naur Form', not 'British Nuclear Fuels'), is

<premise-string> ::= 'Prem' | <list-of-line-numbers>
<list-of-line-numbers> ::= <line-number> | <line-number> ',' <list-of-line-numbers>
<line-number> ::= <digit> | <digit> <line-number>
<digit> ::= '0' | '1' | .... | '9'

In practice, one wants to restrict this grammar with some 'semantic' rules which prevent the numerals naming numbers which are too large. Since 'too large' means 'not less than the current line-number', 'too large' may be quite small. Strictly speaking, the parser for the language of premise-strings really implements an attribute grammar: in this case a regular grammar in which numbers are checked for size as the parser scans the string. A sensible parser should, as it scans the input string, simultaneously build a representation of the string as (say) a set of numerals. If the string is 'Prem', the numeral should be the current line-number. If the parser does not accept the string, it should deliver an appropriate error message.
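Such a parser might look as follows in PROLOG - a sketch of my own, not the author's, written in the Grammar Rule notation introduced below, and using the built-in predicate 'name' to turn a list of digit characters into a number (the attribute checks against the current line-number would be added where each number is built):

/* a premise-string is 'Prem' or a list of line-numbers */

premise_string(prem)   --> "Prem".
premise_string([N|Ns]) --> line_number(N), rest_numbers(Ns).

rest_numbers([N|Ns]) --> ",", line_number(N), rest_numbers(Ns).
rest_numbers([])     --> "".

line_number(N) --> digits(Ds), { name(N,Ds) }.

digits([D|Ds]) --> digit(D), digits(Ds).
digits([D])    --> digit(D).

digit(D) --> [D], { D >= 48, D =< 57 }.   /* ASCII codes for '0'..'9' */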

The language of justifications is complex. One can think of it as defined by a strong LL(1) context-free grammar, a fact which directly implies a simple and efficient parser. (A grammar is strong LL(k) if, for any given string, at most a k-symbol look-ahead is required to determine which production should be applied at any point during a left-to-right scan.) In BNF the grammar is

<justification> ::= <list-of-line-numbers> <rule-name>
<list-of-line-numbers> ::= <line-number> | <line-number> ',' <list-of-line-numbers>
where '<line-number>' is defined as before. Naturally, the attribute restrictions placed on premise-strings must also be placed on justifications.

When we turn to the language of wffs - PC-Wffs - we encounter something that the logician can learn from the computer scientist. If a proof is to be machine-checkable, what counts as a proof must be spelled out without any ambiguity. What is to count as a wff and how a wff is to be parsed must be perfectly clear. But compare the typical logician's treatment of wffhood with that of the computer scientist. What's in a PC wff? The typical answer, which begins simply and is free from ambiguity, is as follows. Let the variables 'a' and 'b' range over strings of symbols from the vocabulary of PC-Wffs. Then say

(1) that if a is a propositional letter, then a is a wff;

Teaching Logic by Computer :A HackerS Guide

123

(2) that if a is a wff, then so is -a;
(3) that if a and b are wffs, then so are (i) (a & b); (ii) (a v b); (iii) (a -> b); and (iv) (a <-> b);
(4) and that there are no other wffs.

This recursive definition is adequate, except that it multiplies brackets. Every occurrence of a connective, except '-', is accompanied by a pair of brackets. What counts as a wff is up to us, and so there is a strong impulse to do something about the brackets. One response is to associate a relative precedence with each connective. We say that the connectives are written in order of decreasing precedence: '-' has higher precedence than '&', and '&' than 'v', and 'v' than '->', and '->' than '<->'. Connectives of higher precedence bind more tightly, so that the string

-P->PvQ<->R

is understood to mean

(((-P)->(PvQ))<->R)

But there are at least two objections to this procedure. First, the abbreviated, bracket-less string is strictly speaking not a wff at all, but is at best a conventional representation of the wff below it. All this is awkward, the original simplicity of the definition of wffhood having been lost. We are allowing a non-wff to be the proxy for a wff. A better solution is to define wffs as a context-free language from scratch, so that both fully bracketed and abbreviated strings count as wffs. The definition we give leads to a simple exercise in parser construction. Our context-free grammar leads straightforwardly to a recursive descent parser which we code in PROLOG. Secondly, if we have a bracket-dropping convention such as the one above, then we need an explicit convention about the associativities of the binary connectives. As an alternative, we construct a context-free grammar for the propositional calculus which contains its associativities implicitly. In fact in wffs as we define them, all the binary connectives are right-associative.

The terminals of the language of the propositional calculus are the propositional letters (or 'atoms'), the brackets and the logical connectives. PC-Wffs is then the class of strings defined by the following context-free grammar Gram(PC), in BNF:

<wff> ::= <conditional-wff> | <conditional-wff> <-> <wff>
<conditional-wff> ::= <disjunction-wff> | <disjunction-wff> -> <conditional-wff>
<disjunction-wff> ::= <conjunction-wff> | <conjunction-wff> v <disjunction-wff>
<conjunction-wff> ::= <negation-wff> | <negation-wff> & <conjunction-wff>
<negation-wff> ::= <factor> | - <negation-wff>
<factor> ::= atom | ( <wff> )
<atom> ::= A | .... | Z

One of the elegant features of PROLOG, as usually implemented, is its Grammar Rules, which enable it to parse context-free grammars directly. A pre-processor built into the interpreter translates a clause of the form

head(X) --> body_1(Y_1), body_2(Y_2), ..., body_n(Y_n).

into a PROLOG rule which can be read as 'X is a sentence which consists of a sub-sentence Y_1 of type body_1, concatenated with a sub-sentence Y_2 of type body_2, concatenated with ...., a sub-sentence Y_n of type body_n'. [See Clocksin and Mellish (1984), Ch. 9] Thus our grammar can be transcribed almost directly into PROLOG as follows. The ugly clause for 'prop_var' is ordinary PROLOG. It is needed so that the pre-processor does not do its work on 'name', a built-in predicate of PROLOG.
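Concretely, the translation threads two extra arguments through each goal, representing the input string and what remains of it once the sub-sentence has been consumed. For a hypothetical rule with non-terminals p, q and r (my example, not the author's), the pre-processor produces roughly:

/* the grammar rule    p(X) --> q(X), r.    becomes: */

p(X, S0, S) :- q(X, S0, S1), r(S1, S).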


/* PARSER 1 : Inefficient Parser for PC wffs */

wff(['iff',Left,Right]) -->
    conditional_wff(Left), iff, wff(Right).
wff(X) --> conditional_wff(X).

conditional_wff(['if',Left,Right]) -->
    disjunction_wff(Left), if, conditional_wff(Right).
conditional_wff(X) --> disjunction_wff(X).

disjunction_wff(['or',Left,Right]) -->
    conjunction_wff(Left), or, disjunction_wff(Right).
disjunction_wff(X) --> conjunction_wff(X).

conjunction_wff(['and',Left,Right]) -->
    negation_wff(Left), and, conjunction_wff(Right).
conjunction_wff(X) --> negation_wff(X).

negation_wff(['not',Right]) -->
    negation_sign, negation_wff(Right).
negation_wff(X) --> factor(X).

factor(X) --> left_bracket, wff(X), right_bracket.
factor(X) --> prop_var(X).

iff --> "<->".
if --> "->".
or --> "v".
and --> "&".
negation_sign --> "-".
left_bracket --> "(".
right_bracket --> ")".

/* End of PARSER 1 */
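A clause for 'prop_var' along the following lines would do the job described above (my sketch, not the author's). It is written as ordinary PROLOG, with the two string arguments the pre-processor would otherwise add made explicit, so that the built-in 'name' is not itself treated as a non-terminal:

/* accept a single upper-case letter as a propositional variable */

prop_var(X, [C|Rest], Rest) :-
    C >= 65, C =< 90,             /* ASCII codes for 'A'..'Z' */
    name(X, [C]).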

This PROLOG implementation of the parser is correct and immediate, but very inefficient. We would prefer an efficient recursive descent parser for PC-wffs which accepts all and only strings of the language and which builds the expression tree for any acceptable string. If Gram(PC) were strong LL(1) then writing a recursive descent parser would be easy. But Gram(PC) is not LL(k) at all, for any k, since we have the initial choice

<wff> -> <conditional-wff>
<wff> -> <conditional-wff> <-> <wff>

and since a string derivable from <conditional-wff> may be indefinitely long. So we must look for a new LL(k) grammar for PC-wffs, with k = 1 if we are lucky. And we are. The recipe is as follows. We first introduce an end-of-input marker 'end-char' (say '$'), since we want a grammar which involves 'looking ahead' one character. And we now aim for a context-free grammar which is LL(1), which has start symbol 'S', whose only production with 'S' on the left-hand side is

S -> W $

where the strings derivable from 'W' are those in PC-wffs. A grammar which achieves all this (where 'W' corresponds to '<wff>', 'C' to '<conditional-wff>', etc.) is the grammar Gram'(PC):

W  -> C W'
W' -> <-> W
W' -> $

C  -> D C'
C' -> -> C
C' -> $

D  -> K D'
D' -> v D
D' -> $

K  -> N K'
K' -> & K
K' -> $

N  -> - N
N  -> F

F  -> ( W )
F  -> A

A  -> A
 ....
A  -> Z

Now we have a non-trivial use of the PROLOG disjunction ';' in a data-base rule. We can compress the three productions with W and W' on the left-hand side into one PROLOG rule, and similarly for C and C', D and D', and K and K', leading to the following efficient parser, which needs no significant back-tracking. In PROLOG we can (cheat a bit and) put the end-of-input character '$' to be the empty list "".


/* PARSER 2 : Efficient Parser for PC Wffs */

wff(Tree) -->
    conditional_wff(Tree_Left),
    (  (empty, equals(Tree,Tree_Left))
    ;  (iff, wff(Tree_Right),
        equals(Tree,['iff',Tree_Left,Tree_Right]))  ).

conditional_wff(Tree) -->
    disjunction_wff(Tree_Left),
    (  (empty, equals(Tree,Tree_Left))
    ;  (if, conditional_wff(Tree_Right),
        equals(Tree,['if',Tree_Left,Tree_Right]))  ).

disjunction_wff(Tree) -->
    conjunction_wff(Tree_Left),
    (  (empty, equals(Tree,Tree_Left))
    ;  (or, disjunction_wff(Tree_Right),
        equals(Tree,['or',Tree_Left,Tree_Right]))  ).

conjunction_wff(Tree) -->
    negation_wff(Tree_Left),
    (  (empty, equals(Tree,Tree_Left))
    ;  (and, conjunction_wff(Tree_Right),
        equals(Tree,['and',Tree_Left,Tree_Right]))  ).

negation_wff(['not',Tree_Right]) -->
    negation_sign, negation_wff(Tree_Right).
negation_wff(Tree) --> factor(Tree).

factor(Tree) --> left_bracket, wff(Tree), right_bracket.
factor(Tree) --> prop_var(Tree).

empty --> "".

iff --> "<->".
if --> "->".
or --> "v".
and --> "&".
negation_sign --> "-".
left_bracket --> "(".
right_bracket --> ")".

/* the pre-processor adds two string arguments to each call of
   'equals' in the rules above, so 'equals' takes four arguments */

equals(X1,Y1,X2,Y2) :- X1 = Y1, X2 = Y2.

/* End of PARSER 2 */

One can think of a line in a proof in a typical natural deduction system as a sequent plus justification, the representation of the sequent being a pair of strings - the first string displaying the premises in the sequent and the second the wff which is their consequent. The grammars both for the (language of) premise-strings and the (language of) justification-strings should be regular or at least LL(1), and hence at least as easy to parse as that of the wffs. A line in Newton-Smith's system for the propositional calculus will be acceptable iff the sequent it represents follows from previous sequents cited in the justification by the rule cited in the justification. For example, if the current - say 20th - line of the entered proof were

6,7,8   (20)   (P->Q)&(Q->P)      12,13 &I

the proof checker should check (perhaps in the following order):

(1) that the premise-string 6,7,8 parses, producing a premise-set {6,7,8} whose elements have values less than the current line-number - the elements of the set name (the line-numbers of) wffs introduced on the right-hand-sides of introductions of axiom sequents;

(2) that (P->Q)&(Q->P) is a wff, and parses yielding an expression-tree <'&', tree.left, tree.right>;

(3) that the justification-string parses, producing a list <12,13> of line-numbers and a rule &I;

(4) that the expression trees of the wffs in lines 12 and 13 are tree.left and tree.right (respectively, or vice versa);

(5) and that {6,7,8} is the union of the premise-sets associated with lines 12 and 13.

If one or more of these conditions is not satisfied, the error should be trapped (as early as possible) and an error message should be delivered to the user. This is where the interest begins. It is easy enough to identify some error like "no such rule of inference" or "wrong number of lines cited in justification" or "for rule ... premises do not match" etc. But to display the most germane error, and to supply advice, is a real problem in artificial intelligence and a worthwhile research project. We might expect an advanced logic teaching package to incorporate, first, an expert system to give advice, given a target sequent, as to which rules to apply, and secondly, an automatic theorem prover of sufficient power and efficiency to prove, though perhaps not optimally, all the likely target sequents.
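To give the flavour in PROLOG, the checks (4) and (5) for &I might be transcribed roughly as follows - a sketch of my own, in which the line(Premises,Tree,Justification) representation, the list Proof of already-checked lines, and the helpers nth_line and union are all hypothetical (the 'vice versa' case of (4) would need a second clause):

/* a checked line: line(PremiseSet, ExpressionTree, Justification) */

check_and_I(line(Prems,['and',L,R],just([N1,N2],and_I)), Proof) :-
    nth_line(N1, Proof, line(P1, L, _)),    /* line N1 proves tree.left  */
    nth_line(N2, Proof, line(P2, R, _)),    /* line N2 proves tree.right */
    union(P1, P2, Prems).                   /* premise-sets must join up */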

Finally, let us return briefly from a philosopher's logic to software engineering. Because of the impact of software development methodologies like VDM, we can expect the teaching of logic to become more widespread, and the use of logic teaching programs to be of real help in the down-to-earth business of teaching the skill of constructing proofs. All this should be more good news for logicians. In their recent text, Manna and Waldinger [1985, p. vii] suggest that computer science demands that logic

    should replace calculus as a requirement for undergraduate majors.

The practical business of generating reliable software looks likely to give logic and logic teaching the kind of boost that physics gave to analysis.

References

Clocksin WF and Mellish CS (1984) Programming in PROLOG, Springer-Verlag, 2nd edition.

Gibbins PF (1987) Logic Plus for the IBM PC, Oxford University Press, New York.

Jones CB (1986) Systematic Software Development Using VDM, Prentice-Hall.

Lemmon EJ (1965) Beginning Logic, Nelson University Paperbacks.

Manna Z and Waldinger R (1985) The Logical Basis for Computer Programming Vol. I, Addison-Wesley.

Newton-Smith WH (1985) Logic: An Introductory Course, Routledge & Kegan Paul.