Journal of Applied Logic 13 (2015) 188–196
Reflecting rules: A note on generalizing the deduction theorem

Gillman Payette
Article history: Received 1 February 2015; Accepted 18 February 2015; Available online 6 March 2015

Keywords: Consequence relations; Proof theory; Admissible rules; Deduction theorem; Uniform substitution; Substructural logic; Universal logic; Modal logic
Abstract

The purpose of this brief note is to prove a limitative theorem for a generalization of the deduction theorem. I discuss the relationship between the deduction theorem and rules of inference. Often when the deduction theorem is claimed to fail, particularly in the case of normal modal logics, it is the result of a confusion over what the deduction theorem is trying to show. The classic deduction theorem is trying to show that all so-called 'derivable rules' can be encoded into the object language using the material conditional. The deduction theorem can be generalized in the sense that one can attempt to encode all types of rules into the object language. When a rule is encoded in this way I say that it is reflected in the object language. What I show, however, is that certain logics which reflect a certain kind of rule must be trivial. Therefore, my generalization of the deduction theorem does fail where the classic deduction theorem didn't.

© 2015 Elsevier B.V. All rights reserved.
1. Introduction
It is an often discussed question whether the deduction theorem fails for modal logics. One simple resolution to this question is to say that it depends on how deduction or derivability, and so the deduction theorem, is construed. A quick way to see two different ways of construing deduction centers around the rule of necessitation. Usually the deduction theorem is construed as: if Γ, β ⊢ α, then Γ ⊢ β ⊃ α. In this case, Γ ⊢ α is generally interpreted as: the conclusion α (a single formula) is derivable from the premises Γ (a set of formulas). But the rule of necessitation, viz. that □α is derivable from α, is a standard rule of so-called normal modal logics. Yet generally ⊬ α ⊃ □α, i.e., not all such conditionals are theorems. The apparent failure of the deduction theorem results from using the same sense of 'derivable' in the rule of necessitation and for the turnstile in Γ ⊢ α. In Γ ⊢ α, rules for deduction can be applied to the elements of Γ to get α. In the case of necessitation,
E-mail address: [email protected].

¹ I would like to thank both the Killam Trusts for a Killam Postdoctoral fellowship at Dalhousie University from September 2012–September 2014 and the Social Sciences and Humanities Research Council of Canada for a Banting postdoctoral fellowship, which have made this research possible. I would also like to thank Peter Schotch for inspiring this project, and Clayton Peterson, Andrew Irvine and the two referees for their many, very helpful editorial comments.
http://dx.doi.org/10.1016/j.jal.2015.03.001
the rule can only be applied to theorems. So □α is derivable from α, but only when α is a theorem. For a contemporary and complete discussion of the deduction theorem, one can consult Hakli and Negri [8].

Distinct notions of derivability were unavailable to early discussions of modal logic, i.e., Lewis [11]. As Parry [14] points out, Lewis intended his system² of strict implication to be 'one whose "p ⥽ q" best represents "q is deducible from p" as ordinarily used' (p. 134). Thus the senses of 'derivable' were limited. The deduction theorem for strict implication would be important, then, since it should serve to translate all valid cases of 'q is deducible from p as ordinarily used' into a theorem of strict implication, i.e., p ⥽ q. Although there were early results clarifying the correctness of the deduction theorem for modal logic, i.e., Barcan [1], the full range of possible senses of 'derivability' was not investigated until much later. Modern studies of inference rules in proof-theory have introduced a number of distinctions that help clarify the notion of derivability, see Fagin et al. [4] and Iemhoff and Metcalfe [9]. In what follows I will fix some concepts and notation, then rehearse some ways of making sense of derivability via inference rules. The goal is to prove a limitative theorem for a general conception of the deduction theorem, which I call the 'Failure of Reflection Theorem'.

2. Rules in logic

2.1. Logics

Before getting to rules, I have to say something about logics and how they are specified. First, each logic L is specified by its consequence relation; that is, a logic is a relation between sets of formulas and a single formula. The formulas are specified as part of a recursively constructed propositional language L, built up from a set of atomic formulas At = { p, q, ... }. In the interests of generality, the specific structure of the language is left open. As per usual, there are two ways of generating a logic: by proof or by truth.
When we give a proof theory for a logic, we then have to specify the conditions under which a proof (be it in a sequent system or a Hilbert-style system) validates a claim of consequence Γ ⊢L α. In sequent systems, this is done by saying that the sequent Γ ⇒ α is producible from the rules for manipulating sequents. In the case of a Hilbert-style system, usually, Γ ⊢L α is validated iff there are γ1, ..., γn ∈ Γ such that (γ1 ∧ ... ∧ γn) ⊃ α is a theorem of the logic L, i.e., (γ1 ∧ ... ∧ γn) ⊃ α is the last formula in a sequence of formulas in which each formula is either an axiom or producible from formulas earlier in the sequence using the rules of the system. Of course that requires a language with conjunction and a conditional. I will leave further discussion of proof-theory until later.

A logic L is specified by a semantics when there is a set of structures S, and a relation of satisfaction (⊨) between the structures of S and L which says when a formula of L is true at a structure M ∈ S. Then, Γ ⊨L α iff for all M ∈ S, if M ⊨ γ for all γ ∈ Γ, then M ⊨ α. Some examples of this: classical logic can be specified by the set of truth valuations v : At → { T, F }, where v ⊨ α iff v*(α) = T, v* being the unique extension of v to the whole of L for classical logic. Similarly for modal logics: a formula α is satisfied in a structure ⟨M, w⟩, which is an ordered pair of a model M = ⟨W, R, v⟩ (in which W is a non-empty set, also referred to as |M|, R ⊆ W × W, and v : At → P(W)) and w ∈ |M|. The set of structures for a modal logic can then be specified by a set of models M by setting S = { ⟨M, w⟩ : w ∈ |M| & M ∈ M }. In the case of the modal logic K, M is the set of all models; in the case of S4, it can be specified by setting M to be all models in which the relation R on |M| is reflexive and transitive. These few examples will suffice for the moment.
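The semantic recipe just described (a set of structures plus a satisfaction relation, with consequence as truth-preservation across all structures) can be sketched concretely for the classical case. The tuple encoding of formulas and all function names below are this note's own illustration, not apparatus from the paper.

```python
# A minimal, hypothetical sketch of specifying classical logic semantically:
# Gamma |= alpha iff every valuation satisfying all of Gamma satisfies alpha.
# Formulas are atoms (strings) or tuples ('not', x), ('and', x, y), ('imp', x, y).
from itertools import product

def atoms(phi):
    """Collect the atomic formulas occurring in phi."""
    if isinstance(phi, str):
        return {phi}
    return set().union(*(atoms(sub) for sub in phi[1:]))

def holds(v, phi):
    """v*: extend a valuation v on atoms to the whole language."""
    if isinstance(phi, str):
        return v[phi]
    op = phi[0]
    if op == 'not':
        return not holds(v, phi[1])
    if op == 'and':
        return holds(v, phi[1]) and holds(v, phi[2])
    if op == 'imp':  # the material conditional
        return (not holds(v, phi[1])) or holds(v, phi[2])
    raise ValueError(op)

def entails(gamma, alpha):
    """Brute-force check of Gamma |= alpha over all relevant valuations."""
    ats = sorted(set().union(atoms(alpha), *(atoms(g) for g in gamma)))
    for bits in product([True, False], repeat=len(ats)):
        v = dict(zip(ats, bits))
        if all(holds(v, g) for g in gamma) and not holds(v, alpha):
            return False
    return True
```

For instance, `entails([('imp', 'p', 'q'), 'p'], 'q')` holds (modus ponens), while `entails(['p'], 'q')` does not.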
In order to display whether the consequence relation of a logic is specified by a proof theory or a semantics, we shall use Γ ⊢L α for proof theory, and Γ ⊨L α for semantics. Accordingly, completeness proofs are
² The system Lewis took to be the system of strict implication is his S2; see Parry's article for more specifics.
theorems which say that we can determine the same consequence relation by a semantics and a proof theory. That is, Γ ⊢L α ⟺ Γ ⊨L α. Just as a brief point of terminology, I will refer to a sentence α of L such that ⊢L α as a theorem of L, regardless of whether L is generated by a proof-theory.

However they are determined, there are certain properties that "nice" consequence relations have. In some cases these properties are thought to be necessary for consequence relations to be considered as a notion of inference or deduction, cf. Scott [16]. These properties, having been first laid down in Tarski [19], characterize so-called Tarskian consequence relations. They are:

[R] If α ∈ Γ, then Γ ⊢ α.
[M] If Γ ⊢ α and Γ ⊆ Γ′, then Γ′ ⊢ α.
[T] If Γ, α ⊢ β and Γ ⊢ α, then Γ ⊢ β.
[C] If Γ ⊢ α, then Γ′ ⊢ α for some finite subset Γ′ of Γ.

The labels suggest certain properties of relations. R is to suggest reflexivity, and T transitivity; the T rule is often referred to as cut. M suggests the name monotonicity, and C is intended to suggest the property of compactness. Although the properties are stated as though they are rules of inference, they are not necessarily rules of inference for a proof theory. They are meta-proof-theoretic rules in this context; although in sequent calculi they are sometimes bona fide rules for manipulating sequents, they are given the special title of structural inference rules. Structural inference rules are so-called because they have to do with the structure of the sequents rather than with the connectives of the language.³

Another important class of consequence relations are the structural relations, not to be confused with structural inference rules. Structural consequence relations preserve consequence under substitutions. A substitution is any function from atomic formulas to formulas. Such functions can then be used to generate substitution instances s(α) of any formula α of L by replacing each atomic formula p in α by s(p).
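Substitution can be made concrete in a few lines of code; the tuple encoding of formulas and the helper below are illustrative assumptions of this note, not the paper's apparatus.

```python
# A substitution s maps atoms to formulas; s(alpha) replaces every atom p
# occurring in alpha by s(p).  Formulas are atoms (strings) or tuples whose
# first entry is a connective, e.g. ('imp', 'p', 'q') for p -> q.
def substitute(s, phi):
    """Apply the substitution s (a dict from atoms to formulas) to phi."""
    if isinstance(phi, str):
        return s.get(phi, phi)  # atoms not mentioned by s are left fixed
    return (phi[0],) + tuple(substitute(s, sub) for sub in phi[1:])

# s(p -> q) where s(p) = (q & r) and s(q) = p:
s = {'p': ('and', 'q', 'r'), 'q': 'p'}
instance = substitute(s, ('imp', 'p', 'q'))
# instance is ('imp', ('and', 'q', 'r'), 'p')
```

A structural consequence relation is then one for which Γ ⊢ α guarantees s[Γ] ⊢ s(α) for every such s.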
A consequence relation ⊢ is structural when, if Γ ⊢ α, then s[Γ] ⊢ s(α) for any substitution s, where s[Γ] = { s(γ) : γ ∈ Γ }. We will say that a logic L is structural when its consequence relation ⊢L is structural. Many common systems of logic are structural. Indeed, Hilbert-style axiomatizations that include the rule of uniform substitution

[US] If ⊢L α, then ⊢L s(α),

are structural. Sometimes US is not included when an axiom system is presented as a set of axiom schemata, but using such a formulation amounts to building in uniform substitution implicitly. Sequent systems are also specified by schematic inference rules, and so will give rise to a structural consequence relation. Although

³ The distinction between structural and so-called operational rules is rather fuzzy, as demonstrated by investigations in substructural logic, cf. Paoli [13], which I will not go into here.
these examples come from generating a logic proof-theoretically, semantically determined logics can of course be structural.

In the past few decades so-called substructural logics have come to be seen as philosophically important, see Paoli [13] for further elaboration. Somewhat confusingly, substructural logics are those which relax the structural rules of inference rather than the rule of substitution; logics which don't obey the rule of substitution are non-structural logics. Indeed, every structural rule has been questioned; even the structure of the sets is up for grabs. In substructural sequent calculi the sets of formulas are assumed to be multi-sets, which means that although the order of the elements on the left of the turnstile is irrelevant, there can be multiple copies of elements. That makes the rules for contraction and multiplication of elements new structural rules:

[Con] If Γ, α, α ⊢ β, then Γ, α ⊢ β.
[Mul] If Γ, α ⊢ β, then Γ, α, α ⊢ β.

Without those rules it may be that Γ along with two copies of α implies β, but Γ with only one copy of α will not imply β. This kind of rule has become important in recent debates concerning whether there is a paradox of validity: Beall and Murzi [2] and Shapiro [18]. One of the fundamental innovations of sequent calculi from Gentzen [5] was to introduce the idea of sets on the right of the turnstile, which introduced multi-conclusion consequence: Γ ⊢ Δ. In spite of its importance, I am setting that complication aside for future development. In the sequel, consequence relations will be sets of pairs ⟨Γ, α⟩, and all other assumptions about the relation will be made explicit. With these concepts fixed, I can turn to rules and their characterization.

2.2. Rules

In general, rules are constructed as follows:

  α1, ..., αn
  ----------- X
       β

Axioms are special rules which don't have premises, which is to say, as I have been displaying them, that there is nothing above the line, as in:

  ----------- Axiom
       β
Rules, as presented above, can also be thought of as ordered pairs ⟨Γ, β⟩, where Γ = { α1, ..., αn }. I will, for the most part, use the ordered-pair presentation. Any rule ⟨Γ, β⟩ can be interpreted in different ways. The discussion of rules in logic usually centers around which properties, be they proof-theoretic or semantic, the rules preserve. In proof-theory sometimes the concern is whether theorem-hood is preserved; in semantics the concern may be whether truth is preserved. Since I am working with consequence relations in this paper, and consequence relations can be determined by either proof-theory or semantics, I will focus on interpretations of rules which are common to both paradigms. Although semantic studies of rules, as in Fagin et al. [4], can make many more distinctions than seem to be available in proof-theory, I will focus on a few notions of rules from proof-theory. In the usual proof-theoretic literature, one commonly distinguishes two types of rules: derived rules and admissible rules.⁴

⁴ There are some terminological issues here, but these names for these kinds of rules are by far the most ubiquitous, albeit least descriptive of the variations.
Definition 1. ⟨Γ, β⟩ is a derived rule for ⊢L, indicated by DerL(⟨Γ, β⟩), iff for every substitution s, s[Γ] ⊢L s(β).

There is another sort of rule that leads to a kind of conservative extension of a logic. It was introduced in Lorenzen [12] to characterize the rules that preserve the set of theorems of a given logic. Informally, every substitution instance which preserves the theoremhood of the antecedent formulas of the rule preserves the theoremhood of the conclusion of the rule. Formally,

Definition 2. ⟨Γ, β⟩ is an admissible rule for ⊢L, indicated by AdmL(⟨Γ, β⟩), iff for every substitution s, if ⊢L s(α) for all α ∈ Γ, then ⊢L s(β).

It is clear from the definitions that if X is a derived rule, then X is an admissible rule. But the converse is not always the case. A logic in which all admissible rules are derivable is said to be structurally complete. Admissible rules and derivable rules come apart. Consider the well-known rule admissible, but not derivable, in the modal logic K: ⟨□p, p⟩.⁵ Considering this fact in detail is instructive, albeit well-worn. First we show that it is not derivable, via a semantic argument. Let M = ⟨W, R, v⟩ be such that W = { w } and R = ∅ while v(p) = ∅. Since worlds that are not related to any worlds make □⊥ true, M, w ⊨ □⊥, but M, w ⊭ ⊥; thus □⊥ ⊬K ⊥ (by completeness of K with respect to all models). So the substitution which takes p to ⊥ is such that □s(p) ⊬K s(p), which means ⟨□p, p⟩ is not a derived rule. To see that it is admissible, argue by contraposition using a 'lead in' argument.⁶ Suppose s(p) = α, but α isn't a theorem of K. By completeness, there is a model M and w ∈ |M| such that M, w ⊭ α. Form a new model M′ = ⟨|M| ∪ { w′ }, R ∪ { ⟨w′, w⟩ }, v⟩, where w′ is a new element not in |M|. Since M′ must also be a model of K and M′, w′ ⊭ □α, completeness gives that □α is not a theorem. Since α was chosen arbitrarily, as was s, ⟨□p, p⟩ is admissible for K. Note also that the admissibility of rules is not preserved in all extensions.
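The one-world countermodel used in the non-derivability argument can be checked mechanically. The evaluator below is a hedged sketch in this note's own encoding (models as (W, R, V) triples), not code from the paper.

```python
# Checking the countermodel for <Box p, p> as a derived rule of K: in a model
# whose only world sees no worlds, Box(bot) is vacuously true while bot fails.
def sat(model, w, phi):
    """Truth at world w; model = (W, R, V), R a set of pairs, V: atom -> worlds."""
    W, R, V = model
    op = phi[0]
    if op == 'atom':
        return w in V.get(phi[1], set())
    if op == 'bot':
        return False
    if op == 'not':
        return not sat(model, w, phi[1])
    if op == 'imp':
        return (not sat(model, w, phi[1])) or sat(model, w, phi[2])
    if op == 'box':  # true iff phi[1] holds at every R-successor of w
        return all(sat(model, v, phi[1]) for (u, v) in R if u == w)
    raise ValueError(op)

M = ({'w'}, set(), {})                   # one dead-end world, R empty
assert sat(M, 'w', ('box', ('bot',)))    # Box bot holds vacuously
assert not sat(M, 'w', ('bot',))         # bot fails
```

So the substitution taking p to ⊥ witnesses the failure of derivability, exactly as in the text.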
I will refer to the extension of a logic L by a formula α as L ⊕ α. Since a logic in this general setting is simply a set of pairs ⟨Γ, α⟩, extensions must be specified carefully. What is wanted is something analogous to adding an axiom in classical modal logic. When a formula is added as an axiom, all of its substitution instances are theorems, i.e., in L ⊕ α, for all s, ⊢L⊕α s(α). In the case of simply adding α, one would simply have L ∪ { ⟨∅, α⟩ }. But adding an axiom is more than either adding a formula or all of its substitution instances, since the new axiom can be used to derive new consequence relationships. What that amounts to is requiring that the extension be closed under T. I will define an extension as follows.

Definition 3. Let L be a logic, and α ∈ LL; then L ⊕ α is the minimal structural consequence relation ⊢ such that L ⊆ ⊢, ⊢ α, and ⊢ obeys T.

When a consequence relation obeys T I will say that it is transitive. Notice that even if L isn't transitive, any extension of it will be. What this amounts to is saying that transitivity is a sine qua non for consequence relations in this essay. An example where admissibility isn't preserved by extension is that ⟨□p, p⟩ isn't admissible in K ⊕ □⊥; note that the only substitution instance of □⊥ is □⊥. In L = K ⊕ □⊥, □⊥ is a theorem, but ⊥ isn't. Therefore, when s(p) = ⊥, we have a case where ⊢L □s(p) but ⊬L s(p).

Proof-theoretic studies of rules are usually confined to these two types. The interest in rules centers around rules that do not interfere with what is in the logic already, i.e., rules that are conservative over the theorems of the logic. But there are a number of other questions that one could pose.
⁵ The example can be found in Kracht [10, p. 500].
⁶ I thank Peter Schotch for showing me this technique.
The so-called MacIntosh⁷ rule ⟨p ⊃ □p, ♦p ⊃ p⟩ isn't admissible in K, so it isn't derivable. But consider the following proof, which one might say is done in K:

(1) p ⊃ □p
(2) ¬p ⊃ □¬p      [US], 1
(3) ¬□¬p ⊃ ¬¬p    [MT], 2
(4) ♦p ⊃ p        def ♦, double negation, 3.

This proof doesn't use any rules that are not available in K: uniform substitution and modus tollens are clearly available, as is double negation. What this proof shows is that the extension of K by p ⊃ □p has ♦p ⊃ p as a theorem, i.e., ⊢K⊕(p⊃□p) ♦p ⊃ p. This kind of argument concerns what happens in extensions of a logic. So in K, the MacIntosh rule "works" in a certain sense. I will identify the 'sense' in question by saying that the rule is demonstrable for K.⁸

Definition 4. ⟨Γ, β⟩ is a demonstrable rule (DemL(⟨Γ, β⟩)) for ⊢L iff, when each α ∈ Γ is added as an axiom to ⊢L, β is a theorem of ⊢L thus extended. Alternatively, ⟨Γ, β⟩ is demonstrable in L iff ⊢L⊕Γ β.

A demonstrable rule is one that shows what will happen in an extension of a logic in which the antecedent formulas of the rule are added as new axioms. Such a rule represents a species of subjunctive; one asks 'what would happen were we to take on new axioms?' It is easy to see that if ⟨Γ, β⟩ is a derived rule, then it is a demonstrable rule.

Observation 1. Let L be a structural, transitive logic. If DerL(⟨Γ, β⟩), then DemL(⟨Γ, β⟩).

Proof. If ⟨Γ, β⟩ is derived, then for any substitution s, s[Γ] ⊢L s(β). But then L ⊕ Γ will be such that for any substitution s, ⊢L⊕Γ s(γ) for all γ ∈ Γ. So by [T], ⊢L⊕Γ s(β); in particular, ⊢L⊕Γ β. □

However, demonstrable rules needn't be derived. For instance, MacIntosh's rule is a demonstrable rule for K, but it isn't a derived rule in K, i.e., p ⊃ □p ⊬K ♦p ⊃ p. Thus Dem(X) doesn't imply Der(X), generally. Similarly, the MacIntosh rule isn't admissible in S4. In S4, □p ⊃ □□p is a theorem, but ♦□p ⊃ □p isn't. Letting s(p) = □p, we have a counterexample to the admissibility of ⟨p ⊃ □p, ♦p ⊃ p⟩. However, the MacIntosh rule is demonstrable in S4.
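The S4 counterexample to admissibility can be verified on a small reflexive, transitive model. Everything below (the encoding, the names, and the particular three-world model) is this note's own illustration, not material from the paper.

```python
# On a reflexive, transitive (S4) model, the substitution instance
# Box p -> Box Box p of the premise holds at every world, while the instance
# Dia Box p -> Box p of the conclusion fails at world 'w'.
def sat(model, w, phi):
    """Truth at w; model = (W, R, V), R a set of pairs, V: atom -> set of worlds."""
    W, R, V = model
    op = phi[0]
    if op == 'atom':
        return w in V[phi[1]]
    if op == 'imp':
        return (not sat(model, w, phi[1])) or sat(model, w, phi[2])
    if op == 'box':  # phi[1] at every successor
        return all(sat(model, v, phi[1]) for (u, v) in R if u == w)
    if op == 'dia':  # phi[1] at some successor
        return any(sat(model, v, phi[1]) for (u, v) in R if u == w)
    raise ValueError(op)

# w sees itself, a, and b; a and b see only themselves: reflexive and transitive.
W = {'w', 'a', 'b'}
R = {('w', 'w'), ('a', 'a'), ('b', 'b'), ('w', 'a'), ('w', 'b')}
M = (W, R, {'p': {'a'}})

p = ('atom', 'p')
premise_inst = ('imp', ('box', p), ('box', ('box', p)))     # s(p ⊃ □p), s(p) = □p
conclusion_inst = ('imp', ('dia', ('box', p)), ('box', p))  # s(♦p ⊃ p), s(p) = □p
assert all(sat(M, x, premise_inst) for x in W)  # premise instance holds everywhere
assert not sat(M, 'w', conclusion_inst)         # conclusion instance fails at w
```

At w, ♦□p holds because a sees only itself and p is true there, while □p fails at w because p is false at b.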
That follows from the general fact that demonstrability, unlike admissibility, is preserved by extensions. The preservation of demonstrability by extensions follows immediately from the definitions of demonstrability and extension. Therefore, in general, Dem(X) doesn't imply Adm(X) either. There are also admissible rules that aren't demonstrable. ⟨□p, p⟩ is an admissible rule of K, but it isn't demonstrable. If □p is added as an axiom to K, then for all θ, ⊢K⊕□p □θ, which implies K ⊕ □⊥ = K ⊕ □p. But it is easily seen that ⊬K⊕□p p. I will now turn to generalizing the deduction theorem.

⁷ So called by Brian Chellas and Krister Segerberg in honor of Jack MacIntosh, Chellas and Segerberg [3], although it has been mentioned by others in unpublished work dating from the 1970's.
⁸ I am taking the notion of demonstration from Aristotle, where a demonstration is a derivation which starts from certainties, and axioms are treated as certainties within a logic. One might suggest that demonstration better describes admissible rules, since those start from theorems of the logic, whereas demonstrable rules would start by introducing new certainties. However, I am not willing to change a very common piece of terminology.
3. Reflecting rules: generalizing the deduction theorem

A way to conceive of the putative, but illusory, failure of the deduction theorem for modal logics is to view logicians as expecting too much from those logics. The 'too much' that is expected from a logic is that logicians want admissible rules to be encoded into the object language in the same way derived rules are. I will elaborate. If ⟨Γ, α⟩ is a derived rule of a normal, classical, propositional modal logic, then ⋀Γ ⊃ α will be a theorem of the logic, where ⊃ is the material conditional. However, if ⟨Γ, α⟩ is an admissible rule of such a logic, ⋀Γ ⊃ α may not be a theorem of the logic. This is what happens with the "failure" of the deduction theorem for ⟨□p, p⟩ in the modal logic K: although ⟨□p, p⟩ is admissible, not every substitution instance of □p ⊃ p is a theorem. What is interesting is that for classical propositional logic admissible rules can be encoded in the same way as derivable rules: classical propositional logic is structurally complete. But it is far too much to ask every logic to be like that. An obvious question to raise, though, is: what does it take to encode the various kinds of rules in a uniform manner, in a way similar to what can be done with derivable rules in classical modal logics? Derivable rules can always be encoded when the logic has a conditional and a combinator for formulas of appropriate sorts: namely, the sort of conditional (→, defined or basic) and combinator (∗, defined or basic) for which (in a structural logic), for all finite Γ,

Γ ⊢ β ⟺ ⊢ ∗(Γ) → β

is provable as a metatheorem.⁹ If a logic has a conditional for which the deduction theorem is provable, then that logic can encode its derived rules. The notion of encoding used in the deduction theorem can be generalized in two ways. First, one can talk about a logic encoding the rules of another logic.
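For the classical case this encoding of derived rules by the material conditional can be spot-checked by brute force. The expression-based formula encoding below is purely this note's illustrative device.

```python
# For sample classical cases, Gamma |= beta holds exactly when the single
# formula (g1 & ... & gn) -> beta is valid.  Formulas are Python boolean
# expressions over named atoms, evaluated under every assignment.
from itertools import product

def assignments(names):
    for bits in product([False, True], repeat=len(names)):
        yield dict(zip(names, bits))

def consequence(gamma, beta, names):
    """Gamma |= beta: beta true under every assignment satisfying Gamma."""
    return all(eval(beta, {}, v)
               for v in assignments(names)
               if all(eval(g, {}, v) for g in gamma))

def valid(phi, names):
    return all(eval(phi, {}, v) for v in assignments(names))

cases = [
    (["(not p) or q", "p"], "q"),   # modus ponens: a consequence
    (["p"], "q"),                   # not a consequence
    (["p", "q"], "p and q"),
]
for gamma, beta in cases:
    # encode the rule as a single conditional: not(conj of Gamma) or beta
    encoded = "not (" + " and ".join(f"({g})" for g in gamma) + ") or (" + beta + ")"
    assert consequence(gamma, beta, ["p", "q"]) == valid(encoded, ["p", "q"])
```

This is just the deduction-theorem biconditional Γ ⊢ β ⟺ ⊢ ⋀Γ ⊃ β observed on finite samples, with ∗ as conjunction and → as the material conditional.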
Perhaps the paradigm examples of this are the provability logics, which encode the provability relation of arithmetic, see Verbrugge [20], Visser [21]. But one can also generalize it to other kinds of rules, i.e., so that it includes admissible and demonstrable rules. This is what I call 'reflecting rules'.

Definition 5. A logic L reflects the Val-type¹⁰ rules of a logic L′ iff there is an L-formula θ(p, q) such that for any rule ⟨Γ, α⟩ in the language of L′, there are translations of Γ and α into formulas of L, Γ̂ and α̂, for which ⊢L θ(Γ̂/p, α̂/q) ⟺ ValL′(⟨Γ, α⟩).

The deduction theorem for K is the special case where L = L′, θ(p, q) = p ⊃ q, Γ̂ = ⋀Γ, and α̂ = α. The obvious question is how far this concept can be taken. There is a sufficient condition for a modal logic reflecting its own admissible rules which can be gleaned from a result of Dana Scott, see Scott [17] and Scott [16]. This result also implies that any normal modal logic that extends K4C4, i.e., K ⊕ { □p ⊃ □□p, □□p ⊃ □p }, in a certain way, will be able to reflect admissible rules. That result depends on the logic being representable in a sequent calculus with certain restrictions. Logics which are generated by adding axioms to normal modal logics satisfy those conditions. But we have an initial result that puts a limit on what kinds of logics can reflect demonstrable rules. The result, unlike Scott's, doesn't depend on the logic being generated by a certain kind of proof theory or semantics.

⁹ What that amounts to in category-theoretic speak is to say that we have an adjoint pair (∗, →). This is a promising direction for further research. I would like to thank Clayton Peterson for reminding me of this fact.
¹⁰ Val is a variable for Der, Adm or Dem.
Theorem 1 (Failure of reflection theorem). Suppose L′ is a structural, transitive logic, over a language extending that of L, which has theorems, and suppose that the structural logic L reflects the demonstrable rules of L′ via the (possibly defined) binary connective ⇝. Then every formula is a theorem of L′.

Proof. By definition L′ ⊕ p will contain every formula, since every formula is some substitution instance of p and L′ ⊕ p is structural. So ⟨p, q⟩ is a demonstrable rule of L′. By hypothesis, then, ⊢L p ⇝ q. Since L is also structural, for all θ, θ′ ∈ L, ⟨θ, θ′⟩ is a demonstrable rule for L′, because s(p) ⇝ s(q) is a theorem of L for s(p) = θ and s(q) = θ′. Let ϕ be any theorem of L′. Then for any formula ψ ∈ LL′, ⟨ϕ, ψ⟩ is demonstrable for L′. By definition of demonstrable, that means ⊢L′⊕ϕ ψ; but since ϕ is a theorem of L′ and L′ is structural and transitive, L′ ⊕ ϕ = L′. So ψ is a theorem of L′. □

Theorem 1 has two important corollaries. First, it is interesting to ask how this plays out when a logic reflects its own demonstrable rules. The result is immediate:

Corollary 1. Suppose L is a structural, transitive logic which has theorems, and reflects its own demonstrable rules via the (possibly defined) binary connective ⇝. Then every formula is a theorem of L.

What is surprising about Corollary 1 and Theorem 1 is how weak the assumptions are which imply that reflecting demonstrable rules results in every sentence being a theorem of the "reflected" logic. If those assumptions are augmented, things get worse.

Corollary 2 (Triviality of demonstrability). Suppose L′ is a structural, transitive logic which obeys M, over a language extending that of L, which has theorems, and suppose that the structural logic L reflects the demonstrable rules of L′ via the (possibly defined) binary connective ⇝. Then L′ is trivial, i.e., L′ = P(LL′) × LL′.

Proof. Since ⊢L′ ψ for each ψ ∈ LL′ by Theorem 1, for any Γ ⊆ LL′, Γ ⊢L′ ψ by M. □
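The proof of Theorem 1 runs through a short chain of steps, which can be set out schematically as follows (the notation follows the text, with the arrow standing for whichever reflecting connective is assumed):

```latex
% Schematic outline of the proof of Theorem 1.
\begin{enumerate}
  \item $\mathrm{Dem}_{L'}(\langle p, q\rangle)$, since $L'\oplus p$ is
        structural and hence proves every formula.
  \item $\vdash_{L} p \rightsquigarrow q$, by reflection applied to (1).
  \item $\vdash_{L} \theta \rightsquigarrow \theta'$ for all $\theta,\theta'$,
        by the structurality of $L$ applied to (2).
  \item $\mathrm{Dem}_{L'}(\langle\varphi,\psi\rangle)$ for any theorem
        $\varphi$ of $L'$ and any $\psi$, by reflection applied to (3).
  \item $\vdash_{L'\oplus\varphi}\psi$, and $L'\oplus\varphi = L'$ since
        $\varphi$ is already a theorem and $L'$ is structural and transitive;
        hence $\vdash_{L'}\psi$ for every $\psi$.
\end{enumerate}
```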
The only logic that can have its demonstrable rules reflected, under these circumstances, must be trivial. It is an immediate corollary of Corollaries 1 and 2 that a structural, Tarskian logic with theorems that reflects its own demonstrable rules is trivial. This result applies very widely. Although the assumptions about the logic in Theorem 1 are meager, there are a few. The theorem applies to all standard modal logics, i.e., normal modal logics that obey substitution. It even applies to modal logics which are extensions of base logics other than classical logic. Indeed, most substructural logics which are of interest have theorems and are structural (they obey uniform substitution). It is easier to specify the logics which don't satisfy the conditions than to talk about the logics that do. Among propositional logics, being structural is ubiquitous. It is relaxed from time to time, but that isn't very common. When uniform substitution is relaxed, it is because one needs to place side conditions on axiom schemata, e.g., each formula of the form □α ⊃ α is an axiom when α is an atomic sentence. Such side conditions are relevant for axiomatizing certain classes of models, see Payette [15]. For first-order logics, uniform substitution generally fails to be satisfied. As soon as identity is included, uniform substitution will fail, for example. Any first-order logic which has special predicates satisfying special axioms is no longer structural. Relaxing the transitivity constraint is not unheard of these days, but still quite rare. Even the point of proving cut elimination was to show that the cut rule was admissible, which meant that it was superfluous as a basic inference rule. If T fails, though, it means that deductions can't be strung together, and that seems like something essential to our conception of deduction. As for a logic not having theorems, that is even rarer.
A logic that doesn’t have any theorems can’t reflect any rules at all since rules are reflected by a logic’s theorems. But there are logics without theorems
which are fairly well-known: the Kleene strong three-valued logic doesn't have theorems, for example. When a logic doesn't have theorems, it may be possible (I don't know that it isn't) to reflect its demonstrable rules in another logic which does have theorems. This is akin to the object-language/metalanguage relationship, where one language is used to express truth or deducibility of another, "lower level" language. Following that interpretation, Theorem 1 says one cannot study all forms of propositional deducibility inside of one structural propositional language. There is the possibility of another option, but I am less optimistic about its chances for success. One may be able to encode demonstrable rules in a non-uniform manner; that is, one could translate the rules in many different ways. If there were a non-uniform way of reflecting demonstrable rules in a logic L, the concept would be difficult to investigate. It might be consistent in that case, but too capricious to track properly. A sufficient condition which allows logics which are not structurally complete to reflect admissible rules in a non-uniform manner is found in Ghilardi [7]. It is shown to apply to a certain class of modal logics which extend K4. The formulas that reflect the admissible rules are not necessarily composed of the constituent formulas of the rules that they are reflecting. If such a method of reflecting demonstrable rules could be found, Theorem 1 would not apply, but that is because the definition of reflection would have changed. To conclude, I raise the following question: what does this mean for the deduction theorem? Theorem 1 doesn't say much about the deduction theorem qua a metatheorem concerning reflecting derivable rules, which is how I characterize the classic deduction theorem. But it does put a limit on generalizing it (i.e., there is a limit to a logic's ability to reflect rules in general).
Indeed, most (propositional) logics that are studied can't reflect all of the kinds of rules. In particular, the logics can't reflect their own demonstrable rules. Therefore, there is a limit on generalizing the deduction theorem, because its generalization is destined to fail.

References

[1] Ruth Barcan, The deduction theorem in a functional calculus of first order based on strict implication, J. Symb. Log. 11 (4) (1946) 115–118.
[2] Jc Beall, Julien Murzi, Two flavors of Curry's paradox, J. Philos. 110 (3) (2013) 143–165.
[3] Brian F. Chellas, Krister Segerberg, Modal logics with the MacIntosh rule, J. Philos. Log. 23 (1) (1994) 67–86.
[4] Ronald Fagin, Joseph Y. Halpern, Moshe Y. Vardi, What is an inference rule?, J. Symb. Log. 57 (3) (1992) 1018–1045.
[5] Gerhard Gentzen, Untersuchungen über das logische Schliessen, Math. Z. 39 (1935) 176–210 and 405–431; English translation in Gentzen [6], pp. 68–131.
[6] Gerhard Gentzen, The Collected Papers of Gerhard Gentzen, in: M.E. Szabo (Ed.), Studies in Logic and the Foundations of Mathematics, North-Holland, 1969.
[7] Silvio Ghilardi, Best solving modal equations, Ann. Pure Appl. Logic 102 (2000) 183–198.
[8] Raul Hakli, Sara Negri, Does the deduction theorem fail for modal logic?, Synthese 187 (3) (2012) 849–867.
[9] Rosalie Iemhoff, George Metcalfe, Proof theory for admissible rules, Ann. Pure Appl. Logic 159 (2009) 171–186.
[10] Marcus Kracht, Modal consequence relations, in: Patrick Blackburn, Johan van Benthem, Frank Wolter (Eds.), Handbook of Modal Logic, Studies in Logic and Practical Reasoning, Elsevier B.V., Amsterdam, The Netherlands, 2007, pp. 491–545.
[11] Clarence Irving Lewis, A Survey of Symbolic Logic, University of California Press, Berkeley, 1918.
[12] Paul Lorenzen, Einführung in die operative Logik und Mathematik, Grundlehren der mathematischen Wissenschaften, vol. 78, Springer-Verlag, 1955.
[13] Francesco Paoli, Substructural Logics: A Primer, Trends in Logic, vol. 13, Springer, Dordrecht, 2002.
[14] William Tuthill Parry, In memoriam: C.I. Lewis, Notre Dame J. Form. Log. 11 (2) (1970) 129–140.
[15] Gillman Payette, Decidability of an Xstit logic, Stud. Log. 102 (3) (2014) 577–607.
[16] Dana Scott, Completeness and axiomatizability in many-valued logic, in: Alfred Tarski, Leon Henkin (Eds.), Proceedings of the Tarski Symposium, 2nd (1979) edition, Proceedings of Symposia in Pure Mathematics, vol. 25, American Mathematical Society, 1974, pp. 411–436.
[17] Dana Scott, Rules and derived rules, in: Sören Stenlund (Ed.), Logical Theory and Semantic Analysis, Synthese Library, vol. 63, D. Reidel Publishing Co., Dordrecht, 1974, pp. 147–161.
[18] Lionel Shapiro, Deflating logical consequence, Philos. Q. 61 (243) (2011) 320–342.
[19] A. Tarski, On some fundamental concepts of metamathematics, in: Logic, Semantics, Metamathematics: Papers from 1923 to 1938, Clarendon Press, Oxford, 1956.
[20] Rineke (L.C.) Verbrugge, Provability logic, in: Edward N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy, Winter 2010 edition, 2010.
[21] Albert Visser, An overview of interpretability logic, in: Marcus Kracht, Maarten de Rijke, Heinrich Wansing, Michael Zakharyaschev (Eds.), Advances in Modal Logic, vol. 1, CSLI Publications, Stanford, 1998, pp. 307–359.