Hypersequent calculi for intuitionistic logic with classical atoms

Hidenori Kurokawa
Department of Philosophy, The City University of New York, Graduate Center, 365 Fifth Avenue, New York, NY 10016, United States

Annals of Pure and Applied Logic 161 (2009) 427–446

Article history: Available online 22 August 2009. Communicated by Guest Editor (ESNL).
MSC: 03B05; 03B20; 03B45
Keywords: Hypersequent; Intuitionistic logic; Classical logic; Modal logic

Abstract. We discuss a propositional logic which combines classical reasoning with constructive reasoning, i.e., intuitionistic logic augmented with a class of propositional variables for which we postulate the decidability property. We call it intuitionistic logic with classical atoms. We introduce two hypersequent calculi for this logic. Our main results presented here are cut-elimination with the subformula property for the calculi. As corollaries, we show decidability, an extended form of the disjunction property, the existence of an embedding into an intuitionistic modal logic and a partial form of interpolation.

1. Introduction

The general topic of combining logics has been widely discussed in contemporary studies of non-classical logic. Among these, the combination of intuitionistic logic and classical logic can be motivated by a desire to find a sequent calculus in which it is possible to formalize constructive and classical reasoning. For instance, J.-Y. Girard described this problem in the following way.

    Could there be a way to reunify logical systems – let us say those systems with a good sequent calculus – into a single sequent calculus? Is it possible to handle the (legitimate) distinction classical/intuitionistic not through a change of systems but through a change of formulas? Is it possible to obtain classical effects by a restriction to classical formulas . . .? [15], p. 202.

In the existing literature of combining logics, the aforementioned question has been pursued in two ways.1 On the one hand, we can introduce two different connectives, namely intuitionistic implication (or negation) and classical implication (or negation), in one system. Introducing distinct forms of implication (or negation) into the language does, however, generally come at some cost. For instance, as is shown by [12,17,20], a restriction on substitution into axioms (in the case of Hilbert systems) or rules (in the case of Gentzen systems) must be imposed in order to prevent the collapse of the intuitionistic connectives into the classical ones. On the other hand, [19,24,26]2 have sought to combine constructive and

E-mail address: [email protected].
1 We confine our discussion to the cases in which only intuitionistic propositional logic and classical propositional logic are combined, although more general approaches to combine constructive and classical reasoning exist. For instance, Girard’s work itself combines intuitionistic, classical, and linear logic [15]. Also, modal operators can be used to represent constructive reasoning (say, [14]).
2 Sakharov’s approach is slightly different, but we take his works to be in this category.


classical reasoning by introducing distinct classes of classical and intuitionistic propositional variables. This is the approach which will be pursued here.3 This second approach may be motivated on the basis that the same class of mathematical formulas will be regarded as being decidable by both classical and intuitionistic mathematicians. Although such theorists may disagree about the meaning of these formulas in other ways, they will at least agree about the way in which we should logically reason about them. Hence it seems reasonable to introduce a distinct class of propositional variables to represent these formulas in a system of formal logic. We introduced such a system for combining the classical and intuitionistic propositional calculi (CPC and IPC) in [18]. This is known as IPCCA .4 The language of IPCCA (LIPCCA ) consists of the intuitionistic propositional connectives and two sets of propositional variables: intuitionistic propositional variables VarI := {p1 , . . . , pn , . . .} and classical propositional variables VarC := {X1 , . . . , Xn , . . .}. The latter satisfy an additional condition for classicality. A formula F in LIPCCA is specified as5 : F ::= pi |Xi |⊥|F1 → F2 |F1 ∧ F2 |F1 ∨ F2 . The axiomatic system is the following:

• Axioms:
  1. Axioms for IPC
  2. X ∨ ¬X (for each classical variable X)
• Rule of inference: Modus Ponens
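To make the two-sorted syntax concrete, here is a minimal Python sketch (with hypothetical class names of our own; this is an illustration of the grammar above, not code from the paper) of the formula language of IPCCA, in which intuitionistic and classical atoms are kept apart so that the classicality axiom X ∨ ¬X is generated only for the latter.

```python
from dataclasses import dataclass
from typing import Union

# Two sorts of atoms: intuitionistic variables p_i and classical variables X_i.
@dataclass(frozen=True)
class IntAtom:
    name: str            # e.g. "p1"

@dataclass(frozen=True)
class ClassAtom:
    name: str            # e.g. "X1"

@dataclass(frozen=True)
class Bot:
    pass                 # the constant ⊥

@dataclass(frozen=True)
class Binary:
    op: str              # one of "->", "and", "or"
    left: "Formula"
    right: "Formula"

Formula = Union[IntAtom, ClassAtom, Bot, Binary]

def neg(phi):
    # ¬ϕ is an abbreviation of ϕ → ⊥ (footnote 5).
    return Binary("->", phi, Bot())

def classicality_axiom(x: ClassAtom):
    # The extra axiom X ∨ ¬X, available only for classical atoms.
    return Binary("or", x, neg(x))
```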

In this paper, we introduce two hypersequent calculi for IPCCA: a multi-succedent one (mLIC) and a single-succedent one (sLIC). We use hypersequents because no sequent calculus is known that satisfies the following conditions: containing ordinary rules for intuitionistic or classical logic, admitting cut-elimination, and satisfying the full subformula property. We take these conditions to be desirable because we follow the methodology proposed by Kosta Došen [11] of treating logical constants as punctuation: i.e., differences in logical systems should be formulated in terms of the structural features of sequent calculi while holding their logical rules fixed. From this point of view, hypersequent calculi provide an appropriate framework for formulating IPCCA smoothly without introducing additional rules pertaining to logical constants. Two hypersequent calculi are required, since cut-elimination for mLIC is treated by a semantic method, while the proof-theoretic argument we use can be applied to sLIC. As corollaries to cut-elimination, we show how to construct an embedding of sLIC into two intuitionistic modal logics satisfying decidability for modalized formulas, one of which is global intuitionistic logic (GI) introduced by Titani [28], and we obtain a partial form of Craig interpolation.

2. Hypersequent Calculus mLIC

We fix some notational conventions: (1) B◦ is a formula containing only classical variables. (2) A, B, C, . . . (without any extra symbols) can be any formula. We start with a multi-succedent hypersequent calculus for intuitionistic logic with classical atoms (mLIC), in which only R→ lacks symmetry. In mLIC, sequents are sets of formulas and hypersequents are multi-sets of sequents.6

(1) Axioms:

  A ⇒ A        ⊥ ⇒

(2) External structural rules:

  G
  ─────────────────  EW
  G|H

  G|Γ ⇒ ∆|Γ ⇒ ∆|H
  ─────────────────  EC
  G|Γ ⇒ ∆|H

(3) Internal structural rules (IW):

  G|Γ ⇒ ∆
  ─────────────────  LW
  G|A, Γ ⇒ ∆

  G|Γ ⇒ ∆
  ─────────────────  RW
  G|Γ ⇒ ∆, A

(4) Logical rules:

  G|A, B, Γ ⇒ ∆
  ─────────────────  L∧
  G|A ∧ B, Γ ⇒ ∆

  G|Γ ⇒ ∆, A    G|Γ ⇒ ∆, B
  ──────────────────────────  R∧
  G|Γ ⇒ ∆, A ∧ B

  G|A, Γ ⇒ ∆    G|B, Γ ⇒ ∆
  ──────────────────────────  L∨
  G|A ∨ B, Γ ⇒ ∆

  G|Γ ⇒ ∆, A, B
  ─────────────────  R∨
  G|Γ ⇒ ∆, A ∨ B

  G|Γ ⇒ ∆, A    G|B, Γ ⇒ ∆
  ──────────────────────────  L→
  G|A → B, Γ ⇒ ∆

  G|A, Γ ⇒ B
  ─────────────────  R→
  G|Γ ⇒ A → B, ∆

(5) Classical Splitting:7

  G|Γ1, Γ2◦ ⇒ ∆1, ∆2◦|H
  ────────────────────────
  G|Γ1 ⇒ ∆1|Γ2◦ ⇒ ∆2◦|H

(6) Cut:

  G1|Γ1 ⇒ ∆1, A|H1    G2|A, Γ2 ⇒ ∆2|H2
  ────────────────────────────────────────
  G1|G2|Γ1, Γ2 ⇒ ∆1, ∆2|H1|H2

3 The problem of combining the two kinds of connectives in an optimal way is, no doubt, an interesting technical question. But there is also a philosophical dimension to this approach. An intuitionistic ‘‘proposition’’ is an object for which the principle of bivalence does not hold. However, the interpretation of classical connectives (classical implication or negation) is specified relative to a truth function in bivalent semantics. Hence, applying classical connectives to intuitionistic propositions seems to be a mismatch, to say the least. It seems more reasonable, on the other hand, to consider the result of applying intuitionistic connectives to classically conceived propositions in case these propositions are decidable. For in this case, intuitionists will themselves acknowledge that the propositions in question are either true or false (and hence obey the rules of classical logic). What intuitionists reject, of course, is that this property applies to all propositions.
4 This approach has a connection to the proof theory of basic intuitionistic logic of proofs (iBLP) [2]. iBLP has formulas of the form ‘‘x : F ’’, which means ‘‘x is a proof of F ’’, which are decidable.
5 We take ¬ϕ to be an abbreviation of ϕ → ⊥.
6 Hypersequents were introduced by Avron [3–5] to formulate cut-free sequent calculi for non-classical logics, in particular Gödel–Dummett logic.
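The side condition on Classical Splitting above, namely that the split-off component Γ2◦ ⇒ ∆2◦ consist of purely classical formulas, can be phrased as a simple predicate. The following Python sketch is our own illustration, reusing the hypothetical formula classes introduced in Section 1 and treating ⊥ as classical, which is an assumption on our part.

```python
def purely_classical(phi):
    # A formula B◦ contains only classical variables; we also admit ⊥ here (an assumption).
    if isinstance(phi, (ClassAtom, Bot)):
        return True
    if isinstance(phi, IntAtom):
        return False
    return purely_classical(phi.left) and purely_classical(phi.right)

def splitting_applicable(gamma2, delta2):
    # Classical Splitting may split off Γ2◦ ⇒ ∆2◦ only when every formula in it is purely classical.
    return all(purely_classical(f) for f in list(gamma2) + list(delta2))
```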

Remark. (X → p) ∨ (p → X) is valid w.r.t. the class of Kripke models for IPCCA (cf. [18]). However, it seems difficult to add a rule to an ordinary cut-free sequent calculus so that the above formula can be derived in it only by using its subformulas. Due to Classical Splitting, we have a cut-free proof of it in mLIC.

  X ⇒ X
  ────────────────────────────────────────────  Classical Splitting
  X ⇒ | ⇒ X
  ────────────────────────────────────────────  LW, RW
  X ⇒ p | p ⇒ X
  ────────────────────────────────────────────  R→
  ⇒ X → p, p → X | ⇒ X → p, p → X
  ────────────────────────────────────────────  R∨
  ⇒ (X → p) ∨ (p → X) | ⇒ (X → p) ∨ (p → X)
  ────────────────────────────────────────────  EC
  ⇒ (X → p) ∨ (p → X)

The deductive equivalence between IPCCA and mLIC can be stated in the following theorem via a translation of a hypersequent Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n into a formula (⋀Γ1 → ⋁∆1) ∨ · · · ∨ (⋀Γn → ⋁∆n).

Theorem 1. mLIC ⊢ Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n if and only if IPCCA ⊢ (⋀Γ1 → ⋁∆1) ∨ · · · ∨ (⋀Γn → ⋁∆n).

Proof. Induction on the length of proofs. □
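As an illustration of the translation used in Theorem 1, the following Python sketch (building on the hypothetical formula classes from Section 1; the empty conjunction and empty disjunction are rendered as ⊥ → ⊥ and ⊥ respectively, which is our own convention) turns a hypersequent, given as a list of (Γ, ∆) pairs, into the corresponding disjunction of implications.

```python
from functools import reduce

def conj(formulas):
    # ⋀Γ; the empty conjunction is taken to be ⊥ → ⊥ (an assumed convention).
    if not formulas:
        return Binary("->", Bot(), Bot())
    return reduce(lambda a, b: Binary("and", a, b), formulas)

def disj(formulas):
    # ⋁∆; the empty disjunction is taken to be ⊥ (an assumed convention).
    if not formulas:
        return Bot()
    return reduce(lambda a, b: Binary("or", a, b), formulas)

def translate(hypersequent):
    # (⋀Γ1 → ⋁∆1) ∨ ... ∨ (⋀Γn → ⋁∆n), as in Theorem 1.
    return disj([Binary("->", conj(g), disj(d)) for (g, d) in hypersequent])
```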

3. Kripke models for intuitionistic logic with classical atoms

Now we state the cut-elimination theorem. Here mLIC− means mLIC without Cut.

Theorem 2. If mLIC ⊢ Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n, then mLIC− ⊢ Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n.

We demonstrate this theorem by a semantic method. A Kripke model K for the language of IPCCA is defined as an ordered triple (K, ≤, ⊩), where K is a non-empty set called the set of states, ≤ is a partial order on the states, and ⊩ is a forcing relation. The ordered pair of the first two components (K, ≤) is called a Kripke frame. A forcing relation satisfies the condition of ‘‘monotonicity’’ for propositional variables: for any s, t ∈ K and for any propositional variable p, if s ≤ t and s ⊩ p, then t ⊩ p. Moreover, ⊩ satisfies the standard inductive clauses for the logical connectives →, ∧, ∨ and the propositional constant ⊥ of IPC. Without loss of generality, we consider only finite-tree Kripke models. In addition, we have the following condition.

The new condition for classical atoms (Stability): Let s0 be the root node of a finite Kripke tree model. For each Xi ∈ VarC, one of the following holds: sj ⊩ Xi for all sj ≥ s0, or sj ⊮ Xi for all sj ≥ s0.

Here K ⊩ ψ means that a formula ψ is valid in K, which means that ψ is forced in all states in K. A formula ψ is ‘‘valid’’ if it is valid in all Kripke models. We extend our forcing relation and the notion of validity to hypersequents.

Definition 3. 1. K, s ⊮ Γ ⇒ ∆ if ∃s′ ∈ K, s′ ≥ s, s.t. for any ϕ ∈ Γ, K, s′ ⊩ ϕ and for any ψ ∈ ∆, K, s′ ⊮ ψ. 2. K, s ⊩ Γ ⇒ ∆ otherwise.

Definition 4. K, s ⊩ Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n iff K, s ⊩ Γ1 ⇒ ∆1 or . . . or K, s ⊩ Γn ⇒ ∆n. Also, a hypersequent G = Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n is valid if for any K and any s ∈ K, K, s ⊩ Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n.

4. The completeness theorem for the multi-succedent hypersequent calculus without the cut-rule

Cut-elimination is obtained as a consequence of the following theorem.

Theorem 5. mLIC− ⊢ Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n if Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n is valid in any Kripke model of mLIC.

We need soundness for mLIC to show cut-elimination, but this is a simple consequence of the above equivalence theorem and the soundness theorem of IPCCA with respect to the same class of Kripke models as specified above [18]. We now prove the contrapositive of the completeness theorem for mLIC−.
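Definitions 3 and 4 can be read off directly as an evaluation procedure over a finite model. The sketch below (in Python, with class and method names of our own choosing; it assumes the valuation is monotone and, for classical atoms, stable, as required above) checks forcing of formulas, sequents and hypersequents.

```python
class KripkeModel:
    def __init__(self, states, order, val):
        self.states = states    # finite set of state names
        self.order = order      # set of pairs (s, t) with s ≤ t; assumed reflexive and transitive
        self.val = val          # dict: state -> set of atom names forced there (monotone, stable)

    def above(self, s):
        return [t for t in self.states if (s, t) in self.order]

    def forces(self, s, phi):
        # Standard intuitionistic forcing clauses over the two-sorted language.
        if isinstance(phi, (IntAtom, ClassAtom)):
            return phi.name in self.val[s]
        if isinstance(phi, Bot):
            return False
        if phi.op == "and":
            return self.forces(s, phi.left) and self.forces(s, phi.right)
        if phi.op == "or":
            return self.forces(s, phi.left) or self.forces(s, phi.right)
        # implication quantifies over all states above s
        return all((not self.forces(t, phi.left)) or self.forces(t, phi.right)
                   for t in self.above(s))

    def forces_sequent(self, s, gamma, delta):
        # Definition 3: s refutes Γ ⇒ ∆ iff some s' ≥ s forces all of Γ and none of ∆.
        refuted = any(all(self.forces(t, g) for g in gamma)
                      and not any(self.forces(t, d) for d in delta)
                      for t in self.above(s))
        return not refuted

    def forces_hypersequent(self, s, hs):
        # Definition 4: some component is forced at s.
        return any(self.forces_sequent(s, g, d) for (g, d) in hs)
```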

7 This Splitting would yield classical logic if we did not have any restriction on the language of formulas (see [6]). Without Classical Splitting, the hypersequent calculus would be a hypersequent calculus for IPC in LIPCCA [8].


4.1. Saturation lemma Definition 6. A saturated hypersequent G0 = Γ10 ⇒ ∆01 | . . . |Γn0 ⇒ ∆0n for mLIC− is a hypersequent satisfying the following conditions.8 1. For any component Γi0 ⇒ ∆0i of G0 , if A ∨ B ∈ Γi0 , then A ∈ Γi0 or B ∈ Γi0 . 2. For any component Γi0 ⇒ ∆0i of G0 , if A ∨ B ∈ ∆0i , then A ∈ ∆0i and B ∈ ∆0i . 3. For any component Γi0 ⇒ ∆0i of G0 , if A ∧ B ∈ Γi0 , then A ∈ Γi0 and B ∈ Γi0 . 4. For any component Γi0 ⇒ ∆0i of G0 , if A ∧ B ∈ ∆0i , then A ∈ ∆0i or B ∈ ∆0i . 5. For any component Γi0 ⇒ ∆0i of G0 , if A → B ∈ Γi0 , then A ∈ ∆0i or B ∈ Γi0 . 6. For any component Γi0 ⇒ ∆0i of G0 , if A → B ∈ ∆0i , then there exists a component Γiσ0 ⇒ ∆0iσ of G0 , s.t. A ∈ Γiσ0 and B ∈ ∆0iσ .9 Also, if conditions 1–5 are satisfied for a component, we call the component ‘‘saturated component’’.10 Definition 7. A hypersequent Γ1 ⇒ ∆1 | . . . |Γi ⇒ ∆i |Γi ∪ {A} ⇒ B| . . . |Γn ⇒ ∆n is an associated hypersequent of a hypersequent Γ1 ⇒ ∆1 | . . . |Γi ⇒ ∆i | . . . |Γn ⇒ ∆n with respect to Γi ⇒ ∆i , s.t. A → B ∈ ∆i . Also, we call this Γi ∪ {A} ⇒ B an associated component of Γi ⇒ ∆i , s.t. A → B ∈ ∆i . Lemma 8 (Saturation Lemma). For any hypersequent G = Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n , s.t. mLIC− 0 Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n , there exists a saturated hypersequent G0 = Γ10 ⇒ ∆01 | . . . |Γn0σn ⇒ ∆0nσn satisfying the following conditions. (α) For each component Γi ⇒ ∆i and its associated components Γiσ ⇒ ∆iσ ,11 (α1 ) Γiσ ⊆ Γiσ0 ; (α2 ) ∆iσ ⊆ ∆0iσ ; (α3 ) Γiσ0 ∩ ∆0iσ = ∅ and ⊥ ∈ / Γiσ0 . (β) Let SP and N be the following: P := S{Γiσ0 |(Γiσ0 ⇒ ∆0iσ ) is a component of saturated hypersequent G0 }; N := {∆0iσ |(Γiσ0 ⇒ ∆0iσ ) is a component of saturated hypersequent G0 }. Then {X ∈ VarC |X ∈ P } ∩ {X ∈ VarC |X ∈ N } = ∅. Proof. The lemma is proved by describing the saturation procedure. (1) We start from the leftmost component Γ1 ⇒ ∆1 of the original hypersequent G = Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n , and we continue to construct saturated components and associated components for ‘‘→’’ formulas on the succedent of components until we develop all the saturated components. The following rules provide us with the means of constructing a saturated component of a saturated hypersequent. For Γi ⇒ ∆i in G, proceed through the following steps. 1. If A ∨ B ∈ Γi , then put A into Γi or put B into Γi . 2. If A ∨ B ∈ ∆i then put A into ∆i and put B into ∆i . 3. If A ∧ B ∈ Γi , then put A into Γi and put B into Γi . 4. If A ∧ B ∈ ∆i , then put A into ∆i or put B into ∆i . 5. If A → B ∈ Γi , then put A into ∆i or put B into Γi . 6. If Γi ∩ ∆i 6= ∅ or ⊥ ∈ Γi , then backtrack. (2) We continue the procedure until we can no longer parse any formula in the set,12 except for implicational formulas on the succedent. If we obtain a sequent that satisfies all the conditions, then we can terminate.13 If we have Case 6, then we backtrack. For Γ1 ⇒ ∆1 , we may terminate with ‘‘success’’ or ‘‘failure’’. The former means that we terminate by constructing a saturated component without any application of 1–6. The latter means that we terminate with no possibility of backtracking and have only the cases where Γ10 ∩ ∆01 6= ∅ or ⊥ ∈ Γ10 . If Γ10 ⇒ ∆01 does not have A → B in ∆01 , then we are done with Γ10 ⇒ ∆01 . (3) If there is a saturated component Γ10 ⇒ ∆01 for Γ1 ⇒ ∆1 satisfying all the conditions listed above and that A1 → B1 , . . . , Ak → Bk ∈ ∆01 , then we construct k-many associated components for Γ1 ⇒ ∆1 . Let Γ1j ⇒ ∆1j be as Γ1j = Γ ∪ {Aj } and ∆1j = {Bj } (1 ≤ j ≤ k). Then our new hypersequent looks like Γ10 ⇒ ∆01 |Γ11 ⇒ ∆11 | . . 
. |Γ1k ⇒ ∆1k | . . . |Γn ⇒ ∆n . 0 (4) By taking steps from (1) to (3) for Γ11 ⇒ ∆11 , we get Γ11 ⇒ ∆011 . If there is any A → B ∈ ∆011 , then we have to construct an associated component Γ111 ⇒ ∆111 (possibly Γ112 ⇒ ∆112 , . . . , Γ11l ⇒ ∆11l ) for Γ11 ⇒ ∆11 . We also saturate


8 In general, we call each sequent Γ ⇒ ∆ of a hypersequent G a component of G. Here i is an index for a component in a hypersequent. In Section 3, i i means that the sequent is saturated. The saturation procedure here is based on [13]. 9 σ stands for a finite sequence of numbers. When we have an implication on the succedent, we will have a new sequent and name it by an appropriate

number. So we have some sequence of numbers. The particular construction of a new component will be given below. However, σ ’s are convenient labels. The construction does not essentially depend on them. 10 We use this terminology regardless of whether it has A → B in ∆0 or not. i

11 We identify Γ ⇒ ∆ with Γ ⇒ ∆ , and once saturated, it becomes Γ 0 ⇒ ∆0 . An index of an associated component starts with 1. Also, the notion i i i0 i0 i0 i0 of ‘‘associatedness’’ is transitive. 12 After a formula is used once, it becomes unavailable in one component.

13 The procedure for the component Γ ⇒ ∆ will terminate because we only go through the subformulas of the component, and the complexity strictly i i decreases at each step.


Γ111 ⇒ ∆111 , . . . , Γ11l ⇒ ∆11l . For any implication formula on any succedent, we unfold all associated components, and 0 0 0 we obtain Γ10 ⇒ ∆01 |Γ11 ⇒ ∆011 |Γ111 ⇒ ∆0111 | . . . |Γ1k0 ⇒ ∆01k |Γ1k1 ⇒ ∆01k1 | . . . |Γn ⇒ ∆n . (5) We take steps in (1)–(4) until we finish saturating all the associated components for Γn ⇒ ∆n . Since the number of A → B formulas on the succedents is finite, our procedure terminates in a finite number of steps.14 (6) After we systematically construct all saturated components (including associated ones), we take the union of Γiσ0 and the union of ∆0iσ (P and N) and check whether or not the condition of disjointedness of the classical variables (β) in P and N holds. If (β) is violated, then we backtrack. We have two possible cases of termination: (1) with ‘‘success’’, i.e. by constructing a saturated component without violating (α3 ) or (β); (2) with ‘‘failure’’, i.e. violating (α3 ) or (β) with no case of backtracking. If we have a successful case, then the other conditions of (α) are indeed satisfied with respect to any component Γiσ0 ⇒ ∆0iσ . To all steps except the case A → B ∈ ∆0iτ , we simply add some new formulas. Therefore, once Γiσ ⇒ ∆iσ is constructed, we only add formulas on Γiσ and ∆iσ . Hence, conditions (α1 ) and (α2 ) of the lemma are satisfied for all successful cases of components. But Case (2) is impossible. Claim 9. If a component (or any pair of components) in a candidate of a saturated hypersequent G0 violate(s) condition (α3 ) or condition (β) of the disjointedness of classical variables or both without any possibility of backtracking, then we can construct a cut-free proof of the hypersequent G. Proof. Essentially, by tracing the saturation procedure backwards (from the rightmost component to the leftmost), we first construct a cut-free derivation of the original hypersequent G from all the cases of saturation in which the condition (α3 ) or (β) is violated. Assume that we have violations of (α3 ) or (β) with implications on the succedent developed and we have no case of backtracking. We arrange all those possible alternatives of saturated hypersequents which violate condition (α3 ) or (β)15 so that we can construct a derivation tree of G whose leaves are saturated hypersequents. We first construct a tree labeled by hypersequents (at this point, not necessarily a derivation tree yet) by rules: (1) The root is the original hypersequent G; (2-1). If a saturation step is deterministic, then put the resulting G2 above G1 ; (2-2). If a saturation step is non-deterministic, then put the two alternatives G2 and G3 above the previous one G1 ; (2-3). If a saturation step is A → B ∈ ∆0iσ ,16 then write the hypersequents as follows.

Γ10 ⇒ ∆01 | . . . |Γi0 ⇒ ∆0i |Γiσ0 ⇒ ∆0iσ |Γiσ0 1 , A ⇒ B| . . . |Γn ⇒ ∆n Γ10 ⇒ ∆01 | . . . |Γiσ0 ⇒ ∆0iσ | . . . |Γn ⇒ ∆n (3) Leaf nodes are saturated hypersequents with (α3 ) or (β) violated. Subclaim 1: With some minor modifications, our finite tree of saturated hypersequents becomes a derivation of the original hypersequent G from all the alternative saturated hypersequents with violation of either (α3 ) or (β). Proof. Regarding (1) and (3): First, the construction traces saturation of G backwards, so we end with G. By assumption, all the topmost saturated hypersequents satisfy the following conditions. In case of (α3 ), all such saturated hypersequents have a component Γiσ0 , A ⇒ ∆0iσ , A or Γiσ0 , ⊥ ⇒ ∆0iσ ; in case of (β) (possibly with (α3 )), all such saturated hypersequents have at least one pair of components Γiσ0 , X ⇒ ∆0iσ and Γjτ0 ⇒ ∆0jτ , X (or a component Γiσ0 , A ⇒ ∆0iσ , A or Γiσ0 , ⊥ ⇒ ∆0iσ ). Regarding (2): (Outline) We show inductively (on the number of applications of logical rules) that the constructed tree with some modifications is the desired derivation. For (2-1), (2-2), by IH, we have a derivation up to G2 (and G3 ) from the failed cases of saturated hypersequents. We can take this as a derivation of the lower hypersequent G1 from G2 (and G3 ), where a principal formula is obtained as a result of applying the rule. Hence, we have a desired derivation of G1 from the failed saturated hypersequents. For (2-3): A → B ∈ ∆0iσ . In this case, we need a slight modification. In our saturation, we put some new sequents adjacent to the component that has → on the succedent. To accommodate this, we insert one intermediate line that has another copy of Γiσ0 ⇒ ∆0iσ to the tree. So we have the following derivation.

Γ10 ⇒ ∆01 | . . . |Γiσ0 ⇒ ∆0iσ |Γiσ0 1 , A ⇒ B| . . . |Γn ⇒ ∆n Γ10 ⇒ ∆01 | . . . |Γiσ0 ⇒ ∆0iσ |Γiσ0 ⇒ ∆0iσ | . . . |Γn ⇒ ∆n Γ10 ⇒ ∆01 | . . . |Γiσ0 ⇒ ∆0iσ | . . . |Γn ⇒ ∆n

R→ EC

Here Γiσ0 1 = Γiσ0 , and by IH, we have a derivation up to the top line from the failed saturated hypersequents. Note that the first step is just R → and the second is EC. So, we have a derivation of the bottom line.  (subclaim 1) Subclaim 2: Any hypersequent of the form Γ1 ⇒ ∆1 | . . . |A, Γi ⇒ ∆i , A| . . . |Γn ⇒ ∆n or Γ1 ⇒ ∆1 | . . . |⊥, Γi ⇒

∆i , | . . . |Γn ⇒ ∆n is provable in mLIC− .

Subclaim 3: Any hypersequent of the form Γ1 ⇒ ∆1 | . . . |X , Γi ⇒ ∆i | . . . |Γj ⇒ ∆j , X |Γn ⇒ ∆n is provable in mLIC− .

14 A repetition of the same formula may occur every time we construct a new associated sequent; however, we can stop the repetition of the same step. 15 Even in a case which violates (β), in some alternatives, there may be a violation of (α ). 3

16 If the succedent has more than one implication formula, then we have to deal with all of them by putting one new line of a hypersequent whenever we apply a case of → in ∆0iσ .


Proof (For Subclaim 2). The entire hypersequent can be taken as the result of applying LW, RW, and EW to an axiom.

(For Subclaim 3) We have a proof as follows.

  X ⇒ X
  ──────────────────────────────────────────────────────────  Classical Splitting
  X ⇒ | ⇒ X
  ──────────────────────────────────────────────────────────  several LW or RW
  X, Γi ⇒ ∆i |Γj ⇒ ∆j, X
  ──────────────────────────────────────────────────────────  several EW
  Γ1 ⇒ ∆1 | . . . |X, Γi ⇒ ∆i | . . . |Γj ⇒ ∆j, X | . . . |Γn ⇒ ∆n

The failed saturated hypersequents have the form of the hypersequents in the subclaims shown above. So, by putting proofs in Subclaims 2, 3 on top of our cut-free derivation from saturated hypersequents, we can construct a cut-free proof of the original hypersequent G based on all cases of violation of the condition (α3 ) or (β) (possibly with those of (α3 ), respectively). This completes the transformation of cases of failure into a cut-free proof in mLIC− .  (claim) By the claim, the existence of a failed case (2) would be contradictory to the assumption of unprovability of the original hypersequent. Therefore, the existence of a saturated hypersequent satisfying the conditions has been proved.  (Lemma) Let G0 be a saturated hypersequent satisfying all the conditions of the lemma. First, list the classical variables X1 , . . . , Xp , S i.e., ‘‘safe variables,’’ that are in the set P = {Γiσ |Γiσ ⇒ ∆iσ is a component of the saturated hypersequent G0 }. We consider the hypersequent whose components are of the form Γiσ+0 ⇒ ∆0iσ such that (1) Γiσ+0 = Γiσ0 ∪ {X1 , . . . Xp } and (2) ∆0iσ is as before. We call this hypersequent a ‘‘modified saturated hypersequent’’ G+ . Proposition 10. Suppose mLIC− 0 Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n (= G), and let G0 be a saturated hypersequent satisfying the conditions of Saturation Lemma with ‘‘safe variables’’ X1 , . . . , Xp . Then there is a modified saturated hypersequent G+ s.t. for each saturated component Γiσ+0 ⇒ ∆0iσ of G+ , the following are satisfied: (α) Γiσ+0 ∩ ∆0iσ = ∅ and ⊥ ∈ / Γiσ+0 . + (β) Let P Sand N be the following. • P + :=S {Γiσ+0 |(Γiσ+0 ⇒ ∆0iσ ) is a component of G+ }. • N := {∆0iσ |(Γiσ+0 ⇒ ∆0iσ ) is a component of G+ }. Then {X ∈ VarC |X ∈ P + } ∩ {X ∈ VarC |X ∈ N } = ∅. Proof. Given a G0 and X1 , . . . , Xp , add Xi ’s to the antecedent of each Γiσ0 ⇒ ∆0iσ in G0 . We check that the resulting hypersequent satisfies the conditions of G+ . The hypersequent is still saturated even after Xi ’s are added since these are all variables and inactive in the saturation procedure. The condition (α) is satisfied because, by definition, safe variables Xi ’s are the classical variables such that Xi ∈ / N. So, adding Xi to the antecedent keeps the antecedent of each saturated component in G0 disjoint with its succedent. Similarly, for condition (β), by definition, there are no Xi ’s such that Xi ∈ N.  Based on this modified saturated hypersequent G+ , we construct a Kripke countermodel for G.17 Except for classical variables, a model to be constructed from a (modified) saturated hypersequent must have almost the same structure as a Kripke model for intuitionistic logic. However, to construct a Kripke model from one saturated hypersequent, we first construct Kripke models for components, and we glue those Kripke models to construct a Kripke model for G. +0 +0 Suppose we are given a modified saturated sequent G+ = Γ10 ⇒ ∆010 |Γ11 ⇒ ∆011 | . . . |Γi0+0 ⇒ ∆0i0 | . . . |Γiσ+0 ⇒ +0 0 0 +0 0 ∆iσ | . . . |Γn0 ⇒ ∆n0 | . . . |Γnτ ⇒ ∆nτ . We construct Kripke models that falsify the original components Γi ⇒ ∆i (1 ≤ i ≤ n). Let a Kripke model Ki be the following triple (Si+ , ≤i , i ) based on G+ . 1. Si+ = {Γi0+0 ⇒ ∆0i0 , Γi1+0 ⇒ ∆0i1 , . . . , Γiσ+0 ⇒ ∆0iσ } (finite). 2. ≤i is defined as: (Γiσ+0 ⇒ ∆0iσ ) ≤i (Γiτ+0 ⇒ ∆0iτ ) iff Γiσ+0 ⊆ Γiτ+0 . S 3. i is defined as follows18 : for any p, X ∈ i≤n (Sb(Γi ) ∪ Sb(∆i )), (a) (Γiσ+0 ⇒ ∆0iσ ) i p iff p ∈ Γiσ+0 ; (b) (Γiσ+0 ⇒ ∆0iσ ) i X iff X ∈ Γiσ+0 . 
S For any p, X ∈ / i≤n (Sb(Γi ) ∪ Sb(∆i )), (Γiσ+0 ⇒ ∆0iσ ) 1i p and (Γiσ+0 ⇒ ∆0iσ ) 1i X .

By this definition, we can immediately obtain some desirable properties S of a model of our logic. First, ≥i is a partial order, since this is an inclusion. Second, for any intuitionistic variable p ∈ i≤n (Sb(Γi ) ∪ Sb(∆i )), monotonicity clearly holds. Third, by construction, the following holds for classical variables: for any (Γiσ+0 ⇒ ∆0iσ ) ≥i (Γi0+0 ⇒ ∆0i0 ), {X |X ∈ Γi0+0 } = {X |X ∈ Γiσ+0 }. Proposition 11. For any X ∈ VarC , stability holds, i.e., For any (Γiσ+0 ⇒ ∆0iσ ) ≥i (Γi0+0 ⇒ ∆0i0 ), (Γiσ+0 ⇒ ∆0iσ ) i X or for any (Γiσ+0 ⇒ ∆0iσ ) ≥i (Γi0+0 ⇒ ∆0i0 ), (Γiσ+0 ⇒ ∆0iσ ) 1i X .

17 In the following, we say ‘‘model’’ unless we emphasize that one is a countermodel. 18 Sb(Γ ) stands for the set of subformulas contained in the set of formulas Γ .


S Proof. If X ∈ / i≤n (Sb(Γi ) ∪ Sb(∆i )), then by definition, for any (Γiσ+0 ⇒ ∆0iσ ), (Γiσ+0 ⇒ ∆0iσ ) 1i X . The statement easily S S follows. If X ∈ i≤n (Sb(Γi )∪ Sb(∆i )), then suppose that for X ∈ i≤n (Sb(Γi )∪ Sb(∆i )) of G, ∃(Γiσ+0 ⇒ ∆0iσ ) ≥i (Γi0+0 ⇒ ∆0i0 ), (Γiσ+0 ⇒ ∆0iσ ) 1i X and ∃(Γiσ+0 ⇒ ∆0iσ ) ≥i (Γi0+0 ⇒ ∆0i0 ), (Γiσ+0 ⇒ ∆0iσ ) i X . For particular ρ and τ , (Γiτ+0 ⇒ ∆0iτ ) ≥i +0 0 (Γi0+0 ⇒ ∆0i0 ) and (Γiτ+0 ⇒ ∆0iτ ) 1i X , and (Γiρ+0 ⇒ ∆0iρ ) ≥i (Γi0+0 ⇒ ∆+0 i0 ) and (Γiρ ⇒ ∆iρ ) i X . So, by definition, +0 +0 +0 +0 +0 +0 X ∈ / Γiτ and X ∈ Γiρ . However, by the claim, X ∈ Γi0 iff X ∈ Γiτ and X ∈ Γi0 iff X ∈ Γiρ . Then, X ∈ Γi0+0 iff X ∈ / Γi0+0 . Contradiction.



Proposition 12. Let (Γiσ+0 ⇒ ∆0iσ ) be a component of G+ in Si+ and (Γiσ+0 ⇒ ∆0iσ ) ≥i (Γi0+0 ⇒ ∆0i0 ). For any X ∈ Sb(Γi+ ) ∪ Sb(∆i ),19 1. X ∈ Γiσ+0 H⇒ ∀(Γiτ+0 ⇒ ∆0iτ ) ≥i (Γi0+0 ⇒ ∆0i0 ), (Γiτ+0 ⇒ ∆0iτ ) i X and 2. X ∈ ∆0iσ H⇒ ∀(Γiτ+0 ⇒ ∆0iτ ) ≥i (Γi0+0 ⇒ ∆0i0 ), (Γiτ+0 ⇒ ∆0iτ ) 1i X . This is a simple consequence of the above proposition. Proposition 13. For any ψ ∈ Sb(Γi+ ) ∪ Sb(∆i ), (Γiσ+0 ⇒ ∆0iσ ) ≤i (Γiτ+0 ⇒ ∆0iτ ) and (Γiσ+0 ⇒ ∆0iσ ) i ψ H⇒ (Γiτ+0 ⇒

∆0iτ ) i ψ .

Proof. By induction on the complexity of formulas.



For purely classical formulas, we have another statement to show. Proposition 14. For any formula ψ ◦ ∈ Sb(Γi+ ) ∪ Sb(∆i ) and for any (Γiσ+0 ⇒ ∆0iσ ) ≥i (Γi00 ⇒ ∆0i0 ), 1. ψ ◦ ∈ Γiσ+0 H⇒ ∀(Γiτ+0 ⇒ ∆0iτ ) ≥i (Γi0+0 ⇒ ∆0i0 ), (Γiτ+0 ⇒ ∆0iτ ) i ψ ◦ ; 2. ψ ◦ ∈ ∆0iσ H⇒ ∀(Γiτ+0 ⇒ ∆0iτ ) ≥i (Γi0+0 ⇒ ∆0i0 ), (Γiτ+0 ⇒ ∆0iτ ) 1i ψ ◦ .

Proof. By induction of the complexity of ψ ◦ .



Lemma 15 (Semantic Lemma). Let G+ be a modified saturated hypersequent, and let (Γi0+0 ⇒ ∆0i0 ) be a component of G+ in Si+ . For any (Γiσ+0 ⇒ ∆0iσ ) ≥i (Γi0+0 ⇒ ∆0i0 ) and for any formula ϕ ∈ Sb(Γi+ ) ∪ Sb(∆i ), 1. ϕ ∈ Γiσ+0 H⇒ (Γiσ+0 ⇒ ∆0iσ ) i ϕ ; 2. ϕ ∈ ∆0iσ H⇒ (Γiσ+0 ⇒ ∆0iσ ) 1i ϕ . Proof. By induction on the complexity of formulas. Case (1.1) ϕ = p. For 1, by definition. For 2, suppose p ∈ ∆0iσ . By Condition 3 of the Saturation Lemma , p ∈ / Γiσ+0 . So, by +0 0 definition, (Γiσ ⇒ ∆iσ ) 1i p. Case (1.2) ϕ = X . This is a corollary of Proposition 12. Case (2) ϕ = A ∗ B (∗ =→, ∧, ∨). We have four subcases of combinations of a mixed formula and a classical formula (1) A = C ◦ and B = D◦ , (2) A is mixed and B is mixed, (3) A = C ◦ and B is mixed, and (4) A is mixed and B = D◦ . However, classical formulas are special cases of mixed formulas, so it suffices to prove the lemma for generic formulas. The proof is similar to the case of IPC.  4.2. Proof of completeness and cut-elimination Assuming mLIC− 0 Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n , we first construct a Kripke countermodel for a component Γi ⇒ ∆i . From a modified saturated hypersequent G+ , we have constructed Kripke models Ki (1 ≤ i ≤ n). For each of these, by construction, we have a (root) saturated component (Γi0+0 ⇒ ∆0i0 ) in Si+ , s.t. {X1 , . . . , Xp } ∪ Γi ⊆ Γi0+0 and ∆i ⊆ ∆0i0 . By the Semantic Lemma, definition i for sequent and reflexivity of ≤i , Ki , (Γi0+0 ⇒ ∆0i0 ) 1i Γi ⇒ ∆i (1 ≤ i ≤ n). Next, we show the completeness theorem of the hypersequent calculus itself. Assume mLIC− 0 Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n . V We want to show that there is a Kripke model K = (K , ≤, ) and a state sr ∈ K s.t. 1≤i≤n (K , sr 1 Γi ⇒ ∆i ). We construct the desired Kripke model by ‘‘gluing’’ constructed Kripke models. For all safe variables X1 , . . . , Xp in G+ , X1 , . . . , Xp ∈ Γi0+0 (1 ≤ i ≤ n). We already have Ki = (Si+ , ≥i , i ) s.t. Ki , (Γi0+0 ⇒ ∆0i0 ) 1i Γi ⇒ ∆i (1 ≤ i ≤ n). Then, we first take the disjoint union20 of the models Ki (1 ≤ i ≤ n) and add a new node below the roots of the models. Let sr = (Γr0 ⇒ ∆0r ), Γr0 = {X1 , . . . , Xp } and ∆0r = ∅. sr is the new root node. Let (Γiσ+0 ⇒ ∆0iσ ) be a component of G+ .

19 Γ + is the antecedent of the original component with safe variables added for a fixed G+ . i 20 U stands for the disjoint union operator.


Now let K = (U S + , ≤, ), where + 1. S = {sr } ∪ 1≤i≤n Si+ ;

2. ≤= {(sr , s) ∈ {sr } × S + |Γr0 ⊆ Γiσ+0 or Γr0 ⊆ Γr0 } ∪ 1≤i≤n (≤i ), where either s = (Γr0 ⇒ ∆0r ) or s = (Γiσ+0 ⇒ ∆0iσ ) s.t. U +0 (Γiσ ⇒ ∆0iσ ) ∈ 1≤i≤n Si+ ; U 3. = {(sr , Xl )|Xl ∈ Γr0 } ∪ 1≤i≤n ( i ). Such a model K must exist since, by construction, we never have a conflict among classical variables here. Since the partial order in the new model is the inclusion on the antecedents of the saturated sequents, monotonicity obviously holds for intuitionistic atoms. The new condition for classical variables must be a consequence of the claim: for any (Γiσ+0 ⇒ ∆0iσ ) ≥ (Γr0 ⇒ ∆0r ) in S + , {X |X ∈ Γr0 } = {X |X ∈ Γiσ+0 }. The proof is essentially the same as that given for the nodes of Ki . Also, the following are the immediate consequences of the definition. 1. sr Xl iff Xl ∈ Γr0 iff ∀s ∈ S +, s Xl ; 2. ∀s ∈ Si+, s ϕ ⇐⇒ s i ϕ for any formula ϕ .

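The gluing step just described (take the disjoint union of the models Ki and add a fresh root that forces exactly the safe classical variables) can be sketched as follows. This is our own simplified rendering in Python, reusing the KripkeModel class assumed earlier; state names are tagged with the index of their model to keep the union disjoint.

```python
def glue(models, safe_classical_atoms):
    # Disjoint union of finite models plus a new root s_r below every old state.
    # The root forces exactly the safe classical variables, so stability is preserved.
    root = "s_r"
    states = {root}
    order = {(root, root)}
    val = {root: set(safe_classical_atoms)}
    for i, m in enumerate(models):
        for s in m.states:
            tagged = (i, s)                # tag states to make the union disjoint
            states.add(tagged)
            val[tagged] = set(m.val[s])
            order.add((root, tagged))      # the new root sits below everything
        for (s, t) in m.order:
            order.add(((i, s), (i, t)))
    return KripkeModel(states, order, val)
```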

Claim 16. If mLIC− ⊬ Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n, then ⋀1≤i≤n (K, sr ⊮ Γi ⇒ ∆i).

Proof. In Ki , at (Γi0+0 ⇒ ∆0i0 ), each component is falsified, respectively. So, for each i (1 ≤ i ≤ n), at the state (Γi0+0 ⇒ ∆0i0 ) ∈ S + , the following hold: (1) (Γi0+0 ⇒ ∆0i0 ) ≥ (Γr0 ⇒ ∆0r ); (2) (Γi0+0 ⇒ ∆0i0 ) ϕ for all ϕ ∈ Γi ; 1 ψ for all ψ ∈ ∆i . (3) (Γi0+0 ⇒ ∆0i0 ) V So, by definition, 1≤i≤n (K , sr 1 Γi ⇒ ∆i ).  (claim) Hence, mLIC− 0 Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n implies 1≤i≤n (K , sr 1 Γi ⇒ ∆i ). So, if mLIC− 0 Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n , then K , sr 1 Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n . So, if Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n is valid, then mLIC− ` Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n . This establishes completeness of mLIC− . Since soundness for mLIC holds w.r.t. the same class of Kripke models, as already discussed, if there is a Kripke model K and s ∈ K , s.t. K , s 1 Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n , then mLIC0 Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n . So, if mLIC` Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n , then mLIC− ` Γ1 ⇒ ∆1 | . . . |Γn ⇒ ∆n . This gives a semantic proof of cut-elimination for mLIC. As a corollary, the subformula property holds for mLIC, since Cut is the only rule that spoils the subformula property in mLIC. Due to this corollary, IPCCA is a conservative extension of its intuitionistic and classical fragments, respectively, since for any purely intuitionistic (or classical) formula provable in mLIC, there is a proof using only subformulas of it. Also, the constructed model is finite, so we have decidability of mLIC. We have another corollary of completeness, namely, we can prove a refined version of the disjunction property for mLIC. The disjunction property itself is defined as follows. A sequent system S has the disjunction property (DP) if S `⇒ A ∨ B H⇒ S `⇒ A or S `⇒ B. To state the proposition, we need a definition of a positive and negative occurrence of a formula in a sequent. We put + or − in front of a formula to state that the given formula has a positive occurrence and a negative occurrence. ‘‘+ϕ ’’ means ϕ has a positive occurrence and ‘‘−ϕ ’’ means ϕ has a negative occurrence. The occurrences of a subformula of a given formula are determined inductively as follows. For a sequent Γ ⇒ ∆, (1) If ϕ ∈ Γ , then −ϕ : if −ϕ and ϕ = (A ∧ B), then −A and −B; if −ϕ and ϕ = (A ∨ B), then −A and −B; if −ϕ and ϕ = (A → B), then +A and −B; (2) If ϕ ∈ ∆, then +ϕ : if +ϕ and ϕ = (A ∧ B), then +A and +B; if +ϕ and ϕ = (A ∨ B), then +A and +B; if +ϕ and ϕ = (A → B), then −A and +B. We say ‘‘the Extended Disjunction Property (EDP) holds’’ when the following statement holds, since DP for IPC is a special case of this.


Proposition 17. If no classical subformula of A ∨ B has both negative and positive occurrences in A and B of mLIC `⇒ A ∨ B, then mLIC `⇒ A or mLIC `⇒ B. Proof. Proof by contradiction. Suppose that there exists A ∨ B such that not ⇒ A and not ⇒ B, but that ⇒ A ∨ B s.t. there is no classical subformula of A ∨ B whose positive and negative occurrences appear in A and B of A ∨ B. First, observe that the inductive characterization of positive and negative occurrences of a subformula of ϕ in Γ or ∆ corresponds to a step in the saturation procedure that puts a subformula of a formula in Γ and ∆. Hence, −ϕ in G iff ϕ occurs in the antecedent of some saturated component of some G0 , and +ϕ in G iff ϕ occurs in the succedent of some saturated component of some G0 . By assumption, we have a case of ⇒ A ∨ B where we do not have any occurrence of a classical subformula ϕ ◦ in A (in B) s.t. +ϕ ◦ and in B (in A) s.t. −ϕ ◦ . In particular, there is no X such that +X (−X ) in mLIC− 0⇒ A and −X (+X ) in mLIC− 0⇒ B. So we can construct saturated hypersequents G0A of mLIC− 0⇒ A and G0B of mLIC− 0⇒ B such that there is no classical variable in some succedent (antecedent) of G0A and in some antecedent (succedent) of G0B . By completeness, we can construct countermodels KA and KB based on G0A and G0B s.t. KA , rA 1⇒ A and KB , rB 1⇒ B. By construction, there is no classical variable X such that at some sA of KA , sA X (or sA 1 X ) and at some sB of KB , sB 1 X (or sB X ). By the gluing method, we can construct a countermodel K , r 1⇒ A ∨ B. By soundness, ⇒ A ∨ B is not mLIC-provable, which contradicts our assumption ⇒ A ∨ B. 
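The inductive definition of positive and negative occurrences used in Proposition 17 is straightforward to mechanize. The Python sketch below (our own illustration over the hypothetical formula classes from Section 1, restricted to atomic occurrences for simplicity) collects the signed atoms of a formula and checks the side condition of the proposition for the atomic case.

```python
def signed_atoms(phi, positive=True, acc=None):
    # Collect (sign, atom) pairs; the antecedent of an implication flips the polarity.
    if acc is None:
        acc = set()
    if isinstance(phi, (IntAtom, ClassAtom)):
        acc.add((positive, phi))
    elif isinstance(phi, Binary):
        left_positive = (not positive) if phi.op == "->" else positive
        signed_atoms(phi.left, left_positive, acc)
        signed_atoms(phi.right, positive, acc)
    return acc

def edp_side_condition(a, b):
    # True when no classical atom occurs both positively and negatively in A and B,
    # i.e. the hypothesis of Proposition 17 restricted to atomic subformulas.
    occ = signed_atoms(a) | signed_atoms(b)
    classical = {atom for (_, atom) in occ if isinstance(atom, ClassAtom)}
    return not any((True, x) in occ and (False, x) in occ for x in classical)
```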


5. Syntactical proof of cut-elimination

In this section, we introduce a single-succedent hypersequent calculus sLIC for IPCCA and demonstrate cut-elimination for sLIC by a proof-theoretical method. In such a method, we must carefully trace formulas in applications of EC. To handle EC, Avron discovered two methods: the history method [3] and simultaneous cut [4]. However, Baaz and Ciabattoni [7] have introduced a device, the Schütte–Tait method, to prove cut-elimination of a hypersequent calculus for first-order Gödel–Dummett logic. Intuitively, the difference between this method and Gentzen’s method of cut-elimination is the following: in the latter case, we eliminate the uppermost cut, whereas, in the Schütte–Tait method, we eliminate the largest cut. This can handle the problem raised by external contraction. Ciabattoni [9] applied the method to a hypersequent calculus for Global Intuitionistic Logic (HGI). It happens that GI and our logic IPCCA have a natural relationship, and Ciabattoni’s method is applicable to sLIC. To prove cut-elimination for the hypersequent calculus HGI, the Schütte–Tait method of cut-elimination is particularly important, since, unlike the case of Gödel–Dummett logic, extending an ordinary cut to a simultaneous cut has the effect of strengthening the logic into one in which (p → q) ∨ (q → p) is derivable, whereas this is not provable in intuitionistic logic [9].

5.1. Hypersequent calculus sLIC

The system sLIC is obtained from mLIC by restricting the number of formulas on the succedent of each component of a hypersequent to at most 1 (|∆i| ≤ 1) and assuming that sequents are multi-sets of formulas and hypersequents are multi-sets of sequents. We present only the rules of sLIC different from those of mLIC.

(1) Axioms:

  X ⇒ X (X: classical)        p ⇒ p (p: intuitionistic)        ⊥ ⇒

(2) External structural rules are the same as before.

(3) Internal structural rules: Due to the modification of the setting, we need LC. There is no RC.

  G|A, A, Γ ⇒ B
  ─────────────────  LC
  G|A, Γ ⇒ B

(4) Logical rules: R∧ and L∨ simply have at most one formula on the succedent. The remaining rules are as follows.

  G|B, Γ ⇒ A
  ─────────────────  L∧
  G|B ∧ C, Γ ⇒ A

  G|C, Γ ⇒ A
  ─────────────────  L∧
  G|B ∧ C, Γ ⇒ A

  G|Γ ⇒ A
  ─────────────────  R∨
  G|Γ ⇒ A ∨ B

  G|Γ ⇒ B
  ─────────────────  R∨
  G|Γ ⇒ A ∨ B

  G|Γ ⇒ A    G|B, Γ ⇒ C
  ────────────────────────  L→
  G|A → B, Γ ⇒ C

  G|A, Γ ⇒ B
  ─────────────────  R→
  G|Γ ⇒ A → B

(5) Additional Rule: Atomic Classical Splitting (ACS):

  G|Γ, X ⇒ A
  ─────────────────  ACS (1)
  G|Γ ⇒ A|X ⇒

  G|Γ, X ⇒ Y
  ─────────────────  ACS (2)
  G|Γ ⇒ |X ⇒ Y

(6) Cut:

  G|Γ ⇒ A    G|A, Γ ⇒ B
  ────────────────────────  Cut
  G|Γ ⇒ B

The deductive equivalence between IPCCA and sLIC can be stated in the following theorem via a translation of a hypersequent Γ1 ⇒ A1 | . . . |Γn ⇒ An into a formula (⋀Γ1 → A1) ∨ . . . ∨ (⋀Γn → An).

Theorem 18. sLIC ⊢ Γ1 ⇒ A1 | . . . |Γn ⇒ An if and only if IPCCA ⊢ (⋀Γ1 → A1) ∨ . . . ∨ (⋀Γn → An).

Proof. Induction on the length of proofs. □

Remark. More general forms of Classical Splitting are admissible via ACS. General Classical Splitting (GCS):

  G|Γ1, Γ2◦ ⇒ A
  ──────────────────  GCS (1)
  G|Γ1 ⇒ A|Γ2◦ ⇒

  G|Γ1, Γ2◦ ⇒ A◦
  ──────────────────  GCS (2)
  G|Γ1 ⇒ |Γ2◦ ⇒ A◦

Definition 19. The complexity |A| of a formula A is inductively defined: 1. |A| = 0 if A is atomic; 2. |A ∨ B| = |A ∧ B| = |A → B| = max(|A|, |B|) + 1. The right (left) rank of a Cut is the number of consecutive hypersequents containing the cut-formula, counting upward from the right (left) upper sequent of the Cut. A derivation d in sLIC is an upward rooted tree of hypersequents generated from subtrees by applying the inference rules.
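Definition 19’s complexity measure is just the usual structural height of a formula; a short Python rendering (over the hypothetical formula classes from Section 1, treating ⊥ as atomic, which is our reading of ‘‘atomic’’ here) makes the induction explicit.

```python
def complexity(phi):
    # |A| from Definition 19: 0 for atoms (and ⊥), max of the two sides plus one otherwise.
    if isinstance(phi, (IntAtom, ClassAtom, Bot)):
        return 0
    return max(complexity(phi.left), complexity(phi.right)) + 1
```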


Definition 20. The length |d| of d is the maximal number of inference rules (but weakenings) +1 occurring on any branch of d. Definition 21. Let di , with i < k, be the direct subdivisions of d. The cut-rank ρ(d) of d is defined by induction as: 1. ρ(d) = 0 if d is cut-free; 2. ρ(d) = maxi
G|Γ 0 ⇒ C 0 G|Γ ⇒ C

.

Case 1.1. A is principal in R. Subcase 1.1.1. Suppose A∗ ∈ Γ . In the active component, A∗ ∈ Γ 0 iff A is a side formula of the inference. In the non-active components of the premise of R, the decoration is the same as in the conclusion, i.e. for each such component {Σ ⇒ B} ∈ G, A∗ ∈ Σ iff A∗ of the corresponding component belonging to the conclusion of R. Subcase 1.1.2. Suppose C = A∗ . The decoration of the non-active components of the premise R is the same as in the conclusion. Case 1.2. A is not principal in R. In the active component, if A∗ ∈ Γ , then A∗ ∈ Γ 0 . Or, if C = A∗ , then C 0 = A∗ . In the non-active components, the decoration in the premise of R is the same as in the conclusion. Case 2. R is EW or Cut. The decoration of the components in the premise of R is the same as in the conclusion. Case 3. R is LC. Subcase 3.1. A∗ is the contracted formula, then A∗ , A∗ belongs to the premise of R. The remaining formulas in the premise of R are marked as in the conclusion. Subcase 3.2. A∗ is not the contracted formula. Similar to Case 2. Case 4. R is LW and RW. Similar to Case 1. Case 5. R is EC. Similar to Case 3. Case 6. R is Splitting. Case 6.1. Form 1.

G|Γ , X ⇒ C G|Γ ⇒ C |X ⇒

Subcase 6.1.1. Suppose A∗ ∈ Γ or A∗ = X in Γ ⇒ C |X ⇒. Then, A∗ ∈ Γ , X of the active component in the premise of R. Subcase 6.1.2. Suppose C = A∗ in Γ ⇒ C |X ⇒. Then, it is also in Γ , X ⇒ C . Case 6.2. Form 2.

G|Γ , X ⇒ Y G|Γ ⇒ |X ⇒ Y

Subcase 6.2.1. Suppose A∗ ∈ Γ or A∗ = X in Γ ⇒ |X ⇒ Y . Then, A∗ ∈ Γ , X of the active component in the premise of R. Subcase 6.2.2. Suppose Y = A∗ in Γ ⇒ |X ⇒ Y . Then, it is also in Γ , X ⇒ Y . Case 6.3. The decoration in the non-active components of the premise R is the same as in the conclusion.

Lemma 24 (Inversion).
1. If d ⊢sLIC G|Γ, A ∨ B ⇒ C, then we can find d1 ⊢sLIC G|Γ, A ⇒ C and d2 ⊢sLIC G|Γ, B ⇒ C;
2. If d ⊢sLIC G|Γ ⇒ A ∧ B, then we can find d1 ⊢sLIC G|Γ ⇒ A and d2 ⊢sLIC G|Γ ⇒ B;
3. If d ⊢sLIC G|Γ ⇒ A → B, then we can find d1 ⊢sLIC G|Γ, A ⇒ B

such that ρ(di ) ≤ ρ(d) and |di | ≤ |d|, for i = 1, 2. Proof. The proof is by induction on |d|. We show some subcases of 1 and 3. In each case, we consider the last inference R in d. For 1. A ∨ B: Case 1.1. R is a logical rule. 1.1.1. The indicated occurrence of A ∨ B is the principal formula of R. d0 G|Γ , B ⇒ C

d00 G|Γ , A ⇒ C

G|Γ , A ∨ B ⇒ C 0

00

In this case, we can take d and d as the desired derivations d1 and d2 . 1.1.2. The indicated occurrence of A ∨ B is not the principal formula of R. (We omit this case, since this is straightforward.) Case 1.2. R is an internal or external structural rule. 1.2.1. R is EC. d0 G|Γ , A ∨ B ⇒ D|Π ⇒ C |Π ⇒ C G|Γ , A ∨ B ⇒ D|Π ⇒ C Consider the following derivation of the premise of the above inference. d01

d02

G|Γ , A ⇒ D|Π ⇒ C |Π ⇒ C

G|Γ , B ⇒ D|Π ⇒ C |Π ⇒ C

G|Γ , A ∨ B ⇒ D|Π ⇒ C |Π ⇒ C Here |d01 |, |d02 | < |d| and ρ(d01 ), ρ(d02 ) ≤ ρ(d). So, by IH, d01 `sLIC G|Γ , A ⇒ D|Π ⇒ C |Π ⇒ C and d02 `sLIC G|Γ , B ⇒ D|Π ⇒ C |Π ⇒ C . By applying EC, d1 `sLIC G|Γ , A ⇒ D|Π ⇒ C and d2 `sLIC G|Γ , B ⇒ D|Π ⇒ C . |d01 |, |d02 | ≤ |d| and ρ(d1 ), ρ(d2 ) ≤ ρ(d). So, d1 and d2 satisfy the conditions.21 1.2.2. R is Classical Splitting. 1.2.2.1. The first form (1) of Classical Splitting. The last inference is as follows. d0 G|Γ , X , A ∨ B ⇒ D G|Γ , A ∨ B ⇒ D|X ⇒ Consider the following derivation of the premise above. d01

d02

G|Γ , X , A ⇒ D

G|Γ , X , B ⇒ D

G|Γ , X , A ∨ B ⇒ D Here |d01 |, |d02 | < |d| and ρ(d01 ), ρ(d02 ) ≤ ρ(d). By IH, d01 `sLIC G|Γ , X , A ⇒ D and d02 `sLIC G|Γ , X , B ⇒ D. Applying Classical Splitting, we get d1 `sLIC G|Γ , A ⇒ D|X ⇒ and d2 `sLIC G|Γ , B ⇒ D|X ⇒. Also, |d1 |, |d2 | ≤ |d| and ρ(d1 ), ρ(d2 ) ≤ ρ(d). So, d1 and d2 satisfy the conditions for the desired derivation. 1.2.2.2. The second form (2) of Classical Splitting. Similar to (1). For 2. A → B: Case 2.1. R is a logical rule. 2.1.1. The indicated occurrence of A → B is the principal formula of R. d01 G|Γ , A ⇒ B G|Γ ⇒ A → B The derivation d0 obviously satisfies the conditions for the desired derivation.

21 EC may be applied to Γ , A ∨ B ⇒ D. In this case, apply IH to d0 and d0 again and derive G|Γ , A ⇒ D and G|Γ , B ⇒ D. 1 2


2.1.2. The indicated occurrence of A → B is not the principal formula of R. For example, suppose d ends with a rule L → whose premises are d0 `sLIC G|Γ ⇒ E and d00 `sLIC G|F , Γ ⇒ A → B. d0

d00

G|Γ ⇒ E

G|F , Γ ⇒ A → B

G|Γ , E → F , ⇒ A → B By IH, we obtain d0 ` G|Γ ⇒ E and d00 `sLIC G|F , Γ , A ⇒ B, with |d0 |, |d00 | < |d| and ρ(d0 ), ρ(d00 ) ≤ ρ(d). Then we apply LW first to d0 `sLIC G|Γ ⇒ E. We get dĎ `sLIC G|Γ , A ⇒ E. After that, we apply L → to dĎ `sLIC G|Γ , A ⇒ E and d00 `sLIC G|F , Γ , A ⇒ B. d0 G|Γ ⇒ E G|Γ , A ⇒ E

LW

d00 G|F , Γ , A ⇒ B

G|Γ , E → F , A ⇒ B

L→

Since we do not count the number of applications of weakenings in |dĎ |, the resulting derivation d1 , s.t. d1 ` G|Γ , E → F , A ⇒ B satisfies the desired property. Other cases of logical rules with a principal formula different from A → B are similar. Case 2.2. R is an internal and external structural rule. Although there is a difference between one-premise inference and two-premise inference, the argument is essentially the same as case 1.2.22  Remark. To prove cut-elimination Schütte–Tait style, it suffices to make at least one of the premises of Cut invertible. We write d, H `sLIC G if d is a proof in sLIC from the assumption H. H [B/A] will indicate the hypersequent H in which we uniformly replace A by B. Lemma 25. Let d `sLIC G|Γ , A ⇒ B, where A is an atomic formula decorated in d that is not the formula of any cut in d. One can find a derivation d∗ , G|Γ ⇒ A `sLIC G|Γ ⇒ B such that ρ(d∗ ) = ρ(d). Proof. We consider the decoration of A in d starting from Γ , A∗ ⇒ B. We replace A∗ everywhere in d with Γ , and add G to all the hypersequents in d. For each hypersequent G|p ⇒ p, G|X ⇒ X or G|⊥ ⇒, we add an application of EW to recover the formula axioms of d. The resulting figure will not be a correct derivation, so we have to have a correction procedure. We classify the cases depending on whether or not we apply ACS in d. Case 1. ACS is not applied to X ∗ . With no ACS applied after introducing X ∗ , the argument is identical with that of [9]. Now, X ∗ is introduced 1.1. By internal weakening. The weakening on A∗ is replaced by stepwise weakenings of formulas B such that B ∈ Γ . 1.2. By external weakening. The weakening in the component S is replaced by a weakening on S [Γ /A∗ ]. 1.3. In an axiom. If A∗ = p∗ or A∗ = X ∗ but there is no application of ACS, as in [9], all axioms of the form p ⇒ p will be modified into G|Γ ⇒ p. Case 2. ACS is applied to at least one occurrence of X ∗ in d. We need special care in this case, since we have a counterexample for Ciabattoni’s procedure in [9] that works well for Case 1. X∗ ⇒ X X ⇒ X ∨ p1 ∗

⇒ X ∨ p1 |X ∗ ⇒

ACS

The above ACS is based on X ∗ in d, but if we replace all X ∗ ’s by Γ , then ACS cannot be applied at the indicated step. That is because Γ may not be classical anymore and X ∨ p1 is not classical either. We may have similar cases of ACS where X ∗ is introduced by IW or EW but replacing X ∗ by Γ spoils a proof. We suggest the following remedy. For the cases where X ∗ is introduced by IW and EW, we replace those introductions with the one that has an isolated component |X ∗ ⇒. If X ∗ originates in an axiom X ∗ ⇒ X , we modify d so that we apply ACS immediately after the axiom X ∗ ⇒ X and we get X ∗ ⇒ | ⇒ X . We keep the |X ∗ ⇒ isolated throughout the proof and modify the proof of G|Γ , X ∗ ⇒ B to a proof of G|Γ ⇒ B|X ∗ ⇒. When we isolate |X ∗ ⇒ in these ways, we can replace X ∗ with Γ without spoiling the derivation. Once we get G|Γ ⇒ B|X ∗ ⇒, we can derive G|Γ , X ∗ ⇒ B from that by IW and EC.23 Claim 26. Under the same assumption of the lemma, for any proof d of d ` G|Γ , X ∗ ⇒ B where ACS is applied to some X ∗ , there exists a proof d0 such that d0 ` G|Γ ⇒ B|X ∗ ⇒ and d0 satisfies the condition that X ∗ occurs only in a component X ∗ ⇒ throughout d0 except the axiom of form X ∗ ⇒ X .

22 Note that the second form of ACS is not applicable. 23 This procedure certainly modifies irrelevant parts of a proof and has many redundancies. However, to avoid the problem at hand, this is sufficient.


Proof (Proof by Induction on |d|). Given d, by using IH on a subderivation that satisfies all the properties specified above, we will construct a derivation d0 . X∗ ⇒ X Base case: If d is an axiom X ∗ ⇒ X , then d0 will be by ACS. ∗ ⇒ X |X ⇒

Inductive case: consider the last application of a rule R in d. 1. R = LW. (The case of RW is similar to 1.2.) 1.1. X ∗ is introduced by LW.

d1 G|Γ 0 , X ∗ ⇒ B G|Γ 0 , X ∗ , X ∗ ⇒ B

By IH, d01 ` G|Γ 0 ⇒ B|X ∗ ⇒. Hence, d001 ` G|Γ 0 ⇒ B|X ∗ ⇒ |X ∗ ⇒. So, d0 ` G|Γ 0 ⇒ B|X ∗ ⇒.24 1.2. X ∗ is not principal. d1 G|Γ 0 , X ∗ ⇒ B G|Γ 0 , A, X ∗ ⇒ B By IH, d1 ` G|Γ 0 ⇒ B|X ∗ ⇒. Hence, d0 ` G|Γ 0 , A ⇒ B|X ∗ ⇒. (Note that in the case where A is the only formula in Γ 0 , A ⇒ B, this is the same as introducing A ⇒ by EW.) 2. R = EC (Other structural rules, ACS and Cut are similar to EC). d1 G|Γ , X ∗ ⇒ B|Γ , X ∗ ⇒ B G|Γ , X ∗ ⇒ B By IH, we get d01 ` G|Γ ⇒ B|X ∗ ⇒ |Γ ⇒ B|X ∗ ⇒. So, d0 looks as follows. d01 G|Γ ⇒ B|X ⇒ |Γ ⇒ B|X ∗ ⇒ ∗

G|Γ ⇒ B|X ∗ ⇒ 3. R = LC. 3.1. The case in which X ∗ is not principal is straightforward. 3.2. X ∗ is principal. d1 G|Γ , X ∗ , X ∗ ⇒ B G|Γ , X ∗ ⇒ B By IH, d1 is transformed into d01 ` G|Γ 0 ⇒ B|X ∗ ⇒ |X ∗ ⇒. So d0 looks as follows. d01 G|Γ ⇒ B|X ∗ ⇒ |X ∗ ⇒ G|Γ ⇒ B|X ∗ ⇒ d1

d2

G|Γ 0 , X ∗ ⇒ A

G|C , Γ 0 , X ∗ ⇒ B

4. R = L→.

G|Γ 0 , A → C , X ∗ ⇒ B By IH, we get d01 ` G|Γ 0 ⇒ A|X ∗ ⇒ and d02 ` G|C , Γ 0 ⇒ B|X ∗ ⇒. So, we have: d01

d02

G|Γ ⇒ A|X ⇒ 0



G|C , Γ ⇒ B|X ∗ ⇒ 0

G|Γ 0 , A → C ⇒ B|X ∗ ⇒ 5. R = R →; R = L∧; R = R∧; R = L∨; R = R∨ are similar to L →.

 (claim)

We can summarize our modification of Ciabattoni’s procedure as follows: 1. A derivation d, s.t. d `sLIC G|Γ , X ∗ ⇒ B with an application of ACS is modified into d0 s.t. d0 `sLIC G|Γ ⇒ B|X ∗ ⇒ (as specified in the above claim).

24 When X ∗ is not present,

G0 |Γ 0 ⇒ B0 G0 |Γ 0 ⇒ B0 is modified into 0 0 . G0 |Γ 0 , X ∗ ⇒ B0 G |Γ ⇒ B0 |X ∗ ⇒


2. If d0 ` G|Γ ⇒ B|X ∗ ⇒, then there is a d00 s.t. d00 ` G|Γ , X ∗ ⇒ B by IW, EC. G|Γ ⇒ B|X ∗ ⇒ G|Γ , X ∗ ⇒ B|Γ , X ∗ ⇒ B G|Γ , X ⇒ B ∗

IW’s EC

3. Then we replace X ∗ by Γ and follow Ciabattoni’s procedure. The axiom X ∗ ⇒ X will turn into the premise G|Γ ⇒ X . ACS can always be applied. This can produce a derivation dĎ , G|Γ ⇒ A `sLIC G|G|Γ , Γ ⇒ C with ρ(d) = ρ(dĎ ). So, d∗ with ρ(d) = ρ(d∗ ) is obtained by applying EC and IC .  (lemma) Lemma 27 (Reduction). Let d0 `sLIC G|Γ ⇒ A and d1 `sLIC G|Γ , A ⇒ C , both with cut-rank ρ(di ) ≤ |A|. Then we can find a derivation d `sLIC G|Γ ⇒ C with ρ(d) ≤ |A|. If one derives G|Γ ⇒ C by an application of Cut, then the resulting derivation would have cut-rank |A| + 1. Proof. Note that in our system, all the axioms are atomic. Case 1. If A is atomic, then by the above lemma, there is a derivation d∗ , G|Γ ⇒ A `sLIC G|Γ ⇒ C such that ρ(d∗ ) = ρ(d1 ). The required derivation d is obtained by concatenating the derivation d0 and d∗ . Case 2. Suppose A is not atomic. (We show only → and ∨ cases.) Subcase 2.1. A = B → D: Let us consider the decoration of A in d1 starting from G|Γ , (B → D)∗ ⇒ C . (1) We first replace all instances (B → D)∗ in d1 with Γ . (2) We add G to all hypersequents in d1 . (3) For each hypersequent G|B ⇒ B, we add EW to recover B ⇒ B, from which we can recover the atomic axioms in d1 . Note that the resulting tree is no longer a derivation. We have to consider the following corrective steps. We have some different cases, depending on the cases in which the marked occurrence of B → D originated. 2.1.1. B → D is introduced as a principal formula of logical inference. We replace L → inferences of B → D by Cut rules as follows: every inference step of the following kind, G|G0 |Ψ ⇒ B

G|G0 |Ψ ⇒ C 0

G|G |Ψ , Γ ⇒ C 0 0

L→

is replaced by the following inference. d00 G|G0 |Ψ ⇒ B G|G0 |Ψ , Γ ⇒ B

IW

G|Γ , B ⇒ D G|G0 |Γ , Ψ , B ⇒ D

G|G0 |Γ , Ψ ⇒ D

IW, EW Cut

G|G0 |Ψ , D ⇒ C 0 G|G0 |Ψ , Γ , D ⇒ C 0

G|G0 |Ψ , Γ ⇒ C 0

IW Cut

where the missing premise of the first Cut rule G|G0 |Γ , Ψ , B ⇒ D is obtained by using the Inversion Lemma. 2.1.2. By IW. The internal weakening on (B → D)∗ is replaced by stepwise weakening of formulas ϕ ∈ Γ . It is easy to check that this procedure results in a derivation d0 `sLIC G|G0 |Γ , Γ ⇒ C with ρ(d0 ) ≤ |A|. Hence, the required derivation is obtained by applying EC and LC. 2.1.3. By EW. The external weakening on a component containing (B → D)∗ is replaced by weakening on H [Γ /(B → D)∗ ]. This does not change the length of the proof nor the cut-rank, so |d1 | and ρ(d1 ) remain the same. Subcase 2.2. A = B ∨ D. Let us consider the decoration of A in d0 starting from G|Ψ ⇒ B ∨ D. (1) We first replace all instances of {Ψ ⇒ B ∨ D} by {Ψ , Γ ⇒ C }. (2) We add G to all hypersequents in d0 . (3) For each hypersequent G|E ⇒ E, we add EW to recover E ⇒ E, from which we can recover the axiom of d0 . The resulting tree is no longer a derivation. We consider the corrective steps. There are some cases depending on how the marked occurrence of B ∨ D is introduced. 2.2.1. A = B ∨ D is introduced as a principal formula of a logical inference. We replace the R∨ inference of B ∨ D by Cut as follows: every inference step of the form G|G0 |Ψ ⇒ B G|G0 |Ψ , Γ ⇒ C is replaced by the following inference. G|G0 |Ψ ⇒ B G|G |Ψ , Γ ⇒ B 0

IW

G|G0 |Γ , B ⇒ C G|G0 |Ψ , Γ , B ⇒ C

G|G0 |Ψ , Γ ⇒ C

Cut


Here the premise Γ , B ⇒ C is obtained by the Inversion Lemma from d1 . 2.2.2. By IW. The internal weakening on (B ∨ D)∗ is replaced by a weakening of C . It is easy to check that this procedure results in a derivation d0 `sLIC G|G0 |Ψ ⇒ C with ρ(d0 ) ≤ |A|. The required definition is obtained by IW, EC and LC. 2.2.3. By EW. The external weakening on a component containing (B ∨ D)∗ is replaced by weakening on H [C /(B ∨ D)∗ ]. This does not change the length of the proof or the cut-rank, so |d0 | and ρ(d0 ) remain the same. (lemma) Theorem 28 (Cut-elimination). If a hypersequent H is derivable in sLIC, then H is derivable in sLIC without using the Cut rule. Proof. Let d `sLIC H and ρ(d) > 0. The proof is done by double induction on (ρ(d), nρ(d)), where nρ(d) is the number of cuts in d with cut rank ρ(d). Indeed, let us take in d an uppermost cut with cut rank ρ(d). By applying the Reduction Lemma to its premises, either ρ(d) or nρ(d) decreases.  6. Applications of cut-elimination 6.1. Embedding of sLIC into global intuitionistic logic We next show the soundness and faithfulness of an embedding ∗ from sLIC into intuitionistic modal logics called HGIp. HGIp is a hypersequent calculus for the propositional fragment of global intuitionistic logic, which was introduced by Titani [28], as a variant of a logic used in Takeuti–Titani [27].25 The hypersequent calculus HGIp we use is the propositional fragment of Ciabattoni’s HGI (in [9]) with ⊥ as a primitive expression. The language of GIp is as follows. F ::= pi |⊥|F1 → F2 |F1 ∧ F2 |F1 ∨ F2 |F1 The logical rules, standard internal/external structural rules and Cut of HGIp are the same as those of sLIC. In addition, we have the following modal rules in HGIp. (MS means Modal Splitting.) L

G|A, Γ ⇒ B

R

G|A, Γ ⇒ B

G|Γ ⇒ A G|Γ ⇒ A

MS

G|Γ1 , Γ2 ⇒ A G|Γ1 ⇒ |Γ2 ⇒ A

The language of HGIp and that of sLIC are different, since sLIC has two-sorted propositional variables. For simplicity, we expand the language of HGIp by introducing propositional variables X into the language of HGIp without postulating any rule for this propositional variables except for the axiom X ⇒ X . We call the new system formulated in the new language ‘‘HGIp+ ’’.26 We now define our mapping ? from the formulas of the language of IPCCA into those of the language of GIp+ and HGIp+ as follows: p? = p and ⊥? = ⊥; X ? = X ; (ϕ ∧ ψ)? = ϕ ? ∧ ψ ? ; (ϕ ∧ ψ)? = ϕ ? ∨ ψ ? ; and (ϕ → ψ)? = ϕ ? → ψ ? . To facilitate our proof of the embedding theorem, we use the following derived rule in HGIP+ . Proposition 29. The following rule is a derived rule of HGI (with Cut). G|Γ , ϕ ⇒ ψ G|Γ ⇒ |ϕ ⇒ ψ Proof.

To facilitate our proof of the embedding theorem, we use the following derived rule in HGIp+.

Proposition 29. The following rule is a derived rule of HGI (with Cut).

  G|Γ, □ϕ ⇒ □ψ
  ------------------
  G|Γ ⇒ |□ϕ ⇒ □ψ

Proof. From the axioms ϕ ⇒ ϕ and ψ ⇒ ψ we obtain □ϕ ⇒ □ϕ and □ψ ⇒ □ψ by L□ and R□, and then □ϕ ⇒ | ⇒ □ϕ and □ψ ⇒ | ⇒ □ψ by MS. By internal and external weakening these yield

  G|□ϕ ⇒ □ψ|Γ ⇒ □ϕ    and    G|□ϕ ⇒ □ψ|Γ, □ψ ⇒ .

From the premise G|Γ, □ϕ ⇒ □ψ, EW gives G|□ϕ ⇒ □ψ|Γ, □ϕ ⇒ □ψ. Cutting this against G|□ϕ ⇒ □ψ|Γ ⇒ □ϕ on □ϕ (followed by EC and LC) gives G|□ϕ ⇒ □ψ|Γ ⇒ □ψ, and cutting the latter against G|□ϕ ⇒ □ψ|Γ, □ψ ⇒ on □ψ (again followed by EC and LC) gives G|□ϕ ⇒ □ψ|Γ ⇒ , that is, G|Γ ⇒ |□ϕ ⇒ □ψ, as required.

Now we prove the following lemma.

25 We consider only the □-fragment of this intuitionistic modal logic, because HGI has □ only.
26 GI used in [28] is not the same as ''global intuitionistic logic'' in [27]. The original one (now called GI_TT)* was formulated as an intuitionistic S5 modal logic without □ϕ ∨ ¬□ϕ. A Gentzen-style (propositional part of a) sequent calculus for GI_TT is obtained by adding L□ and R□ to LJ. Titani also formulated a sequent calculus for her GI by adding L□, R→ and R□ below to a multi-succedent variant of LJ. Here Γ is a finite sequence of □-closed formulas. (A □-closed formula is inductively defined as follows [9]: 1. If Φ is a formula, then □Φ is a □-closed formula. 2. If Φ and Ψ are □-closed formulas, then Φ ∧ Ψ, Φ ∨ Ψ, and Φ → Ψ are □-closed formulas.)

  A, Γ ⇒ B            Γ ⇒ B            Γ, A ⇒ ∆, B              Γ ⇒ ∆, B
  ------------ L□      ---------- R□    ------------------ R→    -------------- R□
  □A, Γ ⇒ B           Γ ⇒ □B           Γ ⇒ ∆, A → B             Γ ⇒ ∆, □B

* The propositional part of GI_TT is the same system as Ono's in [25]. Ono showed a counterexample (p → ¬¬p) against cut-elimination of GI_TT. For a counterexample against cut-elimination for Titani's sequent calculus, see [9].


Lemma 30. 1. If sLIC ⊢ G|Γ1 ⇒ A1|...|Γn ⇒ An, then HGIp+ ⊢ G?|Γ1? ⇒ A1?|...|Γn? ⇒ An?.
2. If HGIp+ ⊢ G?|Γ1? ⇒ A1?|...|Γn? ⇒ An?, then sLIC ⊢ G|Γ1 ⇒ A1|...|Γn ⇒ An.

Proof. Proof of 1. By induction on the length of cut-free proofs in sLIC.
Case 1. sLIC ⊢ ϕ ⇒ ϕ, where ϕ is p or X. The first one is also provable in HGIp+, since p ⇒ p is also an axiom in HGIp+ and it is identical with p? ⇒ p?. For the second case, obviously HGIp+ ⊢ □X ⇒ □X by L□ and R□, so we can prove HGIp+ ⊢ X? ⇒ X?. (The case ⊥ ⇒ is similar to p ⇒ p.)
Case 2. We use some rule in sLIC to derive the lower hypersequent. All the rules except for Atomic Classical Splitting are also rules of HGIp+, so it is sufficient to show that the translated forms of ACS preserve provability in HGIp+. The first form of ACS in sLIC is

  G|X, Γ ⇒ A
  ----------------
  G|X ⇒ |Γ ⇒ A

By IH, we can start from HGIp+ ⊢ G?|X?, Γ? ⇒ A?, i.e., HGIp+ ⊢ G?|□X, Γ? ⇒ A?, and apply MS:

  G?|□X, Γ? ⇒ A?
  --------------------- MS
  G?|□X ⇒ |Γ? ⇒ A?

whose conclusion is exactly G?|X? ⇒ |Γ? ⇒ A?.

The second form of ACS in sLIC is

  G|Γ, X ⇒ Y
  ----------------
  G|Γ ⇒ |X ⇒ Y

By IH, HGIp+ ⊢ G?|Γ?, □X ⇒ □Y. Then we have the following application of the derived rule of Proposition 29:

  G?|Γ?, □X ⇒ □Y
  --------------------- derived rule
  G?|Γ? ⇒ |□X ⇒ □Y

whose conclusion is G?|Γ? ⇒ |X? ⇒ Y?. Since these two forms of ACS are the only rules beyond intuitionistic logic in sLIC, we have shown the soundness of the embedding ? from sLIC into HGIp+.

Proof of 2. By induction on the length of cut-free proofs in HGIp+. To prove this, we first observe that, due to the restricted nature of the embedding, the only place where □ can occur in the range of the mapping ? is in front of X. So, in the fragment of HGIp+ embedded from sLIC, we can apply MS only when we have formulas of the form □X. Given a derivation of G?|Γ1? ⇒ A1?|Γ2? ⇒ A2?, we have the following cases for each step.
1. Axioms of HGIp+ are also axioms of sLIC, and thus provable in sLIC, since axioms consist of atomic formulas or ⊥.
2. For any application of an intuitionistic rule, by IH we may assume that the premise(s) of the rule are faithful, in the sense that the provability of the image under the mapping ? in HGIp+ implies the provability of the corresponding hypersequent without ? in sLIC. Since the intuitionistic rules are common to the two systems, we can obviously derive the conclusion of the corresponding rule in sLIC. So, we focus on the modal rules.
2.1. For L□ and R□, due to the nature of our mapping ? (note that for R□ to be applied, only formulas of the form □Y may occur in Γ?), the following are the only relevant cases.

  G?|X, Γ? ⇒ A?               G?|Γ? ⇒ X
  ------------------ L□        ---------------- R□
  G?|□X, Γ? ⇒ A?              G?|Γ? ⇒ □X

The conclusions can be written as G?|X?, Γ? ⇒ A? and G?|Γ? ⇒ X?. If we strip off the modal operator □ in front of X at the conclusion of these inferences, then we will have a redundant inferential step in sLIC. By IH, the premises G|X, Γ ⇒ A and G|Γ ⇒ X are derivable in sLIC; hence the conclusions are also derivable in sLIC.
2.2. In HGIp+, the only other rule that goes beyond intuitionistic logic is Modal Splitting. Modal Splitting in HGIp+ can be applied to arbitrary modalized formulas in the antecedent of the premise. However, due to the nature of the embedding, the only modalized formulas that can occur in the premise of Modal Splitting, and hence be split, must be of the form □X. So the premise of Modal Splitting in HGIp+ looks like G?|X1?, ..., Xn?, Π? ⇒ A?, which is the same thing as G?|□X1, ..., □Xn, Π? ⇒ A?, and an application of Modal Splitting looks as follows.

  G?|□X1, ..., □Xn, Π? ⇒ A?
  --------------------------------- Modal Splitting in HGIp+
  G?|□X1, ..., □Xn ⇒ |Π? ⇒ A?

The corresponding inference in sLIC is

  G|X1, ..., Xn, Π ⇒ A
  --------------------------- several applications of Atomic Classical Splitting
  G|X1, ..., Xn ⇒ |Π ⇒ A


Since the premise is provable in sLIC by IH, by using ACS several times we can show that the conclusion is also derivable in sLIC.
These are all the inferences allowed in the fragment of HGIp+ that is obtained by the embedding ? from sLIC. So for any step in the derivation of G?|Γ1? ⇒ A1?|...|Γn? ⇒ An?, we can construct a derivation of the corresponding hypersequent without any ?. Therefore, if G?|Γ1? ⇒ A1?|...|Γn? ⇒ An? is derivable in HGIp+, then G|Γ1 ⇒ A1|...|Γn ⇒ An is derivable in sLIC. So, the embedding is faithful.
We define a mapping ′ from the formulas of the language of HGIp+ to those of the language of HGIp: for any given proof of a hypersequent in HGIp+, replace all the variables X occurring in the proof by variables q of the language of HGIp that are not used in the proof. Since the replacement does not increase the deductive power of HGIp, HGIp+ ⊢ G?|Γ1? ⇒ A1?|...|Γn? ⇒ An? iff HGIp ⊢ G?′|Γ1?′ ⇒ A1?′|...|Γn?′ ⇒ An?′. We define ∗ = ? ◦ ′.

Theorem 31. For any G|Γ1 ⇒ A1|...|Γn ⇒ An, there exists ∗ such that sLIC ⊢ G|Γ1 ⇒ A1|...|Γn ⇒ An if and only if HGIp ⊢ G∗|Γ1∗ ⇒ A1∗|...|Γn∗ ⇒ An∗.

Proof. An obvious consequence of the above lemmas and the above observation.

As another corollary of cut-elimination for sLIC, we have partial results on the Craig interpolation theorem for our systems mLIC and sLIC, i.e., an interpolation theorem for the intuitionistic fragment of sLIC, for which only the sequent part of sLIC is sufficient,27 and a version of the theorem for the classical fragment of mLIC.

Theorem 32 (Interpolation Theorem for the Classical Fragment of mLIC). If mLIC ⊢ Γ1◦ ⇒ ∆1◦|...|Γn◦ ⇒ ∆n◦, then for any partition of Γ = Γ1◦ ∪ ··· ∪ Γn◦ into Γ′; Γ″ and of ∆1◦ ∪ ··· ∪ ∆n◦ into ∆′; ∆″,28 there exists a C s.t. Var(C) ⊆ Var(Γ′ ∪ ∆′) ∩ Var(Γ″ ∪ ∆″), mLIC ⊢ Γ′ ⇒ ∆′, C and mLIC ⊢ C, Γ″ ⇒ ∆″.

Proof. By induction on the length of cut-free proofs in mLIC.
Case 1. |d| = 1. Axioms: X ⇒ X and ⊥ ⇒. Since the axioms consist of just one sequent, the proof is standard (we do not show all cases). For instance, we can take a partition such as Γ′; Γ″ = X; ∅ and ∆′; ∆″ = ∅; X; in this case, take X as C. Or take a partition of the form Γ′; Γ″ = ⊥; ∅; in this case, take ⊥ as C, etc.
Case 2. |d| > 1. Rules. We show only the crucial subcases, L→ and R→.

Case 2.1. L→.

  Γ1 ⇒ ∆1, A|···|Γn ⇒ ∆n        B, Γ1 ⇒ ∆1|···|Γn ⇒ ∆n
  ------------------------------------------------------------ L→
  Γ1, A → B ⇒ ∆1|···|Γn ⇒ ∆n

Consider partitions of Γ1 ∪ {A → B} ∪ Γ2 ∪ ··· ∪ Γn and ∆1 ∪ ··· ∪ ∆n. We have two cases: subcase 2.1.1, Γ′, A → B; Γ″ and ∆′; ∆″, and subcase 2.1.2, Γ′; Γ″, A → B and ∆′; ∆″. However, once we take a partition as above, the proof is essentially the same as in the case of the ordinary classical sequent calculus.

Case 2.2. R→.

  Γ1, A ⇒ B|···|Γn ⇒ ∆n
  ------------------------------- R→
  Γ1 ⇒ A → B, ∆1|···|Γn ⇒ ∆n

Consider partitions of Γ1 ∪ ··· ∪ Γn and ∆1 ∪ {A → B} ∪ ∆2 ∪ ··· ∪ ∆n. We have two cases.
Subcase 2.2.1. Γ′; Γ″ and ∆′, A → B; ∆″. Take the partitions Γ′, A; Γ″ and ∆′, B; ∆″ for the premise. By IH, there exists a formula C s.t. Var(C) ⊆ Var(Γ′ ∪ {A} ∪ ∆′ ∪ {B}) ∩ Var(Γ″ ∪ ∆″), mLIC ⊢ Γ′, A ⇒ ∆′, B, C and mLIC ⊢ C, Γ″ ⇒ ∆″. Due to classicality, we have the following derivation.

  A, Γ′ ⇒ B, ∆′, C
  ------------------------ Classical Splitting
  A, Γ′ ⇒ B| ⇒ ∆′, C
          ⋮  (R→ and internal weakenings)
  Γ′ ⇒ A → B, ∆′, C|Γ′ ⇒ A → B, ∆′, C
  -------------------------------------------- EC
  Γ′ ⇒ A → B, ∆′, C

Hence, mLIC ⊢ Γ′ ⇒ A → B, ∆′, C and mLIC ⊢ C, Γ″ ⇒ ∆″, and Var(C) ⊆ Var(Γ′ ∪ ∆′ ∪ {A → B}) ∩ Var(Γ″ ∪ ∆″). This C satisfies the required condition.
Subcase 2.2.2. Γ′; Γ″ and ∆′; ∆″, A → B. The proof is similar to subcase 2.2.1.
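As a simple illustration of the statement of Theorem 32 (not taken from the original text, and assuming the usual ∧ and ∨ rules for classical formulas in mLIC), consider the one-component hypersequent X ∧ Y ⇒ Y ∨ Z with the partition Γ′; Γ″ = X ∧ Y; ∅ and ∆′; ∆″ = ∅; Y ∨ Z. Then C = Y is an interpolant:

  Var(Y) = {Y} ⊆ Var({X ∧ Y}) ∩ Var({Y ∨ Z}) = {Y},
  mLIC ⊢ X ∧ Y ⇒ Y,    mLIC ⊢ Y ⇒ Y ∨ Z.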

27 The proof method used here, Maehara's method, does not work for the Craig interpolation theorem for a multi-succedent sequent calculus for IPC; that theorem is proved by a more intricate method in [22].
28 Here we use the notation '';'' for partitions that is used in Troelstra and Schwichtenberg [29].


7. Discussions

In closing, we would like to compare our approach of using hypersequents to combine IPC and CPC with other approaches. As we mentioned above, there are several methods of combining intuitionistic and classical logic in a manner similar to IPCCA. K. Nour and A. Nour [24] introduced a sequent calculus that combines minimal propositional logic (MPC), IPC, and CPC. However, their system does not enjoy cut-elimination. Laurent and Nour [19] modified the system in [24] with their own device called a ''parameter'' to obtain cut-elimination. They use an extended sequent framework Γ ⇒ ∆; Π, where ∆ can have more than one formula and Π has at most one formula. ∆ is used for expressing the classical part and Π for the intuitionistic part; this Π is called the ''stoup''. They also use a device called the ''parameter P'', which is a set of formulas not fixed in advance by the system, and formulas in P may be taken to be ''classical'' due to the following rule, called ''der''.

  Γ ⇒ ∆; A
  -------------- der  (A ∈ P)
  Γ ⇒ ∆, A;

Negri and von Plato [23] set up a single-conclusion sequent calculus for classical propositional logic. Although it does not combine the two sorts of propositional variables considered here, their system can readily be adapted to formulate a proof system for intuitionistic logic with classical atoms. Sakharov [26], in a system combining first-order intuitionistic logic with classical propositional variables, has a rule to handle classical (propositional) formulas. Both authors have the following additional rule.

  ϕ◦, Γ ⇒ ψ        ¬ϕ◦, Γ ⇒ ψ
  ---------------------------------- RLEM
  Γ ⇒ ψ

In both [23] and [26], cut-elimination is shown. However, for RLEM itself the subformula property does not hold, since RLEM may be taken to be a form of Cut. To obtain a more proof-theoretically satisfactory result, one has to go further. Negri and von Plato indeed showed that RLEM is admissible via the rule RLAEM.29

  X, Γ ⇒ ψ        ¬X, Γ ⇒ ψ
  ------------------------------- RLAEM
  Γ ⇒ ψ
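For instance (an illustration not in the original text, assuming the usual R∨ rule and the derivability of ¬X ⇒ ¬X), RLAEM already yields the excluded middle for a classical atom X:

  X ⇒ X                     ¬X ⇒ ¬X
  ---------------- R∨        ----------------- R∨
  X ⇒ X ∨ ¬X                ¬X ⇒ X ∨ ¬X
  ----------------------------------------------- RLAEM
  ⇒ X ∨ ¬X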

Strictly speaking, ¬X may not be a subformula of the lower sequent. However, using RLAEM only for classical atoms occurring as subformulas of the conclusion is sufficient, so the subformula property is only marginally violated.
Historical note: A sequent calculus with RLEM traces back to work by H.B. Curry [10]. Curry invented a very similar system when exploring a system of almost classical logic, namely the logical system obtained by adding RLEM to MPC. Note that the following derivability and non-derivability relations among the law of double negation (DN), Peirce's law (PL), and the law of the excluded middle (LEM) obtain (see also [1]):
(1) ¬¬A → A ⊢MPC ((A → B) → A) → A; ((A → B) → A) → A ⊢MPC A ∨ ¬A; but
(2) A ∨ ¬A ⊬MPC ((A → B) → A) → A; A ∨ ¬A ⊬MPC ¬¬A → A; ((A → B) → A) → A ⊬MPC ¬¬A → A.
So the logic obtained by adding LEM (or LAEM) to MPC, called LD, is slightly weaker than full classical logic, and so is the one that has RLEM or RLAEM. Curry calls the negation introduced by RLEM ''completely refutable''. Incidentally, Curry also formulates another system of classical logic that uses the Peirce Rule (and also adds the Peirce Rule to minimal logic), which he stated as follows [10].

  Γ, ϕ → ψ ⇒ ϕ
  ------------------ PR (single-succedent)
  Γ ⇒ ϕ

  Γ, ϕ → ψ ⇒ ϕ, ∆
  ---------------------- PR (multi-succedent)
  Γ ⇒ ϕ, ∆
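For illustration (not from the original text, using only the standard axioms with context and the L→ and R→ rules of minimal logic), the single-succedent form of PR derives Peirce's law:

  A → B ⇒ A → B        A, A → B ⇒ A
  ---------------------------------------- L→
  (A → B) → A, A → B ⇒ A
  ---------------------------------------- PR
  (A → B) → A ⇒ A
  ---------------------------------------- R→
  ⇒ ((A → B) → A) → A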

The logic obtained by adding this PR to MPC is also weaker than full classical logic but stronger than MPC with RLEM. It is an interesting question whether we can obtain a complete Gentzen-style sequent calculus for IPCCA by using a variant of PR. If we take a classical formula for ϕ, then it may look possible to provide such a sequent calculus for IPCCA. However, it is unclear how to handle a valid formula that can be derived only by using, for ϕ, a mixed formula that is provably equivalent to a classical formula in IPCCA, unlike the case of RLEM, where RLEM is admissible by RLAEM.
Now, what might be the advantages of our hypersequent approach as compared to the others? Laurent and Nour's approach in [19] has a connection with Girard's linear logic, and their ''stoup'' may have its own proof-theoretic significance. On the other hand, their system has a greater number of rules than ours, and the set of formulas derivable in the system depends on which parameter we take. This can be a source of flexibility, but it seems to make the treatment of mixed formulas inconvenient. To derive a formula such as (X → p) ∨ (p → X), we may need a clever choice of a parameter. This is not what the system

29 The use of RLEM here is crucial, since an atomic rule of double negation (ADN) is not enough to give a complete system for classical logic and only gives us an intermediate logic called ''stable logic'' [23]:

  Γ, ¬p ⇒ p
  -------------- ADN  (for atomic p)
  Γ ⇒ p


does automatically. Negri and von Plato's approach can handle IPCCA in a more traditional Gentzen-style sequent calculus with only a mild violation of the subformula property. So one might ask why a hypersequent formulation is preferable to this. We can say that the proof-theoretic connection between IPCCA and HGI becomes clear by using hypersequents. Also, hypersequents offer the potential for dealing with combinations of other logics, such as Gödel–Dummett logic and classical logic.
In conclusion, we point out some further flexibilities of the framework of hypersequents. First, our approach allows some variants for different fragments of the language of IPCCA. The possibility of stating RLAEM clearly rests on the expressive power of the language: RLAEM cannot even be stated in a language without negation. However, our hypersequent calculi still work, e.g., for the implicational fragment of IPCCA (without ⊥), since PL is provable in our hypersequent calculi. Combined with Curry's work, this flexibility leads one to consider the extent to which hypersequents can capture the fine-grained distinctions among the ''classical'' principles over minimal logic. If we drop the axiom ⊥ ⇒, we can prove PL and LEM in the system, but not DN. In order to differentiate between PL and LEM, we may need a variant of splitting.30

  G|Γ, X ⇒ Y              G|Γ, X ⇒ A
  ------------------       ------------------
  G|Γ ⇒ ⊥|X ⇒ Y           G|Γ ⇒ A|X ⇒ ⊥

So, the framework of hypersequents is flexible enough to express these differences.
Second, hypersequents may behave better in an extended context. We give the following observation to illustrate this point. Consider the extension of this implicational fragment by ∨ and ∧ and by propositional quantifiers for classical variables only,31 i.e., a second-order →, ∨, ∧-fragment of minimal logic with ''classical'' atoms. The specification of the language and the quantifier rules are the following:

  F ::= pi | Xi | F1 → F2 | F1 ∧ F2 | F1 ∨ F2 | ∀XF1 | ∃XF1

  G|Γ, ϕ(C) ⇒ ψ                G|Γ, ϕ(Y) ⇒ ψ
  --------------------- L∀      --------------------- L∃
  G|Γ, ∀Xϕ(X) ⇒ ψ              G|Γ, ∃Xϕ(X) ⇒ ψ

  G|Γ ⇒ ϕ(Y)                   G|Γ ⇒ ϕ(C)
  ------------------ R∀         ------------------ R∃
  G|Γ ⇒ ∀Xϕ(X)                 G|Γ ⇒ ∃Xϕ(X)

Here, C is any formula that is quantifier-free and free from intuitionistic propositional variables, and the propositional variable Y satisfies the ordinary eigenvariable condition. This system is specified syntactically, and we do not go into the details of its semantics here. Cut-elimination must be straightforward by following the quantifier steps of the proof in [9], since a similar cut-elimination procedure can be applied with the induction slightly modified. In this system, we can give a cut-free proof of ∀X(p ∨ (q ∨ X)) → (p ∨ ∀X(q ∨ X)); the key part is the following derivation of ∀X(p ∨ (q ∨ X)) ⇒ p ∨ ∀X(q ∨ X).

  q ⇒ q
  q ⇒ q ∨ X                              R∨
  q ⇒ ∀X(q ∨ X)                          R∀
  q ⇒ p ∨ ∀X(q ∨ X)                      R∨
  q ⇒ p ∨ ∀X(q ∨ X)| ⇒ q ∨ X             EW

  X ⇒ X
  X ⇒ | ⇒ X                              ACS
  X ⇒ p ∨ ∀X(q ∨ X)| ⇒ q ∨ X             weakening, R∨

  q ∨ X ⇒ p ∨ ∀X(q ∨ X)| ⇒ q ∨ X         L∨ (from the two hypersequents above)

  p ⇒ p
  p ⇒ p ∨ ∀X(q ∨ X)| ⇒ q ∨ X             R∨, EW

  p ∨ (q ∨ X) ⇒ p ∨ ∀X(q ∨ X)| ⇒ q ∨ X                               L∨
  ∀X(p ∨ (q ∨ X)) ⇒ p ∨ ∀X(q ∨ X)| ⇒ q ∨ X                           L∀
  ∀X(p ∨ (q ∨ X)) ⇒ p ∨ ∀X(q ∨ X)| ⇒ ∀X(q ∨ X)                       R∀
  ∀X(p ∨ (q ∨ X)) ⇒ p ∨ ∀X(q ∨ X)|∀X(p ∨ (q ∨ X)) ⇒ p ∨ ∀X(q ∨ X)    IW, R∨
  ∀X(p ∨ (q ∨ X)) ⇒ p ∨ ∀X(q ∨ X)                                    EC

We do not see any way of proving this theorem by using RLAEM without a cut rule, although it has a cut-free proof if we have the axiom ⊥ ⇒ in the system.32
Finally, hypersequents may turn out not to be all that deviant from intuitionistic sequent calculi after all. In the literature on proof-search for IPC, hypersequent calculi have been used. For instance, Mints [21] introduces a tableau system for IPC which can be taken as a kind of hypersequent calculus. Even more explicitly, for certain proof-search purposes, Harland [16] introduces a hypersequent calculus for intuitionistic logic.

30 To claim the underivability of PL, we need cut-elimination. To show cut-elimination, it suffices to prove statements analogous to Lemmas 24 and 25 and Claim 26, since the reduction must be similar.
31 The full case of propositional quantifiers is yet to be investigated.
32 So we cannot claim this advantage for full intuitionistic logic with ⊥ ⇒. The issue rests on whether the underlying logic to which we add the ''classical'' principle is IPC or MPC. Such a difference is sometimes discussed in the literature on classical extensions of typed λ-calculus; see, e.g., [1].


References

[1] Z. Ariola, H. Herbelin, Minimal classical logic and control operators, in: ICALP: Annual International Colloquium on Automata, Languages and Programming, 2003, pp. 871–885.
[2] S. Artemov, R. Iemhoff, The basic intuitionistic logic of proofs, Journal of Symbolic Logic 72 (2007) 439–451.
[3] A. Avron, A constructive analysis of RM, Journal of Symbolic Logic 52 (4) (1987) 939–951.
[4] A. Avron, Hypersequents, logical consequence, and intermediate logics for concurrency, Annals of Mathematics and Artificial Intelligence 4 (1991) 225–248.
[5] A. Avron, The method of hypersequents in the proof theory of propositional non-classical logics, in: W. Hodges, M. Hyland, C. Steinhorn, J. Truss (Eds.), Logic: From Foundations to Applications. Proc. Logic Colloquium, Keele, UK, 1993, Oxford University Press, New York, 1996, pp. 1–32.
[6] A. Avron, Two types of multiple-conclusion systems, Journal of the Interest Group in Pure and Applied Logics 6 (5) (1998) 695–717.
[7] M. Baaz, A. Ciabattoni, A Schütte–Tait style cut-elimination proof for first-order Gödel logic, in: TABLEAUX, 2002, pp. 24–37.
[8] M. Baaz, A. Ciabattoni, C.G. Fermüller, Hypersequent calculi for Gödel logics — A survey, Journal of Logic and Computation 13 (2003) 1–27.
[9] A. Ciabattoni, A proof-theoretical investigation of global intuitionistic (fuzzy) logic, Archive for Mathematical Logic 44 (4) (2004) 435–457.
[10] H.B. Curry, Foundations of Mathematical Logic, McGraw-Hill, New York, 1963.
[11] K. Došen, Logical constants as punctuation marks, Notre Dame Journal of Formal Logic 30 (1989) 362–381. Reprinted in a slightly amended version in: D.M. Gabbay (Ed.), What is a Logical System? Oxford University Press, Oxford, 1994, pp. 273–296.
[12] L. Fariñas del Cerro, A. Herzig, Combining classical and intuitionistic logic, or: Intuitionistic implication as a conditional, in: F. Baader, K.U. Schulz (Eds.), Frontiers of Combining Systems: Proceedings of the 1st International Workshop, Munich, Germany, Kluwer Academic Publishers, 1996, pp. 93–102.
[13] M. Fitting, Intuitionistic Logic, Model Theory and Forcing, North-Holland, 1969.
[14] R.C. Flagg, Integrating classical and intuitionistic type theory, Annals of Pure and Applied Logic 32 (1986) 27–51.
[15] J.-Y. Girard, On the unity of logic, Annals of Pure and Applied Logic 59 (1993) 201–217.
[16] J. Harland, An algebraic approach to proof search in sequent calculi, presented at the International Joint Conference on Automated Reasoning, Siena, July 2001.
[17] L. Humberstone, Interval semantics for tense logic: Some remarks, Journal of Philosophical Logic 8 (1979) 171–196.
[18] H. Kurokawa, Intuitionistic logic with classical atoms, CUNY Ph.D. Program in Computer Science Technical Report TR-2004003, 2004.
[19] O. Laurent, K. Nour, Parametric mixed sequent calculus, Technical report, Université de Savoie, 2005, Prépublication du Laboratoire de Mathématiques num 05-05a.
[20] P. Lucio, Structured sequent calculi for combining intuitionistic and classical first-order logic, in: H. Kirchner, Ch. Ringeissen (Eds.), Proceedings of the Third International Workshop on Frontiers of Combining Systems, FroCoS 2000, in: LNAI, vol. 1794, Springer, 2000, pp. 88–104.
[21] G. Mints, A Short Introduction to Intuitionistic Logic, Kluwer Academic/Plenum, 2000.
[22] G. Mints, Interpolation theorems for intuitionistic predicate logic, Annals of Pure and Applied Logic 113 (1–3) (2001) 225–242.
[23] S. Negri, J. von Plato, Structural Proof Theory, Cambridge University Press, Cambridge, UK, 2001.
[24] K. Nour, A. Nour, Propositional mixed logic: Its syntax and semantics, Journal of Applied Non-Classical Logics 13 (2003) 377–390.
[25] H. Ono, On some intuitionistic modal logics, Publ. Res. Inst. Math. Sci., Kyoto 13 (1977) 687–722.
[26] A. Sakharov, Median Logic, Technical report, St. Petersburg Mathematical Society, 2004. http://www.mathsoc.spb.ru/preprint/2004/index.html.
[27] G. Takeuti, S. Titani, Globalizations of intuitionistic set theory, Annals of Pure and Applied Logic 33 (1987) 195–211.
[28] S. Titani, Completeness of global intuitionistic set theory, Journal of Symbolic Logic 62 (2) (1997) 506–528.
[29] A. Troelstra, H. Schwichtenberg, Basic Proof Theory, second edition, Cambridge University Press, 2000.