A logic of plausible justifications

Theoretical Computer Science 603 (2015) 132–145
L. Menasché Schechter ✩
Department of Computer Science, Federal University of Rio de Janeiro, Brazil

Article history: Received 31 January 2013; Accepted 13 June 2014; Available online 14 July 2015.
Keywords: Epistemic logic; Justifications; Plausibility models; Axiomatic system; Multi-agent systems

Abstract. In this work, we combine features from Justification Logics and Logics of Plausibility-Based Beliefs to build a logic for Multi-Agent Systems where each agent can explicitly state his justification for believing in a given sentence. Our logic is a normal modal logic based on the standard Kripke semantics, where we provide a semantic definition for the evidence terms and define the notion of plausible evidence for an agent, based on plausibility relations in the model. As we deal with beliefs, justifications can be faulty and unreliable. In our logic, agents can disagree not only over whether a sentence is true or false, but also over whether some evidence is a valid justification for a sentence. After defining our logic and its semantics, we provide a strongly complete axiomatic system for it, show that it has the finite model property, analyze the complexity of its Model-Checking Problem and show that its Satisfiability Problem has the same complexity as that of basic modal logics. Thus, this logic seems to be a good first step towards the development of a dynamic logic that can model the processes of argumentation and debate in Multi-Agent Systems.
© 2015 Elsevier B.V. All rights reserved.

1. Introduction and motivation

Epistemic logics [2] are a particular kind of modal logic [3] where the modalities are used to describe epistemic notions such as knowledge and belief of agents. Traditional epistemic logics are expressive enough to describe the knowledge and beliefs of multiple agents in a multi-agent system, including higher-order notions, such as the knowledge of one agent about the knowledge of another, and some notions of knowledge and belief that are related to groups of agents, such as “everybody in a group knows…” or “it is common knowledge in a group…”.

Nevertheless, such epistemic logics have two important limitations. The first is that the knowledge or belief of an agent is static, i.e., it does not change over time. One of the reasons for this is that, in such logics, it is not possible to describe communication between the agents. The second limitation is that the knowledge modeled by such logics is implicit, which means that, if the agent knows something, then he knows it for some reason that remains unspecified.

In order to deal with the first limitation, Dynamic Epistemic Logics [4] were developed. In these logics, we can describe acts of communication between the agents. Such acts consist of truthful announcements that are made by one of the agents (or an external observer) to the other agents (or a sub-group of them). In works such as [5–8], this framework of dynamic logics was extended so that not only knowledge, but also beliefs (which, unlike knowledge, may turn out to be actually false) could evolve over time. The semantics of such logics of dynamic beliefs is based on Plausibility Models, where each agent has a plausibility order for the possible states of the model and he



A preliminary version of this work was published in the proceedings of WoLLIC 2012 [1]. E-mail address: [email protected].

http://dx.doi.org/10.1016/j.tcs.2015.07.018 0304-3975/© 2015 Elsevier B.V. All rights reserved.


believes in those sentences that are true in the most plausible states according to his plausibility order. The change of beliefs is then modeled as changes in the plausibility orders of the agents.

In order to deal with the second limitation, Justification Logics [9–11] were developed. In these logics, instead of formulas simply stating “Agent i knows ϕ”, we have formulas that state “t is agent i's justification (or evidence) for knowing ϕ”. Thus, these are logics of explicit knowledge, where every piece of knowledge that an agent has is accompanied by an explicit justification for it. This is why Justification Logics are also called Logics of Explicit Knowledge or Logics of Evidence-Based Knowledge.

In the processes of argumentation and debate, be it an internal debate or a public debate where each agent tries to convince an external observer of his particular point of view, it is unrealistic to assume that all of the announcements are truthful. The realistic assumption is that the announcements are merely sincere, i.e., each agent believes in what he announces. However, in order to convince others, an agent should not only state what he believes, but also why he believes it. So, the appropriate logic to model these processes would be a dynamic logic of evidence-based beliefs. The approach that we propose in order to build such a logic is to define its semantics on Plausibility Models while enriching its language with explicit evidence terms, a feature that is inspired by the language of Justification Logics. The work [12] presents another approach to building a dynamic logic of evidence-based beliefs, but the two approaches present significant differences, which we discuss in more detail at the end of this section. Keeping in mind that we want to model beliefs, it is important to notice that, in the logic we propose, justifications can be faulty and unreliable.
Using Plausibility Models, we give a notion of plausible evidence, or plausible justification, for an agent. So, if an agent has plausible evidence for a sentence, then he will believe in that sentence, but, as the evidence can possibly be faulty, this belief may turn out to be false.

In this work, we take a first step towards building such a logic for the description of the processes of argumentation and debate. We build a normal modal logic (for the definition of normal modal logics, [3] can be consulted) where we can describe the plausibility of evidence for all the different agents, give a sound and strongly complete axiomatic system for this logic, show that it has the finite model property and analyze the complexities of its Model-Checking and Satisfiability problems. As our next step, we plan to build a dynamic logic of explicit beliefs, adding to the present logic the actions that would model the communications between agents during the processes of argumentation and debate. This is not a trivial task. The standard announcements that describe changes of knowledge [4], sometimes called hard announcements, are too strong for our needs, since they are required to be truthful and not merely sincere (using such announcements without respecting the requirement that the announced formula be true can generate logical inconsistencies). On the other hand, the standard announcements that describe changes of beliefs, called belief upgrades or soft announcements [13], are too weak, since, even though they are only required to be sincere, they still make the agents receiving the announcement start believing in it, regardless of their current beliefs. In our desired framework, the announcement of a sentence should be accompanied by a justification as to why the agent performing the announcement believes in it.
Then, each agent receiving the announcement should judge for himself whether or not to start believing the announced sentence, based on his current beliefs both about what was announced and about the justification that was given.

There are, in the literature, works that combine features from Justification Logics and Dynamic Epistemic Logics. Yavorskaya [14] developed the first proposal of a Justification Logic with communication between the agents. However, these communication actions were extremely simple. Later, the series of works [15–17] developed logics that add to Justification Logics a series of communication actions, some rather complex. However, those actions are all from the family of hard announcements, so they cannot be used for our purpose. Our combination of explicit evidence terms, inspired by the language of Justification Logics, and Plausibility Models, using these evidence terms to model explicit beliefs of the agents in such models, seems to be a novel approach. As was mentioned above, [12] also developed a logic of evidence-based beliefs, but, unlike our logic, it has no explicit evidence terms in the language and some of its modalities are non-normal. Besides that, also unlike our logic, no complete proof system for that logic is presented.

The rest of this paper is organized as follows. In Section 2, we introduce the necessary concepts that are used as building blocks for our logic: Justification Logic and Plausibility Models. The language and semantics of our logic, called Logic of Plausible Justifications (LPJ), are presented in Section 3, together with a sound and strongly complete axiomatic system for it. We also show that our logic has the finite model property and that the complexities of its Model-Checking and Satisfiability problems are the same as in the case of basic modal logics.
In this section, we also present an extension of LPJ with a form of quantification over evidence terms, called LPJ_Q, similar to what [18] did in the context of traditional Justification Logics. Finally, in Section 4, we state our final remarks and point out potential further developments, including the one which originally motivated this work: the construction of a dynamic logic that can model argumentation and debate in multi-agent systems.

2. Background concepts

This section presents two important concepts for the construction of our logic: Justification Logic and Plausibility Models.

2.1. Justification Logic

In this section, we provide a brief account of Justification Logic. For more details, [9–11,19] can be consulted.


Definition 2.1. The language of basic Justification Logic consists of a countable set Φ of proposition symbols, a countable set C of evidence constants, a countable set X of evidence variables, all pairwise disjoint, and the boolean connectives ¬ and ∧. The formulas ϕ and the evidence terms t of the language are defined as follows:

ϕ ::= p | ⊤ | ¬ϕ | ϕ1 ∧ ϕ2 | t : ϕ, with t ::= c | x | t1 · t2 | t1 + t2 | !t,

where p ∈ Φ, c ∈ C and x ∈ X. We denote the set of all evidence terms of the language by T and the set of all formulas by F. In this logic and in every other logic described in this paper, we use the standard abbreviations ⊥ ≡ ¬⊤, ϕ ∨ φ ≡ ¬(¬ϕ ∧ ¬φ), ϕ → φ ≡ ¬(ϕ ∧ ¬φ) and ϕ ↔ φ ≡ (ϕ → φ) ∧ (φ → ϕ).

The initial semantics presented for the basic Justification Logic was not a Kripke-style modal semantics. In a later work, Fitting provided a modal semantics for the logic [19]. This is the semantics that we present below.

Definition 2.2. A frame for Justification Logic is a tuple F = (W, R), where W is a non-empty set of states and R ⊆ W × W is a binary relation that is reflexive and transitive.

Definition 2.3. A Fitting Model for Justification Logic is a tuple M = (F, V, E), where F is a frame, V is a valuation function V : Φ → 2^W and E is an evidence function E : W × T → 2^F satisfying the following conditions:

• If (ϕ → ψ) ∈ E(w, s) and ϕ ∈ E(w, t), then ψ ∈ E(w, s · t).
• E(w, s) ∪ E(w, t) ⊆ E(w, s + t).
• If ϕ ∈ E(w, t), then t : ϕ ∈ E(w, !t).
• If wRw′, then E(w, t) ⊆ E(w′, t).
• If ϕ ∈ E(w, c) and c ∈ C, then ϕ must be valid, as defined below.

Definition 2.4. Let M = (F, V, E) be a Fitting model. The notion of satisfaction of a formula ϕ in a model M at a state w, notation M, w ⊨ ϕ, can be inductively defined as follows:

• M, w ⊨ p iff w ∈ V(p).
• M, w ⊨ ⊤ always.
• M, w ⊨ ¬ϕ iff M, w ⊭ ϕ.
• M, w ⊨ ϕ1 ∧ ϕ2 iff M, w ⊨ ϕ1 and M, w ⊨ ϕ2.
• M, w ⊨ t : ϕ iff ϕ ∈ E(w, t) and, for all w′ such that wRw′, M, w′ ⊨ ϕ.
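The satisfaction clauses above, in particular the clause for t : ϕ, can be illustrated with a small executable sketch. Everything below (the tuple encoding of formulas, the dictionary-based model, the names) is an illustrative choice of ours, not notation from the paper; the closure conditions of Definition 2.3 on E are vacuously satisfied here, since only a single evidence variable x is mentioned.

```python
# A minimal sketch of Definition 2.4 in code (illustrative encoding).

def sat(model, w, phi):
    """Check M, w |= phi. Formulas: ('top',), ('p', name), ('not', f),
    ('and', f, g) and ('just', t, f), the last one meaning t : f."""
    kind = phi[0]
    if kind == 'top':
        return True
    if kind == 'p':
        return w in model['V'][phi[1]]
    if kind == 'not':
        return not sat(model, w, phi[1])
    if kind == 'and':
        return sat(model, w, phi[1]) and sat(model, w, phi[2])
    if kind == 'just':
        t, f = phi[1], phi[2]
        # M, w |= t : f iff f in E(w, t) and f holds at every R-successor of w
        admissible = f in model['E'].get((w, t), set())
        successors = {v for (u, v) in model['R'] if u == w}
        return admissible and all(sat(model, v, f) for v in successors)
    raise ValueError(kind)

p, q, x = ('p', 'p'), ('p', 'q'), ('x',)
M = {'W': {'w', 'v'},
     'R': {('w', 'w'), ('v', 'v'), ('w', 'v')},  # reflexive and transitive
     'V': {'p': {'w', 'v'}, 'q': {'w', 'v'}},
     'E': {('w', x): {p}, ('v', x): {p}}}        # monotone along R

print(sat(M, 'w', ('just', x, p)))  # True: p in E(w, x) and p holds at w and v
print(sat(M, 'w', ('just', x, q)))  # False: q is true everywhere, but q not in E(w, x)
```

Note how the second check fails purely on the evidence function: truth at all successors is not enough without admissibility, which is exactly the "explicit" ingredient of Justification Logic.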

If M, w ⊨ ϕ for every state w, we say that ϕ is globally satisfied in the model M, notation M ⊨ ϕ. If ϕ is globally satisfied in all models M of a frame F, we say that ϕ is valid in F, notation F ⊨ ϕ. Finally, if ϕ is valid in all frames, we say that ϕ is valid, notation ⊨ ϕ.

It is worth noting that, since the last item of Definition 2.3 uses the notion of validity, which depends on Definition 2.4, Definitions 2.3 and 2.4 are, in fact, described by mutual recursion.

From the semantical definition above, we can think of · as a form of evidence-controlled Modus Ponens, + as a form of evidence combination, ! as a constructor of evidence for formulas that already contain evidence terms and evidence constants as atomic evidence for formulas that do not need further justification (since they are valid).

2.2. Plausibility Models

In this section, we present Plausibility Models for the single-agent case. The multi-agent case is covered in the presentation of our logic in the next section. Plausibility Models in the present form were introduced in [5,6] and [7,8].

Definition 2.5. A Plausibility Frame is a tuple F = (W, ≥), where W is a non-empty set of states and ≥ ⊆ W × W is a relation that satisfies reflexivity, transitivity and totality¹ (for all v, w ∈ W, v ≥ w or w ≥ v or both), thus being a total pre-order. ≥ is called a plausibility order.

When we think about the relation ≥ as an epistemic relation, we consider that, if v ≥ w, then the agent does not know for sure in which of the states v or w he actually is, but he considers that the state w is equally or more plausible than the state v. Specifically, if v ≥ w and w ≥ v, the agent considers both states v and w to be equally plausible and, if v ≥ w but w ≱ v, the agent considers w more plausible than v. The choice of the most plausible states as the minimal states

¹ Also referred to as connectivity.


according to the pre-order ≥, which may seem counter-intuitive, comes from the fact that, if we add the hypothesis that ≥ is well-founded, then we guarantee that the set of most plausible states is always well-defined. As ≥ is transitive and connected, the agent considers that it is possible for him to be in any state of the model, but he considers some more plausible than others.

Definition 2.6. A Plausibility Model is a tuple M = (F, V), where F is a Plausibility Frame and V is a valuation function V : Φ → 2^W, mapping proposition symbols to sets of states.

Let us now discuss the kinds of belief that can be described in a Plausibility Model. One thing that all sorts of beliefs have in common, and what differentiates them from knowledge, is that there is always the possibility that a belief is false. Nevertheless, beliefs are consistent, which means that an agent cannot simultaneously believe in φ and ¬φ.

For our discussion of beliefs, let us consider that the model M is finite or that the relation ≥ is well-founded. We can define the set Best(M) = {w ∈ W : v ≥ w, for all v ∈ W}. Then, we can define the weakest notion of belief (denoted by B) as M, w ⊨ B ϕ iff M, v ⊨ ϕ, for all v ∈ Best(M). Thus, an agent believes in ϕ if the formula is satisfied in the most plausible states, according to his plausibility order. The modality B is a normal modality (⊨ B(ϕ → ψ) → (B ϕ → B ψ)). However, B is not the modality directly associated with the relation ≥. Let us then define M, w ⊨ □ϕ iff M, v ⊨ ϕ, for all v such that w ≥ v. The notion described by □ is called safe belief. An agent has a safe belief in a formula if it is satisfied in all states that are equally or more plausible than the current one. Thus, safe belief implies belief. Safe belief is also normal. However, the “safety” of a belief can only be inferred by an external observer, because, for an agent to know that one of his beliefs is safe, he would need to know in which state he currently is.
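On finite models, these belief notions are directly computable. The sketch below is a minimal illustration of Best(M), plain belief B and safe belief □; the encoding of states and of the pre-order as Python sets of pairs is an assumption of ours, not the paper's notation.

```python
# Single-agent belief notions over a finite Plausibility Model (illustrative sketch).

def best(W, geq):
    """Best(M): the states w with v >= w for every v, i.e. the most plausible ones."""
    return {w for w in W if all((v, w) in geq for v in W)}

def believes(W, geq, phi_ext):
    """M |= B phi (state-independent): phi holds at every state of Best(M)."""
    return best(W, geq) <= phi_ext

def safely_believes(w, geq, phi_ext):
    """M, w |= box phi: phi holds at every v with w >= v."""
    return all(v in phi_ext for (u, v) in geq if u == w)

# Total pre-order w3 >= w2 >= w1, so w1 is the single most plausible state.
W = {'w1', 'w2', 'w3'}
geq = {(u, v) for u in W for v in W if u >= v}  # string comparison matches the order here
phi = {'w1', 'w2'}                              # the extension of phi

print(best(W, geq))                     # {'w1'}
print(believes(W, geq, phi))            # True
print(safely_believes('w2', geq, phi))  # True: phi holds at w1 and w2
print(safely_believes('w3', geq, phi))  # False: w3 >= w3 but phi fails at w3
```

The last two lines illustrate the remark above: belief in ϕ holds throughout the model, while the safety of that belief depends on the state the agent actually occupies.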
Finally, we say that an agent has a strong belief in a formula ϕ if ϕ is satisfied in at least one state and all the states in which ϕ is satisfied are more plausible than all the states in which ϕ is not satisfied. Strong belief also implies belief, but strong belief is not normal.

A very good overview of the concepts of logics defined over Plausibility Models, including thorough presentations of the semantics and of axiomatic systems for those logics, can be found in the course material [20]. This material covers in detail both the single-agent case, briefly presented above, and the multi-agent case, on which our logic in the following section is based.

3. Logic of plausible justifications

In this section, we present our logic, discuss some of its features and present a sound and strongly complete axiomatic system for it.

3.1. Language and semantics

We start by defining the language for the formulas of our logic.

Definition 3.1. In order to define the language of LPJ, we need to take a finite set A = {1, . . . , n} of agents, a countable set Φ of proposition symbols and countable sets Xi, i ∈ A, of evidence variables. We assume that each pair of such sets is disjoint. The formulas ϕ and the evidence terms t of the language are defined as follows:

ϕ ::= p | ⊤ | ¬ϕ | ϕ1 ∧ ϕ2 | Ki ϕ | □i ϕ | [t?]ϕ | t ⊳i ϕ | Pi t | t :i ϕ, with t ::= p | g | xij | t̄ | t1 + t2,

where p ∈ Φ, i ∈ A, j ∈ N, xij ∈ Xi and g is a symbol not occurring in A, Φ or any of the sets Xi. We denote the set of all evidence terms of the language by T and the set of all formulas by F.

In the rest of this paper, it is sometimes convenient to use the duals of some of our modalities: K̂i ϕ ≡ ¬Ki ¬ϕ, ◇i ϕ ≡ ¬□i ¬ϕ and ⟨t?⟩ϕ ≡ ¬[t?]¬ϕ. We do not use the duals of ⊳i, Pi and :i, but they can be defined analogously.

The first thing that we notice is that there are some differences between our evidence terms and the ones in Justification Logic. Our language does not have the operators · and !, it does not have a set C of evidence constants, having instead a single evidence constant g, it has evidence terms of the form t̄ and proposition symbols are also evidence terms. These differences come from the different semantics that the evidence terms have in the two logics. Thus, we will discuss these differences after we present the semantics of our logic.

Definition 3.2. A frame for LPJ is a tuple F = (W, {∼i}i∈A, {≥i}i∈A) where

• W is a non-empty set of states. • ∼i ⊆ W × W is an equivalence relation. • ≥i ⊆ W × W is a relation that satisfies reflexivity and transitivity.


• For each i ∈ A, the relations ∼i and ≥i satisfy the following property: ∼i = ≥i ∪ (≥i)⁻¹, where (≥i)⁻¹ = {(v, w) ∈ W × W : (w, v) ∈ ≥i}.
• The relation ≈ = ⋃i∈A ∼i satisfies the following property: for every pair (v, w) ∈ W × W, v ≈ w. We call this property weak connectivity.

In an LPJ frame, the relations ≥i are the plausibility relations for each of the agents. They are pre-orders, just as in the single-agent setting of the previous section. Also as in the previous section, if v ≥i w, then agent i does not know for sure in which of the states v or w he actually is, but he considers state w equally or more plausible than state v. The relations ∼i are the indistinguishability relations between states for each of the agents. This is why they are defined as ∼i = ≥i ∪ (≥i)⁻¹. In our present multi-agent scenario, the relations ≥i, considered individually, are no longer total. However, the property of weak connectivity implies that every pair of states in the frame is indistinguishable to at least one of the agents.

Definition 3.3. A model for LPJ is a tuple M = (F, V, E), where F is a frame, V is a valuation function V : Φ → 2^W, mapping proposition symbols to sets of states, and E is an evidence function E : T → 2^W, mapping evidence terms to sets of states, that satisfies the following rules:

• E(p) = V(p), for all p ∈ Φ.
• E(g) = W.
• E(t̄) = W \ E(t).
• E(t1 + t2) = E(t1) ∩ E(t2).

We can see, in this definition, how the semantics for our evidence terms is built. We follow an approach similar to the one presented in [12] and use what seems to be the simplest semantics for evidence: a piece of evidence is a subset of the states of the model. We say that an agent considers the evidence term t as admissible evidence for a formula ϕ if ϕ is satisfied in all of the states inside E(t) that the agent considers possible for him to be in (that is, the states inside E(t) that the agent does not consider as possible states for him are not taken into consideration when he evaluates the admissibility of an evidence term t).

As the proposition symbols denote subsets of states (using the function V), we also use them as evidence terms. The presence of variables, with one set for each agent, allows the agents to label particular subsets of their interest. There may be some subsets of states that are not denoted by any evidence term t built solely from the set Φ of proposition symbols with the use of the two operators defined for evidence terms, so the presence of variables allows the agents to label any subset of their choice. As an example, in a dynamic scenario where agents communicate, an agent may want to label the set of his initial “best” (most plausible) states before the communication actions begin. Going further into this scenario, and depending on how we eventually choose to model the communications of the agents in the future, the presence of variables even allows us to consider “private” evidence, accessible only to a particular agent until it is made “public” by an act of communication. In short, we can think of the evidence denoted by proposition symbols as the “common ground” for all of the agents, while the variables in the sets Xi denote the “personal views” of each agent.

The evidence term g can be considered as the “weakest” evidence, as it contains all the states in the model. It has a function similar to the evidence constants in Justification Logic, as it serves as evidence for formulas that do not need further justification. Looking at the semantics below, we can see that g can be used by any agent as evidence for what he knows. Finally, the operator + denotes evidence aggregation and evidence terms of the form t̄ denote evidence complementation.

We can see now, given the semantics above for the evidence terms in our logic, that evidence in Justification Logic and in LPJ are intrinsically different objects (sets of formulas in Justification Logic and sets of states in LPJ) with quite different meanings. That is the reason why the sets of operators used to build the evidence terms are different in the two logics.

Definition 3.4. Let M = (F, V, E) be a model. The notion of satisfaction of a formula ϕ in a model M at a state w, notation M, w ⊨ ϕ, can be inductively defined as follows:

• M, w ⊨ p iff w ∈ V(p).
• M, w ⊨ ⊤ always.
• M, w ⊨ ¬ϕ iff M, w ⊭ ϕ.
• M, w ⊨ ϕ1 ∧ ϕ2 iff M, w ⊨ ϕ1 and M, w ⊨ ϕ2.
• M, w ⊨ Ki ϕ iff for all v ∈ W such that w ∼i v, M, v ⊨ ϕ.
• M, w ⊨ □i ϕ iff for all v ∈ W such that w ≥i v, M, v ⊨ ϕ.
• M, w ⊨ [t?]ϕ iff, if w ∈ E(t), then M, w ⊨ ϕ.
• M, w ⊨ t ⊳i ϕ iff for all v ∈ W such that w ∼i v and v ∈ E(t), M, v ⊨ ϕ.


• M, w ⊨ Pi t iff:
  1. there is v ∈ W such that w ∼i v and v ∈ E(t) and
  2. for all x, y ∈ W such that w ∼i x and x ≥i y, if x ∈ E(t), then y ∈ E(t).
• M, w ⊨ t :i ϕ iff M, w ⊨ t ⊳i ϕ and M, w ⊨ Pi t.

The notions of global satisfaction, validity in a frame and validity are defined as in the previous section. We say that a formula ϕ is satisfiable if there is a model M and a state w in M such that M, w ⊨ ϕ. A formula is not satisfiable iff its negation is valid. A (possibly infinite) set Γ of formulas is satisfiable if there is a single model M and a single state w in M such that M, w ⊨ ϕ, for all ϕ ∈ Γ.

Our modalities Ki, □i, [t?], ⊳i and :i are all normal. The modalities Ki and □i denote the usual notions of knowledge (satisfaction in all states indistinguishable from the current one) and safe belief (satisfaction in all states equally or more plausible than the current one), respectively. Our modalities [t?] are inspired by PDL [21] test modalities. They allow us to verify whether a state belongs to an evidence t. The relation associated to the modality [t?] is Rt = {(w, w) : w ∈ E(t)}.

The modalities ⊳i are inspired by Renne's [15–17] modality of admissibility. The semantics of a formula t : ϕ in Justification Logic is composed of two parts (see Section 2.1): the first one related to the evidence function and the second to the relations in the frame. Renne calls the first part admissibility and uses a dedicated modality to describe it. In our semantics, an agent i considers an evidence term t admissible for a formula ϕ (t ⊳i ϕ) if, in all the states inside the evidence t that the agent considers possible for him to be in, the formula ϕ is satisfied.

The modalities Pi are used to denote that agent i considers an evidence term to be plausible. The notion of plausibility of an evidence term has some similarities to the notion of strong belief.
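The two conditions of the Pi clause translate directly into set manipulations. The sketch below is an illustration of ours (names, encodings and the tiny example frame are assumptions, not the paper's):

```python
# The plausibility clause of Definition 3.4: P_i t holds at w when some state the
# agent considers possible lies inside E(t) and no plausibility arrow between
# states the agent considers possible leaves E(t).

def plausible(w, sim_i, geq_i, Et):
    """M, w |= P_i t, with sim_i and geq_i given as sets of pairs."""
    possible = {v for (u, v) in sim_i if u == w}  # states agent i cannot rule out
    if not (possible & Et):
        return False                              # condition 1 fails
    return all(y in Et                            # condition 2: no arrow leaves E(t)
               for x in (possible & Et)
               for (u, y) in geq_i if u == x)

def plausibly_justifies(w, sim_i, geq_i, Et, phi_ext):
    """M, w |= t :_i phi: t is admissible for phi and plausible."""
    possible = {v for (u, v) in sim_i if u == w}
    admissible = (possible & Et) <= phi_ext       # the admissibility clause for t |>_i phi
    return admissible and plausible(w, sim_i, geq_i, Et)

# Two comparable states a, b with b >= a (a more plausible); c is unrelated.
geq = {('a', 'a'), ('b', 'b'), ('c', 'c'), ('b', 'a')}
sim = geq | {(v, u) for (u, v) in geq}            # sim = geq union its inverse

print(plausible('a', sim, geq, {'a'}))            # True: E(t) is the best state only
print(plausible('a', sim, geq, {'b'}))            # False: the arrow from b to a leaves E(t)
print(plausibly_justifies('a', sim, geq, {'a'}, {'a', 'b'}))  # True
```

Evidence concentrated on the most plausible states is plausible; evidence containing a less plausible state whose arrow exits the evidence set is not, mirroring the strong-belief flavour of the definition.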
An evidence term is considered plausible if there is a state inside the evidence that the agent considers possible and, among the states that the agent considers possible, all of the states inside the evidence are more plausible than all of the states outside the evidence. Finally, the modality :i is used to denote that an agent considers an evidence term as plausible evidence, or a plausible justification, for a formula. An evidence term is plausible evidence for a formula if the agent considers it to be plausible and considers it to be admissible for the formula.

It is important to notice that, with this choice of semantics for plausible evidence, we can relate the modality :i with standard belief (the modality Bi).

Proposition 3.5. Suppose that the set Besti(M) is well-defined (this happens if M is finite or if the relation ≥i is well-founded). If a formula Pi t is satisfied in M = (F, V, E), then Besti(M) ⊆ E(t).

Proof. Let us start by noticing that, since Besti(M) is well-defined, it is not empty. Besides that, the agent considers all the states in Besti(M) as possible states for him. Suppose that Besti(M) ⊄ E(t). One possibility is that there is no state in E(t) that the agent considers possible for him to be in. In that case, t is not plausible, by Definition 3.4, which means that Pi t is not satisfied in M. Let us now consider the case where there is a state v ∈ E(t) that the agent considers possible. As Besti(M) ⊄ E(t), there is a state w such that w ∈ Besti(M) and w ∉ E(t). Since w ∈ Besti(M), then v ≥i w. But then we have two states that the agent considers possible, one outside E(t) (w), the other inside E(t) (v), where the one outside is equally or more plausible than the one inside. By Definition 3.4, this means that t is not plausible and Pi t is not satisfied in M. □

Proposition 3.6. Suppose that the set Besti(M) is well-defined. If M, w ⊨ t :i ϕ, then M, w ⊨ Bi ϕ. In other words, the formula t :i ϕ → Bi ϕ is valid.

Proof. Suppose that M, w ⊨ t :i ϕ. Then, by Definition 3.4, M, w ⊨ Pi t. By the proposition above, this means that Besti(M) ⊆ E(t). Besides this inclusion, by definition, Besti(M) is also contained inside the set of states that the agent considers possible. Now, as M, w ⊨ t :i ϕ, we also have that M, w ⊨ t ⊳i ϕ, which means that ϕ is satisfied in all of the states of M that the agent considers possible and that are in E(t). As Besti(M) is contained in this set, we have that ϕ is satisfied in all of the states in Besti(M), which means that M, w ⊨ Bi ϕ. □

At this point, we are also able to justify the claim made above that the constant g can be used by an agent as evidence for what he knows.

Proposition 3.7. The formulas Pi g and (g ⊳i ϕ) ↔ (Ki ϕ) are valid.

Proof. The validity of Pi g is straightforward from Definition 3.4, based on the fact that E(g) = W in every model M, where W is the set of states of the model. Now, suppose that M, w ⊨ g ⊳i ϕ. From Definition 3.4, this happens if and only if ϕ is satisfied in all of the states of M that the agent i considers possible and that are in E(g). However, all states of the model are always in E(g), as E(g) = W. Thus, M, w ⊨ g ⊳i ϕ if and only if ϕ is satisfied in all of the states that the agent i considers possible. Looking


Fig. 1. Example of a Plausibility Model with evidence terms. (The figure, not reproduced here, shows six states with the plausibility arrows of agents 1–4, the states satisfying ϕ, the subset denoted by the evidence term t and the subset labeled by the variable x21.)

again at Definition 3.4, the satisfaction of ϕ in all of the states that the agent i considers possible is precisely the definition of knowledge of the formula ϕ. Hence, M, w ⊨ g ⊳i ϕ if and only if M, w ⊨ Ki ϕ, which means that (g ⊳i ϕ) ↔ (Ki ϕ) is valid. □

Corollary 3.8. The formula g :i ϕ ↔ Ki ϕ is valid.

Proof. (⇒) If M, w ⊨ g :i ϕ, then M, w ⊨ g ⊳i ϕ. Then, the proposition above allows us to conclude that M, w ⊨ Ki ϕ. (⇐) If M, w ⊨ Ki ϕ, then, by the proposition above, M, w ⊨ g ⊳i ϕ. The proposition also tells us that M, w ⊨ Pi g. Now, if M, w ⊨ g ⊳i ϕ and M, w ⊨ Pi g, then M, w ⊨ g :i ϕ. □

Now, we illustrate the semantics presented above with a small example.

Example 3.9. We consider the plausibility model represented in Fig. 1, where we describe the plausibility orders for the agents in the set A = {1, 2, 3, 4}, leaving implicit the reflexive loops. We name the three states in the top row of the model, from left to right, as w1, w2 and w3 and the three states in the bottom row, also from left to right, as w4, w5 and w6. In the representation of the model in Fig. 1, the states where ϕ is satisfied are labeled, so ϕ is satisfied in the states w1, w2, w3 and w5. We also label the subset denoted by the evidence term t, so E(t) = {w2, w3, w5, w6}. Finally, the actual state where the agents are is w5, which is labeled with a double circle.

We denote by Wi the set of states that agent i considers possible for him to be in. These sets are important in the evaluation of admissibility and plausibility of evidence terms. We have the following sets:

1. W1 = {w2, w3, w5}.
2. W2 = {w1, w2, w4, w5}.
3. W3 = {w3, w5, w6}.
4. W4 = {w4, w5, w6}.

Let us now determine, for each agent i ∈ A, whether he considers t to be a plausible justification for ϕ in this model. In other words, let us evaluate whether M, w5 ⊨ t :i ϕ, for each i ∈ A.

We start by analyzing whether t is plausible for each of the agents (whether M, w5 ⊨ Pi t). First, since w5 ∈ E(t), there is a world inside E(t) that all of the agents consider possible. For the second property of plausible evidence, we need to consider each set Wi and verify whether there is an arrow between states in Wi that starts inside E(t) and ends outside it. If there is, then t is not plausible for agent i. For agents 1 and 3, we have W1 ⊂ E(t) and W3 ⊂ E(t), so M, w5 ⊨ P1 t and M, w5 ⊨ P3 t. On the other hand, for agent 2, there are several arrows starting inside E(t) and ending outside (the arrow from w2 to w1, for instance). The same happens with agent 4, who has an arrow from w5 to w4. Thus, M, w5 ⊨ ¬P2 t and M, w5 ⊨ ¬P4 t.

We proceed to analyze whether t is admissible evidence for ϕ for each of the agents (whether M, w5 ⊨ t ⊳i ϕ). For this, we need to verify whether ϕ is satisfied in all of the states in E(t) ∩ Wi, for each agent i ∈ A. We have that ϕ is satisfied in all of the states in E(t) ∩ W1 = {w2, w3, w5} and in all of the states in E(t) ∩ W2 = {w2, w5}, so M, w5 ⊨ t ⊳1 ϕ and M, w5 ⊨ t ⊳2 ϕ. On the other hand, ϕ is not satisfied in w6, which is in E(t) ∩ W3 and in E(t) ∩ W4, so M, w5 ⊨ ¬(t ⊳3 ϕ) and M, w5 ⊨ ¬(t ⊳4 ϕ).

Hence, we can conclude that M, w5 ⊨ t :1 ϕ, but M, w5 ⊨ ¬(t :2 ϕ) (t is admissible for ϕ, but it is not plausible), M, w5 ⊨ ¬(t :3 ϕ) (t is plausible, but it is not admissible for ϕ) and M, w5 ⊨ ¬(t :4 ϕ) (t is neither plausible nor admissible for ϕ). We can see here the different scenarios. At a first level, an agent may consider t as a plausible justification for ϕ (as agent 1 in this example) or not (as agents 2, 3 and 4).
In a second level, there are multiple reasons why an agent does not consider t as plausible justification: he may consider that t is not plausible (as agent 2) or that t is not admissible for ϕ (as agent 3) or both at the same time (as agent 4).
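The checks carried out above can be reproduced mechanically. The sketch below encodes the data of this example in Python: E(t), the valuation of ϕ and the sets Wi are read off from the text, while the per-agent arrow sets contain only the arrows mentioned in the example (not the full plausibility orders), so all names here are an illustrative encoding, not the paper's notation.

```python
# Data of the example at w5: E(t), the states satisfying phi, the sets W_i of
# worlds each agent considers possible, and the arrow fragments mentioned above.
E_t = {"w2", "w3", "w5", "w6"}
V_phi = {"w1", "w2", "w3", "w5"}
W = {1: {"w2", "w3", "w5"}, 2: {"w1", "w2", "w4", "w5"},
     3: {"w3", "w5", "w6"}, 4: {"w4", "w5", "w6"}}
arrows = {1: set(), 3: set(),
          2: {("w2", "w1")},   # arrow leaving E(t) inside W_2
          4: {("w5", "w4")}}   # arrow leaving E(t) inside W_4

def plausible(i):
    # (1) some world the agent considers possible lies in E(t);
    # (2) no arrow between states of W_i starts inside E(t) and ends outside it
    cond1 = bool(E_t & W[i])
    cond2 = all(not (u in E_t and v not in E_t)
                for (u, v) in arrows[i] if u in W[i] and v in W[i])
    return cond1 and cond2

def admissible(i):
    # phi must hold at every state of E(t) ∩ W_i
    return (E_t & W[i]) <= V_phi

def justifies(i):
    return plausible(i) and admissible(i)

print([(i, plausible(i), admissible(i), justifies(i)) for i in (1, 2, 3, 4)])
```

Running the sketch reproduces the four scenarios described above: only agent 1 accepts t as a plausible justification for ϕ.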


Let us now analyze the standard belief (described by the modality Bi) of each of the agents with respect to ϕ. From the plausibility orders, we can determine the sets Besti(M), for each i ∈ A:

1. Best1(M) = {w2, w3}.
2. Best2(M) = {w1, w4}.
3. Best3(M) = {w3}.
4. Best4(M) = {w4, w6}.

As ϕ is satisfied in all of the states in Best1(M) and in Best3(M), we have M, w5 ⊨ B1 ϕ and M, w5 ⊨ B3 ϕ. On the other hand, as ϕ is not satisfied in w4, which is in Best2(M) and in Best4(M), we have M, w5 ⊨ ¬B2 ϕ and M, w5 ⊨ ¬B4 ϕ. As belief is consistent, we know that, for agents 1 and 3, we have M, w5 ⊨ ¬B1 ¬ϕ and M, w5 ⊨ ¬B3 ¬ϕ. However, we still need to determine whether agents 2 and 4 believe in ¬ϕ. As ¬ϕ is satisfied in all of the states in Best4(M), we have M, w5 ⊨ B4 ¬ϕ. On the other hand, as ¬ϕ is not satisfied in w1, which is in Best2(M), we have M, w5 ⊨ ¬B2 ¬ϕ. Thus, we see that, with respect to the standard belief of a formula ϕ, there are also various possible scenarios: an agent may believe in ϕ (as agents 1 and 3), in ¬ϕ (as agent 4) or in neither of them (as agent 2).

We can then see that agents 1 and 3 agree on the standard belief of ϕ, but disagree on whether t is a plausible justification for ϕ. On the other hand, agents 2 and 3 agree that t is not a plausible justification for ϕ, but disagree on the standard belief of ϕ. Finally, agents 1 and 4 disagree both on whether t is a plausible justification for ϕ and on the standard belief of ϕ. With this example, we can see that, in our logic, agents can disagree not only over whether a sentence is true or false, but also over whether some evidence is a valid justification for a sentence. We think that, because of the possibility of describing all of these different scenarios, this logic is a good first step for the development of a dynamic logic that can model the processes of argumentation and debate in multi-agent systems.
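The standard-belief claims of this example can also be checked from the sets Besti(M) alone. The helper names in the sketch below are ours; the sets are copied from the example.

```python
# Best_i(M) and the valuation of phi, as given in the example.
Best = {1: {"w2", "w3"}, 2: {"w1", "w4"}, 3: {"w3"}, 4: {"w4", "w6"}}
V_phi = {"w1", "w2", "w3", "w5"}      # states where phi holds

def believes_phi(i):
    # B_i phi: phi holds throughout Best_i(M)
    return Best[i] <= V_phi

def believes_not_phi(i):
    # B_i ¬phi: phi fails throughout Best_i(M)
    return not (Best[i] & V_phi)

for i in (1, 2, 3, 4):
    print(i, believes_phi(i), believes_not_phi(i))
```

Agents 1 and 3 believe ϕ, agent 4 believes ¬ϕ, and agent 2 believes neither, matching the discussion above.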
As we described in Section 1, in such a dynamic logic, an agent i could make sincere announcements of the form t :i ϕ, and the behavior of the other agents upon receiving this announcement should be based both on ϕ and on t, i.e., on whether the agent believes in ϕ, in ¬ϕ or in neither of them, and on whether the agent considers t to be admissible for ϕ or not and to be plausible or not. This could potentially lead to situations in which one agent takes the announcement into consideration while another one completely disregards it.

To finish this example, suppose that agent 2 considers it important to have a label for his set Best2(M). So, if there is no evidence term t built from proposition symbols such that E(t) = Best2(M), or simply if agent 2 does not want to make this verification, he may label this set with one of his evidence variables, such as x21, as represented in Fig. 1 above.

3.2. Quantification over evidence terms

We can also describe an extension of LPJ with quantification over evidence terms, as [18] did in the context of traditional Justification Logics. We add to Definition 3.1 formulas of the form Ji ϕ. The notion of satisfaction for these new formulas is defined as

M, w ⊨ Ji ϕ iff there is an evidence term t ∈ T such that M, w ⊨ t :i ϕ.

The formulas Ji ϕ can then be read as “agent i has a justification (or an evidence) for ϕ”. So, the modalities Ji denote a form of existential quantification over evidence terms. We call the logic obtained with the addition of these new modalities LPJ_Q.

Let us now analyze the behavior of these modalities and how they interact with the other operators in the language. We start by analyzing their duals. Even though Ji ϕ reads as “there is an evidence term that justifies ϕ”, its dual is not equivalent to “all evidence terms justify ϕ”, as is shown below.

M, w ⊨ ¬Ji ¬ϕ iff there is no evidence term t ∈ T such that M, w ⊨ t :i ¬ϕ iff, for all evidence terms t ∈ T, M, w ⊨ ¬t :i ¬ϕ.

But the condition “for all evidence terms t ∈ T, M, w ⊨ ¬t :i ¬ϕ” is not semantically equivalent to the condition “for all evidence terms t ∈ T, M, w ⊨ t :i ϕ”, which is the condition that reads as “all evidence terms justify ϕ”. This happens because the operator :i is not self-dual. Therefore, the correct reading for the duals of the modalities Ji is “there is no evidence term that justifies ¬ϕ”. If we think of the Ji modalities as asserting the existence of a justification for ϕ, their duals assert the absence of a “refutation” of ϕ.

We proceed to show that the formula Ji(ϕ → ψ) → (Ji ϕ → Ji ψ) is valid, which means that Ji is a normal modality. Suppose that M, w ⊨ Ji(ϕ → ψ) and M, w ⊨ Ji ϕ. These assumptions imply the existence of evidence terms t and t′ such that M, w ⊨ t ▷i (ϕ → ψ) and M, w ⊨ t′ ▷i ϕ. From these two results, we obtain M, w ⊨ (t + t′) ▷i ((ϕ → ψ) ∧ ϕ), which also implies M, w ⊨ (t + t′) ▷i ψ. From the two initial assumptions, we can also conclude that Pi t and Pi t′, which implies Pi (t + t′). Hence, we obtain M, w ⊨ (t + t′) :i ψ, which also implies that M, w ⊨ Ji ψ. Using an analogous reasoning, we can also show the validity of the formulas (Ji ϕ ∧ Ji ψ) ↔ Ji(ϕ ∧ ψ) and (Ji ϕ ∨ Ji ψ) → Ji(ϕ ∨ ψ). However, unlike the case with the ∧ operator, the formula Ji(ϕ ∨ ψ) → (Ji ϕ ∨ Ji ψ) can be falsified.


1. Tautologies, Duals and Normality
PL: Propositional Tautologies
DuK: Ki ϕ ↔ ¬K̂i ¬ϕ
Du2: 2i ϕ ↔ ¬3i ¬ϕ
Dut: [t?]ϕ ↔ ¬⟨t?⟩¬ϕ
KK: Ki(ϕ → ψ) → (Ki ϕ → Ki ψ)
K2: 2i(ϕ → ψ) → (2i ϕ → 2i ψ)
Kt: [t?](ϕ → ψ) → ([t?]ϕ → [t?]ψ)

2. ∼i is an equivalence relation
TK: Ki ϕ → ϕ
4K: Ki ϕ → Ki Ki ϕ
5K: ¬Ki ϕ → Ki ¬Ki ϕ

3. ≥i is reflexive and transitive
T2: 2i ϕ → ϕ
42: 2i ϕ → 2i 2i ϕ

4. ∼i = ≥i ∪ (≥i)^{-1}
Rel1: Ki ϕ → 2i ϕ
Rel2: K̂i ϕ ∧ K̂i ψ → K̂i(ϕ ∧ 3i ψ) ∨ K̂i(3i ϕ ∧ ψ)

5. Construction of evidence terms
E1: ⟨t?⟩ϕ ↔ (⟨t?⟩⊤ ∧ ϕ)
E2: ⟨p?⟩⊤ ↔ p
E3: ⟨g?⟩⊤
E4: ⟨t̄?⟩⊤ ↔ ¬⟨t?⟩⊤
E5: (⟨t?⟩⊤ ∧ ⟨s?⟩⊤) ↔ ⟨(s + t)?⟩⊤

6. Admissibility, Plausibility and Justification
Adm: (t ▷i ϕ) ↔ Ki [t?]ϕ
Pla: Pi t ↔ ((Ki [t?]2i ⟨t?⟩⊤) ∧ (K̂i ⟨t?⟩⊤))
Jus: t :i ϕ ↔ (t ▷i ϕ) ∧ Pi t

7. Rules
MP: From ϕ → ψ and ϕ, derive ψ
Gen: From ϕ, derive Ki ϕ, 2i ϕ and [t?]ϕ

Fig. 2. Axiomatic System for LPJ.

To see this, consider a model with three states W = {a, b, c} and the following plausibility relation for an agent i: ≥i = {(a, b), (a, c), (b, c), (c, b)}. Consider also that V(p) = {b} and V(q) = {c}. Then, the set of best states according to this plausibility relation is {b, c}. So, as was shown in Proposition 3.5, for any evidence to be plausible, it must contain the states b and c. But then no plausible evidence can be admissible for p, since b and c disagree on its valuation. The same holds for q. So, the formula ¬Ji p ∧ ¬Ji q ≡ ¬(Ji p ∨ Ji q) is true in every state of this model. At the same time, an evidence that contains exactly the set {b, c} is plausible and is admissible for p ∨ q. So, the formula Ji(p ∨ q) can be satisfied in every state of the model, which means that the formula Ji(p ∨ q) → (Ji p ∨ Ji q) is falsified in every state of this model.

From these results, we can conclude that Ji is a normal modality that interacts with the other operators of the language in the same way as the other “Box” modalities from traditional modal logics, like Ki, Bi, 2i and so on. From the definition above, if M, w ⊨ Ji ϕ, then M, w ⊨ t :i ϕ, for some t ∈ T. Then, by Proposition 3.6, if M, w ⊨ t :i ϕ, then M, w ⊨ Bi ϕ. From these two implications, we can conclude that the formula Ji ϕ → Bi ϕ is valid. In fact, in any model where the evidence function E is surjective onto the set 2^W, we also have M, w ⊨ Bi ϕ → Ji ϕ, since in this case it is guaranteed that there is an evidence term t such that E(t) = Besti(M). Hence, in such models, we have M, w ⊨ Ji ϕ ↔ Bi ϕ. Even if the evidence function is not surjective, but we still have E(t) = Besti(M) for some evidence term t ∈ T, then the previous equivalence remains true. It is also interesting to notice that, since in our semantics evidences correspond to sets of states in the model, the modalities Ji represent, in fact, a form of second-order quantification.
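The counterexample can be verified by brute force over all candidate evidence sets. The sketch below uses the single-agent simplification that the set of worlds the agent considers possible is all of W, so admissibility for φ just means that φ holds throughout the evidence set; the helper names are ours.

```python
from itertools import combinations

# The three-state counterexample: best states {b, c}, V(p) = {b}, V(q) = {c}.
# By Proposition 3.5, a plausible evidence set must contain both best states.
W = {"a", "b", "c"}
best = {"b", "c"}
V = {"p": {"b"}, "q": {"c"}}

def subsets(s):
    s = list(s)
    return (set(c) for r in range(len(s) + 1) for c in combinations(s, r))

plaus = [X for X in subsets(W) if best <= X]   # candidate plausible evidence sets

# No plausible evidence is admissible for p alone or for q alone...
assert not any(X <= V["p"] or X <= V["q"] for X in plaus)
# ...but {b, c} itself is plausible and admissible for p \/ q.
assert any(X <= V["p"] | V["q"] for X in plaus)
print("counterexample verified")
```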
3.3. Axiomatic systems and complexity results

We present below axiomatic systems for the logics LPJ and LPJ_Q, beginning with the first of these logics. We consider the set of axioms and rules in Fig. 2, where ϕ and ψ are arbitrary formulas, t and s are arbitrary evidence terms and p is an arbitrary proposition symbol. We present the axioms divided in groups related to their function. In order to build an axiomatic system for the logic LPJ_Q, we just add to the axiomatic system above the following axiom:

J: t :i ϕ → Ji ϕ,

where t is an arbitrary evidence term and ϕ is an arbitrary formula.

From the axioms above, Rel2 is perhaps the one with the least clear purpose. Rel1 states that every pair in ≥i is also in ∼i, while Rel2 states that every pair in ∼i is in ≥i or in (≥i)^{-1}. Rel2 is an adaptation with two modalities (K̂i and 3i) of the so-called .3 axiom (see [3] for more details about this axiom). Such use of the axioms Rel1 and Rel2 to describe the interaction between the relations ≥i and ∼i can also be found in [20].

Every formula φ derivable from the axiomatic system above is called a theorem (denoted by ⊢ φ). A formula φ is consistent iff ¬φ is not a theorem, i.e., iff ⊬ ¬φ, and inconsistent otherwise. A finite set of formulas Γ = {φ1, . . . , φn} is consistent iff the formula ψ = φ1 ∧ . . . ∧ φn is consistent. Finally, an infinite set of formulas Γ is consistent iff every finite subset Γ′ ⊂ Γ is consistent.


The axiomatic system is said to be sound if every satisfiable formula is consistent (or, in an equivalent definition, if every satisfiable set of formulas is consistent). Soundness also means that every theorem is valid (⊢ ϕ ⇒ ⊨ ϕ). The axiomatic system is said to be complete if every consistent formula is satisfiable, which also means that every valid formula is a theorem (⊨ ϕ ⇒ ⊢ ϕ). It is said to be strongly complete if every consistent set of formulas is satisfiable. Unlike the case of soundness, the two definitions for completeness are not equivalent: strong completeness implies completeness, but the converse does not hold.

The proof of the soundness of our axiomatic systems for LPJ and LPJ_Q is straightforward. It is not difficult to show that each of the axioms is valid according to the LPJ and LPJ_Q semantics and that the application of each of the rules to valid formulas gives formulas that are also valid.

Since the above axioms are valid, we can use axioms E1 through E5 and axioms Adm, Pla and Jus as reduction axioms. Reduction axioms are valid formulas that allow formulas of a logic to be translated into formulas of a simpler logic (a logic with fewer operators in its language, for instance). In extensions of traditional epistemic logics, it is usually desirable to find reduction axioms that allow the translation of formulas in the extension to formulas in the basic epistemic logic. This simplifies the study of the logic, as we can use the translation to relate results for the extended logic with known results for the basic logic. This is the strategy used to study many variants of Dynamic Epistemic Logics [4], where reduction axioms are used to translate their formulas to formulas of the basic (static) epistemic logic. We use this strategy with our logics, starting with LPJ.
We can use the axioms E1 through E5, Adm, Pla and Jus as reduction axioms to define a translation from the formulas in our logic to formulas in a basic epistemic logic that contains only the modalities 2i and Ki and that does not contain evidence terms. We will define the translation in an inductive way. In order to do this, we first define a notion of complexity of a formula ϕ or of an evidence term t, denoted by c(ϕ) and c(t), respectively. We start with the complexity of an evidence term t:

1. If t ∈ Φ or t ∈ Xi, for i ∈ A, or t = g, then c(t) = 1.
2. If t = s̄, then c(t) = c(s) + 1.
3. If t = t1 + t2, then c(t) = max(c(t1), c(t2)) + 1.

Now, we define the complexity of a formula ϕ:

1. If ϕ ∈ Φ or ϕ = ⊤, then c(ϕ) = 1.
2. If ϕ = ¬ψ or ϕ = Ki ψ or ϕ = 2i ψ, then c(ϕ) = c(ψ) + 1.
3. If ϕ = ϕ1 ∧ ϕ2, then c(ϕ) = max(c(ϕ1), c(ϕ2)) + 1.
4. If ϕ = [t?]ψ, then c(ϕ) = max(c(t), c(ψ)) + 1.
5. If ϕ = t ▷i ψ, then c(ϕ) = max(c(t), c(ψ)) + 2.
6. If ϕ = Pi t, then c(ϕ) = c(t) + 4.
7. If ϕ = t :i ψ, then c(ϕ) = max(c(ψ) + 3, c(t) + 5).

In order to define the translation, we begin by considering a new expanded set Φ′ of proposition symbols, in which the evidence variables of the sets Xi, i ∈ A, are incorporated. Φ′ is defined as Φ′ = Φ ∪ ⋃_{i∈A} Xi. We denote by F_Φ the set of formulas of the original language of LPJ, i.e., formulas built from the set Φ of proposition symbols, and by F_Φ′ the set of LPJ formulas that can be built from the set Φ′ of proposition symbols. It is important to notice that F_Φ ⊂ F_Φ′, as we can think of F_Φ as the set of formulas in F_Φ′ satisfying the restriction that the atomic symbols in ⋃_{i∈A} Xi only occur inside evidence terms.

We can now define the translation δT : T → F_Φ′^Bool, where T is the set of evidence terms of the original language of LPJ and F_Φ′^Bool is the set of propositional formulas built from the expanded set Φ′ of proposition symbols using only the boolean operators for negation and conjunction. This translation δT is defined inductively as follows:

1. If q ∈ Φ′, then δT(q) = q (following axiom E2).
2. δT(g) = ⊤ (following axiom E3).
3. δT(t̄) = ¬δT(t) (following axiom E4).
4. δT(t + s) = δT(t) ∧ δT(s) (following axiom E5).
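As a sanity check, the term translation δT can be prototyped directly. The tagged-tuple encoding of terms and boolean formulas below is our own illustrative choice, not notation from the paper.

```python
# Evidence terms: ("atom", q) for q in the expanded set of proposition symbols,
# ("g",) for the constant g, ("bar", t) for the complement, ("plus", t, s) for
# the union. Boolean formulas use ("top",), ("not", f) and ("and", f, g).

def delta_T(t):
    tag = t[0]
    if tag == "atom":                 # item 1 (axiom E2): <q?>T <-> q
        return t
    if tag == "g":                    # item 2 (axiom E3): delta_T(g) = T
        return ("top",)
    if tag == "bar":                  # item 3 (axiom E4): complement -> negation
        return ("not", delta_T(t[1]))
    if tag == "plus":                 # item 4 (axiom E5): union -> conjunction
        return ("and", delta_T(t[1]), delta_T(t[2]))
    raise ValueError(f"unknown term tag: {tag}")

print(delta_T(("plus", ("bar", ("atom", "p")), ("g",))))
# -> ("and", ("not", ("atom", "p")), ("top",))
```

Note how each clause recurses only on strictly smaller terms, mirroring the complexity argument in the text.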

It is straightforward to see that, in each of the rules above, the translation of the evidence term on the left is defined in terms of the translation of evidence terms that have a strictly lower complexity. Besides that, the translation of the evidence terms with the lowest complexity (c(t) = 1) is given in a non-recursive way. Thus, this translation can be effectively calculated for any evidence term t. Moreover, |δT(t)| = O(|t|), for any evidence term t.

Using the translation δT, we can define the translation δF : F_Φ → F_Φ′^{K,2}, where F_Φ′^{K,2} is the set of modal formulas built from the expanded set Φ′ of proposition symbols that may contain only the modalities Ki and 2i (which implies that they do not contain evidence terms). This translation δF is defined inductively as follows:


1. If p ∈ Φ, then δF(p) = p.
2. δF(⊤) = ⊤.
3. δF(¬ϕ) = ¬δF(ϕ).
4. δF(ϕ1 ∧ ϕ2) = δF(ϕ1) ∧ δF(ϕ2).
5. δF(Ki ϕ) = Ki δF(ϕ).
6. δF(2i ϕ) = 2i δF(ϕ).
7. δF([t?]ϕ) = δT(t) → δF(ϕ) (following axiom E1, rewritten as [t?]ϕ ↔ (⟨t?⟩⊤ → ϕ)).
8. δF(t ▷i ϕ) = δF(Ki [t?]ϕ) = Ki(δT(t) → δF(ϕ)) (following axiom Adm and the previous items).
9. δF(Pi t) = δF((Ki [t?]2i ⟨t?⟩⊤) ∧ (K̂i ⟨t?⟩⊤)) = Ki(δT(t) → 2i δT(t)) ∧ K̂i δT(t) (following axiom Pla and the previous items).
10. δF(t :i ϕ) = δF(t ▷i ϕ) ∧ δF(Pi t) (following axiom Jus).
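Items 1 through 10 can likewise be prototyped over a tagged-tuple AST (the encoding is ours; ("Khat", i, f) stands for the dual K̂i, ("box", i, f) for 2i, and ("imp", f, g) for implication):

```python
def delta_T(t):
    # term translation from the previous list (axioms E2-E5)
    tag = t[0]
    if tag == "atom": return t
    if tag == "g":    return ("top",)
    if tag == "bar":  return ("not", delta_T(t[1]))
    if tag == "plus": return ("and", delta_T(t[1]), delta_T(t[2]))
    raise ValueError(tag)

def delta_F(phi):
    tag = phi[0]
    if tag in ("atom", "top"):                   # items 1-2
        return phi
    if tag == "not":                             # item 3
        return ("not", delta_F(phi[1]))
    if tag == "and":                             # item 4
        return ("and", delta_F(phi[1]), delta_F(phi[2]))
    if tag in ("K", "box"):                      # items 5-6: K_i, 2_i commute
        return (tag, phi[1], delta_F(phi[2]))
    if tag == "test":                            # item 7: [t?]psi
        return ("imp", delta_T(phi[1]), delta_F(phi[2]))
    if tag == "adm":                             # item 8: t admissible for psi
        i, t, psi = phi[1], phi[2], phi[3]
        return ("K", i, ("imp", delta_T(t), delta_F(psi)))
    if tag == "P":                               # item 9: P_i t
        i, t = phi[1], phi[2]
        dt = delta_T(t)
        return ("and", ("K", i, ("imp", dt, ("box", i, dt))), ("Khat", i, dt))
    if tag == "just":                            # item 10: t :_i psi
        i, t, psi = phi[1], phi[2], phi[3]
        return ("and", delta_F(("adm", i, t, psi)), delta_F(("P", i, t)))
    raise ValueError(tag)

print(delta_F(("just", 1, ("g",), ("atom", "p"))))
```

Since every evidence term and every evidence-dependent operator is eliminated, the output lies in the Ki/2i fragment, as the text requires.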

In the same way as with the translation δT, it is straightforward to see that, in each of the rules above, the translation of the formula on the left is defined in terms of the translation of formulas that have a strictly lower complexity. Besides that, the translation of the formulas with the lowest complexity (c(ϕ) = 1) is given in a non-recursive way. Thus, this translation can be effectively calculated for any formula ϕ. We also have, in analogy with the translation δT, that |δF(ϕ)| = O(|ϕ|), for any formula ϕ.

The translation δF goes from the set of formulas F_Φ to the set of formulas F_Φ′^{K,2}. Both of these sets are subsets of F_Φ′. Then, considering our axiomatic system defined over the set F_Φ′ and considering models for formulas in F_Φ′, we have the following result, which states that we indeed have a useful translation.

Lemma 3.10. If ϕ ∈ F_Φ, then ⊢ ϕ ↔ δF(ϕ) and ⊨ ϕ ↔ δF(ϕ).

Proof. First, we can see that the condition ϕ ∈ F_Φ is necessary, since the translation δF is only defined for such formulas. The result that ⊢ ϕ ↔ δF(ϕ) follows from the fact that each step of the translation is done through the use of one of the reduction axioms, which are all bi-implications² and are all sound. From ⊢ ϕ ↔ δF(ϕ), using the soundness of the axiomatic system, we can infer that ⊨ ϕ ↔ δF(ϕ). □

Even though F_Φ and F_Φ′^{K,2} are both subsets of F_Φ′, models M = (F, V, E) where the domain of the valuation function is Φ can be used to evaluate formulas in F_Φ but not in F_Φ′^{K,2}, while models M = (F, V) where the domain of the valuation function is Φ′ but there is no evidence function can be used to evaluate formulas in F_Φ′^{K,2} but not in F_Φ. We show how to transform any such “restricted model” into a model M̃ where formulas from both sets can be evaluated (M̃ is a model for formulas in F_Φ′). In such a model M̃, we have

1. If ϕ ∈ F_Φ, M, w ⊨ ϕ if and only if M̃, w ⊨ ϕ, or,
2. If ϕ ∈ F_Φ′^{K,2}, M, w ⊨ ϕ if and only if M̃, w ⊨ ϕ.

Then, from the lemma above, we also have

M̃, w ⊨ ϕ if and only if M̃, w ⊨ δF(ϕ)

in such models. Another important feature of these transformations is that the frame F of the model is not altered, so both M̃ and M have the same size.

Suppose that M = (F, V, E), where V : Φ → 2^W. Then, we build M̃ as M̃ = (F, V′, E), where V′ : Φ′ → 2^W is defined such that V′(p) = V(p), if p ∈ Φ, and V′(x) = E(x), if x ∈ ⋃_{i∈A} Xi. On the other hand, suppose that M = (F, V) is a model with no evidence function. Then, we build M̃ as M̃ = (F, V, E), where E is defined such that E(q) = V(q), for all q ∈ Φ′.

We will prove the strong completeness of our axiomatic system at the end of the section. First, we use the translation built from the reduction axioms that are present in that axiomatic system to show some complexity results for our logics.

Definition 3.11 (Model-Checking Problem). Given a finite model M and a formula ϕ, the Model-Checking Problem consists of determining the set SM(ϕ) = {w ∈ M : M, w ⊨ ϕ}.
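Definition 3.11 suggests the usual bottom-up labelling algorithm. A minimal sketch for the Ki/2i fragment (the target language of δF) follows; the model encoding and all names are illustrative.

```python
def sat(model, phi):
    """Return S_M(phi) = {w : M, w |= phi}. model = (W, R, V), where R maps a
    pair (modality, agent) to a set of edges and V maps atoms to sets of states."""
    W, R, V = model
    tag = phi[0]
    if tag == "atom": return set(V[phi[1]])
    if tag == "top":  return set(W)
    if tag == "not":  return set(W) - sat(model, phi[1])
    if tag == "and":  return sat(model, phi[1]) & sat(model, phi[2])
    if tag in ("K", "box"):
        # a box formula holds at w iff every successor satisfies the argument
        good = sat(model, phi[2])
        rel = R[(tag, phi[1])]
        return {w for w in W if all(v in good for (u, v) in rel if u == w)}
    raise ValueError(tag)

# A two-state example: states a and b see each other and themselves for K_1.
W = {"a", "b"}
model = (W,
         {("K", 1): {("a", "a"), ("a", "b"), ("b", "b"), ("b", "a")}},
         {"p": {"a"}})
print(sat(model, ("K", 1, ("atom", "p"))))
```

Each subformula is labelled once over the whole state set, which is what yields the linear bound in the product of formula length and model size stated in Theorem 3.12.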

Theorem 3.12 (Model-Checking complexity for LPJ). The complexity of the Model-Checking Problem for the logic LPJ is linear in the product of the length of the formula and the size of the model.

² We can think of E3 as ⟨g?⟩⊤ ↔ ⊤.


Proof. Given a model M for formulas in F_Φ and a formula ϕ ∈ F_Φ, we start by transforming M into a model M̃ for formulas in F_Φ′ such that M, w ⊨ ϕ if and only if M̃, w ⊨ ϕ, for all states w. Then, we have that M̃, w ⊨ ϕ if and only if M̃, w ⊨ δF(ϕ). Finally, for the evaluation of the formula δF(ϕ), the evidence function in M̃ is not used. So, M̃, w ⊨ δF(ϕ) if and only if M′, w ⊨ δF(ϕ), where M′ is the same as M̃ but without the evidence function E. Thus, solving the original Model-Checking Problem is equivalent to solving a Model-Checking Problem for a formula in a standard multi-modal language with modalities Ki and 2i in a standard Kripke model. This can be done in time that is linear in the product of the length of the formula and the size of the model [22]. As |δF(ϕ)| = O(|ϕ|) and |M′| = |M|, the theorem is proved. □

It is an open question at this point whether the Model-Checking Problem for LPJ_Q has the same complexity as the Model-Checking Problem for LPJ. We conjecture that the answer is negative. The Model-Checking algorithms work inductively on the subformulas of the formula given as input, following a convenient order that allows the algorithm to determine a set SM(ϕ) based on the sets of the other subformulas that were previously computed. The difficulty that appears in the case of LPJ_Q is how to compute (efficiently, if possible) the set SM for a formula of the form Ji ψ. It is simple to notice that SM(Ji ψ) is either ∅ or W, where W is the set of states of the model. We have that SM(Ji ψ) = W if and only if there is an evidence term t such that the set E(t) is a plausible justification for the formula ψ. It is not too difficult to determine a subset of W that could work as a plausible justification for ψ: we can consider the formula p :i ψ, where p does not occur in ψ, and find a valuation for V(p) among the 2^|W| possible ones such that this formula is satisfied in the model. This can be done in nondeterministic polynomial time. The difficult part, after computing such a subset, is to determine whether it matches E(t) for some evidence term t. If it does, then SM(Ji ψ) = W. If it does not, then we can try again with a new subset that satisfies p :i ψ. If no such subset matches an evidence term, then SM(Ji ψ) = ∅. However, there does not seem to exist any efficient or simple way to perform this calculation of a suitable evidence term t such that E(t) matches a subset that was previously computed. As future work, it would be interesting to determine some results about the complexity of the Model-Checking Problem for LPJ_Q.

Theorem 3.13 (Finite Model Property for LPJ). Every satisfiable formula ϕ of LPJ is satisfiable in a finite model.

Proof. Suppose that the formula ϕ ∈ F_Φ is satisfiable in a model M for formulas in F_Φ. Then, it is also satisfiable in a model M̃ for formulas in F_Φ′. This, by Lemma 3.10, implies that δF(ϕ) is satisfiable in that same model M̃. Finally, we can drop the evidence function of M̃ and conclude that δF(ϕ), which is a formula in a standard multi-modal language with modalities Ki and 2i, is satisfiable in a standard Kripke model. Then, the filtration argument for standard modal logics [3] lets us conclude that δF(ϕ) is satisfiable in a finite standard Kripke model M_f. Thus, it is also satisfiable in a finite model M̃_f = (F_f, V_f, E_f) for formulas in F_Φ′. By Lemma 3.10, this finite model also satisfies ϕ. We can then obtain a finite model M′_f for formulas in F_Φ that satisfies ϕ by restricting the valuation function to the elements of Φ, proving the theorem. □

The Finite Model Property remains valid in LPJ_Q, as shown in the result below.

Theorem 3.14 (Finite Model Property for LPJ_Q). Every satisfiable formula ϕ of LPJ_Q is satisfiable in a finite model.

Proof. In the presence of the modalities Ji, we cannot extend the translation δF in a manner that preserves Lemma 3.10. But we can extend δF to a translation δF_Q : F_Φ^Q → F_Φ′^{K,2} (where F_Φ^Q denotes the set of formulas of LPJ_Q over the original set of proposition symbols Φ) such that a formula ϕ ∈ F_Φ^Q is satisfiable if and only if δF_Q(ϕ) is satisfiable. Moreover, if the two formulas are satisfiable, they are satisfiable in models of the same size. Such a “weaker” translation is sufficient for our needs. The translation that we propose is inspired by the process of Skolemization of first-order formulas [23].

We define the complexities c(Ji ϕ) = max(c(ϕ), 6) and c(¬Ji ϕ) = c(Ji ϕ) + 1. We can then define the translation δF_Q inductively in the same way as we did for the translation δF. The translation δF_Q is defined as follows:

1. If ϕ ≠ ¬ψ and ϕ ≠ Ji ψ, then δF_Q(ϕ) = δF(ϕ).
2. If ϕ = ¬ψ and ψ ≠ Ji ψ′, then δF_Q(ϕ) = δF(ϕ).
3. If ϕ = Ji ψ, then δF_Q(ϕ) = δF(q :i ψ), where q is a fresh proposition symbol that does not occur in ψ.
4. If ϕ = ¬Ji ψ, then δF_Q(ϕ) = δF(¬g :i ψ).

Each time we need to compute δF_Q(Ji ψ), we expand the set of proposition symbols Φ′ with a fresh proposition symbol q and define δF_Q(Ji ψ) = δF(q :i ψ). It is important that we use a different proposition symbol for each formula of the form Ji ψ. As q is a fresh proposition symbol, it is not difficult to see that the formulas Ji ψ and q :i ψ are equi-satisfiable. This happens because, q being fresh, the truth value of any formula φ that may appear in a calculation δF(φ) will not depend on the valuation given to q. So, Ji ψ is satisfiable in a model if and only if we can obtain a valuation V(q) that satisfies q :i ψ in that same model.
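The Skolemization-style clauses above can be sketched as follows. The AST encoding and helper names are ours; this sketch only rewrites the Ji layer, leaving the remaining operators to the translation δF from the previous subsection.

```python
import itertools

# J_i psi becomes q :_i psi for a fresh proposition symbol q (clause 3);
# ¬J_i psi becomes ¬(g :_i psi) (clause 4). Formulas ("J", i, psi) and
# ("just", i, term, psi) are our own tagged-tuple encoding.

_fresh = itertools.count()

def delta_FQ(phi):
    tag = phi[0]
    if tag == "J":                            # clause 3: fresh q per occurrence
        q = ("atom", f"q{next(_fresh)}")
        return ("just", phi[1], q, delta_FQ(phi[2]))
    if tag == "not" and phi[1][0] == "J":     # clause 4: use the constant g
        inner = phi[1]
        return ("not", ("just", inner[1], ("g",), delta_FQ(inner[2])))
    if tag == "not":                          # clause 2
        return ("not", delta_FQ(phi[1]))
    if tag == "and":
        return ("and", delta_FQ(phi[1]), delta_FQ(phi[2]))
    return phi                                # clause 1: everything else to delta_F
```

Note that the counter guarantees a different proposition symbol for each occurrence of Ji ψ, which is exactly the freshness requirement stated above.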


On the other hand, if ¬Ji ψ is satisfied in a model M, this means that no evidence term in the model is a plausible justification for ψ. In particular, ¬g :i ψ is satisfied in this model. Reciprocally, if ¬g :i ψ is satisfied in a model, then g is not admissible for ψ in that model (g is always plausible). This means that there is at least one state in the model that does not satisfy ψ. If every set E(t) contains a state where ψ is not satisfied, then ¬Ji ψ is satisfied in this model.

Now, suppose that ϕ ∈ F_Φ^Q is satisfiable. Then, δF_Q(ϕ) ∈ F_Φ′^{K,2} is also satisfiable. We can then continue following the steps of the proof of the previous theorem (Theorem 3.13) to conclude that δF_Q(ϕ) is satisfiable in a finite model. Then, by the way the translation was built, ϕ is also satisfiable in a finite model. □

Theorem 3.15 (Satisfiability complexity for LPJ). The complexity of the Satisfiability Problem for the logic LPJ is PSPACE-Complete in the length of the formula.

Proof. From Lemma 3.10, if ϕ ∈ F_Φ (if ϕ is a formula from our original language), then ϕ ↔ δF(ϕ) is valid. This implies that M, w ⊨ ϕ if and only if M, w ⊨ δF(ϕ). Thus, in order to verify the satisfiability of ϕ, we can compute the formula δF(ϕ) and then verify its satisfiability. We can compute δF(ϕ) in polynomial time and |δF(ϕ)| = O(|ϕ|). The formula δF(ϕ) is a formula in a standard multi-modal language with modalities Ki and 2i. The complexity of verifying the satisfiability of such a formula is PSPACE-Complete in the length of δF(ϕ) [3]. Thus, we can conclude that the complexity of verifying the satisfiability of a formula ϕ ∈ F_Φ is also PSPACE-Complete in the length of ϕ. □

The complexity of the Satisfiability Problem for LPJ_Q turns out to be the same as the one for LPJ, as shown in the result below.

Theorem 3.16 (Satisfiability complexity for LPJ_Q). The complexity of the Satisfiability Problem for the logic LPJ_Q is PSPACE-Complete in the length of the formula.

Proof. For this proof, we use the same translation δF_Q defined in the proof of Theorem 3.14. As ϕ ∈ F_Φ^Q is satisfiable if and only if δF_Q(ϕ) is satisfiable, we can, as in the previous theorem (Theorem 3.15), verify the satisfiability of ϕ by verifying the satisfiability of δF_Q(ϕ). Hence, the result of the previous theorem still follows and the complexity of verifying the satisfiability of a formula of LPJ_Q is PSPACE-Complete. □

We finish this section proving the strong completeness of our axiomatic systems. The strong completeness proof for our first axiomatic system is given in the following theorem.

Theorem 3.17 (Strong completeness for LPJ). Every consistent set of LPJ formulas is satisfiable.

Proof. Let Γ be a consistent set of formulas such that Γ ⊆ F_Φ. Then, Γ′ = δF(Γ) = {δF(ϕ) : ϕ ∈ Γ} is also a consistent set. We have that Γ′ ⊆ F_Φ′^{K,2}. So, the formulas in Γ′ are formulas in a standard multi-modal language with modalities Ki and 2i. We can use the standard completeness proof for such logics [3] to show that the set Γ′ is satisfiable in a model for formulas in F_Φ′^{K,2}. Using the model transformations described above, we can show that Γ′ is satisfiable in a model for formulas in F_Φ′. Then, Γ is also satisfiable in this model. Finally, if we restrict the valuation function of this model to the set Φ, we get that Γ is satisfiable in a model for formulas in F_Φ, proving the theorem. □

We now proceed to the strong completeness proof of the axiomatic system for LPJ_Q.

Theorem 3.18 (Strong completeness for LPJ_Q). Every consistent set of LPJ_Q formulas is satisfiable.

Proof. The completeness proof follows as the proof of the previous theorem (Theorem 3.17), using the translation δF_Q instead of δF. □

4. Final remarks and future work

In this work, we combine features from Justification Logics and Logics of Plausibility-Based Beliefs to build a logic of explicit beliefs, where each agent can explicitly state his justification for believing in a given sentence. Our logic is a normal modal logic based on the standard Kripke semantics, where we provide a semantic definition for the evidence terms and define the notion of plausible evidence for an agent, based on plausibility relations in the model. In our logic, agents can disagree not only over whether a sentence is true or false, but also on whether some evidence is a valid justification for a sentence or not, as illustrated in Example 3.9. Thus, in this logic, justifications can be faulty and unreliable. After defining our logic and its semantics, we provide a strongly complete axiomatic system for it and show that it has the finite model


property and that the complexities of its Model-Checking and Satisfiability Problems are the same as in the case of basic modal logics. We also present an extension of this logic with a form of quantification over evidence terms.

We feel that this logic is a good first step for the development of a dynamic logic that can model the processes of argumentation and debate in multi-agent systems, as discussed in Section 1 and Example 3.9. We think that the appropriate logic to model these processes would be a dynamic logic of evidence-based beliefs. Thus, as our next step to build such a logic, we need to add to the present logic the actions that would model the communication between agents during the processes of argumentation and debate. In our desired framework, the announcement of a sentence should be accompanied by a justification as to why the agent performing the announcement believes in it. Then, each agent receiving the announcement should judge by himself whether he should start believing in the announced sentence, based on his current beliefs both about what was announced and about the justification that was given. In a preliminary analysis, it seems that we may have a few possible results for an announcement, depending on whether the agent receiving the announcement currently (before the announcement) believes in what was announced, in the negation of it or in neither, and on whether he currently considers the justification that was given plausible or not.

Besides this main goal, we feel that it would also be interesting to develop other proof systems for this logic, such as a tableau system or a sequent calculus, since they are better suited to be used as automatic provers than an axiomatic system. It would also be interesting to further investigate the complexity of the Model-Checking Problem for our logic LPJ_Q with quantification over evidence terms.
Acknowledgements

This work was supported by grants from the Brazilian Research Agencies CNPq (grant number 307551/2012-1) and FAPERJ (grant number E-26/110.716/2011). The author also wishes to thank Bryan Renne, Ruy de Queiroz, João Marcos, Raul Fervari and Guillaume Hoffmann for valuable discussions during and after WoLLIC 2012. Finally, the author thanks the two anonymous referees for valuable suggestions for the improvement of this work.

References

[1] L.M. Schechter, A logic of plausible justifications, in: L. Ong, R. Queiroz (Eds.), Proceedings of the 19th Workshop on Logic, Language, Information and Computation, WoLLIC 2012, in: Lecture Notes in Computer Science, vol. 7456, Springer, Heidelberg, 2012, pp. 306–320.
[2] W. van der Hoek, R. Verbrugge, Epistemic logic: a survey, in: L.A. Petrosjan, V.V. Mazalov (Eds.), Game Theory and Applications, vol. 8, Nova Science Publishers, New York, 2002, pp. 53–94.
[3] P. Blackburn, M. de Rijke, Y. Venema, Modal Logic, Cambridge Tracts in Theoretical Computer Science, vol. 53, Cambridge University Press, Cambridge, 2001.
[4] H. van Ditmarsch, W. van der Hoek, B. Kooi, Dynamic Epistemic Logic, Synthese Library, Springer, Heidelberg, 2007.
[5] A. Baltag, S. Smets, Conditional doxastic models: a qualitative approach to dynamic belief revision, in: R. Queiroz, G. Mints (Eds.), Proceedings of the 13th Workshop on Logic, Language, Information and Computation, WoLLIC 2006, Electron. Notes Theor. Comput. Sci. 165 (2006) 5–21.
[6] A. Baltag, S. Smets, Dynamic belief revision over multi-agent plausibility models, in: G. Bonano, W. van der Hoek, M. Wooldridge (Eds.), Proceedings of the 7th Conference on Logic and the Foundations of Game and Decision Theory, LOFT 2006, Liverpool, 2006, pp. 11–24.
[7] J. van Benthem, Dynamic logic for belief revision, J. Appl. Non-Classical Logics 17 (2) (2007) 129–155.
[8] J. van Benthem, F. Liu, Dynamic logic of preference upgrade, J. Appl. Non-Classical Logics 17 (2) (2007) 157–182.
[9] S. Artemov, Explicit provability and constructive semantics, Bull. Symbolic Logic 7 (1) (2001) 1–36.
[10] S. Artemov, Justified common knowledge, Theoret. Comput. Sci. 357 (1–3) (2006) 4–22.
[11] S. Artemov, E. Nogina, Introducing justification into epistemic logic, J. Logic Comput. 15 (6) (2005) 1059–1073.
[12] J. van Benthem, E. Pacuit, Dynamic logics of evidence-based beliefs, Studia Logica 99 (1–3) (2011) 61–92.
[13] A. Baltag, S. Smets, Talking your way into agreement: belief merge by persuasive communication, in: M. Baldoni (Ed.), Proceedings of the Second Multi-Agent Logics, Languages, and Organisations Federated Workshops, in: CEUR Workshop Proceedings, vol. 494, CEUR-WS.org, Aachen, 2009, pp. 129–141, http://ceur-ws.org/Vol-494/.
[14] T. Yavorskaya, Interacting explicit evidence systems, Theory Comput. Syst. 43 (2) (2008) 272–293.
[15] B. Renne, Dynamic epistemic logic with justification, Ph.D. thesis, The City University of New York, 2008.
[16] B. Renne, Public communication in justification logic, J. Logic Comput. 21 (6) (2011) 1005–1034.
[17] B. Renne, Multi-agent justification logic: communication and evidence elimination, Synthese 185 (S1) (2012) 43–82.
[18] M. Fitting, A quantified logic of evidence, Ann. Pure Appl. Logic 152 (1–3) (2008) 67–83.
[19] M. Fitting, The logic of proofs, semantically, Ann. Pure Appl. Logic 132 (1) (2005) 1–25.
[20] A. Baltag, S. Smets, Dynamic logics for interactive belief revision, course offered at the 21st European Summer School in Logic, Language and Information (ESSLLI), http://alexandru.tiddlyspot.com/#[[ESSLLI'09 Slides]], 2009.
[21] D. Harel, D. Kozen, J. Tiuryn, Dynamic Logic, Foundations of Computing, MIT Press, Cambridge, 2000.
[22] E.M. Clarke, O. Grumberg, D. Peled, Model Checking, MIT Press, Cambridge, 2000.
[23] H.B. Enderton, A Mathematical Introduction to Logic, Academic Press, San Diego, 2001.