On efficiently estimating the probability of extensions in abstract argumentation frameworks

International Journal of Approximate Reasoning 69 (2016) 106–132

Bettina Fazzinga (ICAR-CNR, Rende (CS), Italy), Sergio Flesca (DIMES – University of Calabria, Rende (CS), Italy; corresponding author), Francesco Parisi (DIMES – University of Calabria, Rende (CS), Italy)

Article history: Received 31 July 2015; Received in revised form 19 November 2015; Accepted 23 November 2015; Available online 27 November 2015.

Keywords: Probabilistic reasoning; Argumentation systems; Monte-Carlo simulation

Abstract. Probabilistic abstract argumentation is an extension of Dung's abstract argumentation framework with probability theory. In this setting, we address the problem of computing the probability Pr^sem(S) that a set S of arguments is an extension according to a semantics sem. We focus on four popular semantics (complete, grounded, preferred and ideal-set) for which the state-of-the-art approach is to estimate Pr^sem(S) by means of a Monte-Carlo simulation technique, since computing Pr^sem(S) exactly has been proved to be intractable. In this paper, we propose a new Monte-Carlo simulation approach which exploits some properties of the above-mentioned semantics to estimate Pr^sem(S) using far fewer samples than the state-of-the-art approach, resulting in a significantly more efficient estimation technique.

1. Introduction

Argumentation allows us to model disputes that arise between two or more parties, each providing arguments to assert their reasons. The simplest argumentation framework is the abstract argumentation framework (AAF) introduced in the seminal paper [2]. An AAF is a pair ⟨A, D⟩ consisting of a set A of arguments and a binary relation D over A, called the defeat (or, equivalently, attack) relation. Roughly speaking, an argument is an abstract entity representing an assertion of a party, while an attack from an argument a to an argument b indicates that b is contradicted by a (i.e., if a holds then b cannot hold).

Example 1. Consider the following scenario, where we are interested in deciding whether to organize a BBQ party in our garden. Assume that our arguments are the following:

1. a = Our friends will have great fun at the BBQ party;
2. b = It will rain on Saturday;
3. c = We will not be serving alcoholic drinks at the party.

✩ An abridged version of this paper appeared in [1].
✩✩ The first and the second author were partially supported by the project PON01_01286 – eJRM (electronic Justice Relationship Management).
* Corresponding author. E-mail addresses: [email protected] (B. Fazzinga), [email protected] (S. Flesca), [email protected] (F. Parisi).
http://dx.doi.org/10.1016/j.ijar.2015.11.009


This scenario can be modeled by the AAF A whose set of arguments is {a, b, c} and whose defeat relation consists of the defeats δ1 = (b, a) and δ2 = (c, a), meaning that both in the case of rain and in the case that we do not serve alcoholic drinks our friends may not have much fun. □

Several semantics for AAFs, such as admissible, complete, grounded, preferred, and ideal-set, have been proposed [2–4] to identify "reasonable" sets of arguments, called extensions. Basically, each of these semantics corresponds to some properties which "certify" whether a set of arguments can be profitably used to support a point of view in a discussion. For instance, a set S of arguments is an extension according to the admissible semantics if it has two properties: there is no defeat between arguments in S (it is conflict-free), and every argument (outside S) attacking some argument in S is counterattacked by some argument in S. Intuitively, the fact that a set S is an extension according to the admissible semantics means that, using the arguments in S, you do not contradict yourself, and you can rebut anyone who uses an argument outside S to contradict yours. The fundamental problem of verifying whether a set of arguments is an extension according to one of the above-mentioned semantics has been studied in [5,6].

As a matter of fact, in some contexts there may be a degree of uncertainty about the arguments and attacks used by the parties involved in a dispute. Thus, several proposals have been made to model uncertainty in AAFs by considering weights, preferences, or probabilities associated with arguments and/or defeats. In this regard, [7–10] have recently extended the original Dung framework to obtain probabilistic abstract argumentation frameworks (PrAFs), where probability theory is exploited to model the uncertainty of arguments and defeats. In particular, [8] proposed a PrAF where both arguments and defeats are associated with their (marginal) probabilities.

Example 2. A PrAF F can be obtained from the AAF A of Example 1 by considering the arguments a, b, and c, and the defeats δ1 and δ2 as probabilistic events, having probabilities Pr(a) = 0.8, Pr(b) = 0.3, Pr(c) = 0.5, Pr(δ1) = 0.7, and Pr(δ2) = 0.2. Basically, these probabilities mean that: we are quite certain that our friends will have great fun at the party; we do not much trust the weather forecast predicting rain; we are undecided about serving alcoholic drinks; we are quite confident that in the case of rain our friends will not have great fun; and we believe that the lack of alcoholic drinks is very unlikely to make our friends unhappy. □

The issue of how to assign probabilities to arguments and defeats in the PrAF proposed in [8] has been deeply investigated in [11,12], where the justification and the premise perspectives have been introduced. In this paper, we do not address this issue, but assume that the probabilities of arguments and defeats are given. We deal with the probabilistic counterpart of the problem of verifying whether a set of arguments is an extension according to a semantics, that is, the problem of determining the probability Pr^sem_F(S) that a set S of arguments is an extension according to a given semantics sem. To this end, we consider the PrAF proposed in [8], which is based on the notion of possible world.
Basically, given a PrAF F, a possible world represents a (deterministic) scenario consisting of some subset of the arguments and defeats in F. Hence, a possible world can be viewed as an AAF containing exactly the arguments and the defeats occurring in the represented scenario. For instance, considering the PrAF F of Example 2, the possible world ⟨{a, b}, ∅⟩ is the AAF representing the scenario where only a and b occur, while the possible world ⟨{a, b, c}, {δ1, δ2}⟩ is the AAF representing the scenario where all the arguments and defeats occur.

In [8] it was shown that a PrAF admits a unique probability distribution over the set of possible worlds, which assigns a probability value to each possible world coherently with the probabilities of arguments and defeats, and allows users to derive probabilistic conclusions from the PrAF. The fact that a PrAF admits a unique probability distribution over the set of possible worlds follows from the independence assumption, that is: arguments are viewed as pairwise independent probabilistic events, and defeats are viewed as probabilistic events conditioned by the occurrence of the arguments they relate but independent from any other event. As a matter of fact, the independence assumption is widely used when event correlations are unknown or hard to derive exactly. For instance, in Example 2, the three events associated with arguments can reasonably be assumed independent from one another, and the two defeats can be deemed independent from one another too. In this work, we rely on the independence assumption as done in [8], so that, as explained above, a unique probability distribution over the set of possible worlds is defined.

Once it is shown that a PrAF admits a unique probability distribution over the set of possible worlds, the probability Pr^sem_F(S) is naturally defined as the sum of the probabilities of the possible worlds where the set S of arguments is an extension according to the semantics sem. Unfortunately, as pointed out in [13,14], computing Pr^sem_F(S) is intractable (actually, FP^{#P}-complete) for the popular semantics complete, grounded, preferred and ideal-set. Indeed, for these semantics, the state-of-the-art approach is that of estimating Pr^sem_F(S) by a Monte-Carlo simulation approach, as proposed in [8], since the complexity of exactly computing Pr^sem_F(S) is prohibitive.

1.1. Main contributions

In this paper, we propose a new Monte-Carlo-based simulation technique for estimating the probability Pr^sem_F(S), where sem is one of the following semantics: complete, grounded, preferred, ideal-set. In more detail, our strategy relies on the fact that an extension is complete, grounded, preferred or ideal-set only if it is both conflict-free and admissible, and on the fact that the probability Pr^cf_F(S) (resp., Pr^ad_F(S)) that a set of arguments is conflict-free (resp., admissible) can be computed in polynomial time [13]. Starting from this, we devised a strategy for estimating Pr^sem_F(S) that first computes an estimate of Pr^{sem|E_cf(S)}_F(S) (resp., Pr^{sem|E_ad(S)}_F(S)), that is, the conditional probability that S is an extension according to sem given that S is conflict-free (resp., admissible), and then provides an estimate of Pr^sem_F(S) by multiplying the estimate of Pr^{sem|E_cf(S)}_F(S) (resp., Pr^{sem|E_ad(S)}_F(S)) by Pr^cf_F(S) (resp., Pr^ad_F(S)).

Hence, differently from the approach proposed in [8], where the aim of the Monte-Carlo simulation is to estimate Pr^sem_F(S), in our approach the Monte-Carlo simulation is exploited to estimate Pr^{sem|E_cf(S)}_F(S) or Pr^{sem|E_ad(S)}_F(S). This implies that, instead of considering the whole set of possible worlds of F as sample space (as done in [8]), we work over a reduced sample space: the subset of the possible worlds such that S is conflict-free, or such that S is an admissible extension. We show that our strategy is correct and that it results in considering fewer samples than the approach in [8]. Finally, we experimentally validate the proposed approach, showing that, both when we estimate Pr^{sem|E_cf(S)}_F(S) and when we estimate Pr^{sem|E_ad(S)}_F(S), our approach outperforms the approach proposed in [8], and that, in most practical cases, estimating Pr^{sem|E_ad(S)}_F(S) is faster than estimating Pr^{sem|E_cf(S)}_F(S).

2. Preliminaries

In this section, we briefly overview Dung's abstract argumentation framework and its probabilistic extension introduced in [8].

2.1. Abstract argumentation

An abstract argumentation framework [2] (AAF) is a pair ⟨A, D⟩, where A is a finite set whose elements are referred to as arguments, and D ⊆ A × A is a binary relation over A whose elements are referred to as defeats (or attacks). An argument is an abstract entity whose role is determined by its relationships with other arguments. Given an AAF A, we also refer to the set of its arguments and the set of its defeats as Arg(A) and Def(A), respectively.

Given arguments a, b ∈ A, we say that a defeats b iff (a, b) ∈ D. Similarly, a set S ⊆ A defeats an argument b ∈ A iff there is a ∈ S such that a defeats b; and an argument a defeats S iff there is b ∈ S such that a defeats b. A set S ⊆ A of arguments is said to be conflict-free if there are no a, b ∈ S such that a defeats b. An argument a is said to be acceptable w.r.t. S ⊆ A iff ∀b ∈ A such that b defeats a, there is c ∈ S such that c defeats b.

Several semantics for AAFs have been proposed to identify "reasonable" sets of arguments, called extensions. We consider the following well-known semantics [2]: admissible (ad), complete (co), grounded (gr), preferred (pr) and ideal-set (id). A set S ⊆ A of arguments is

– an admissible extension iff S is conflict-free and all its arguments are acceptable w.r.t. S;
– a complete extension iff S is admissible and S contains all the arguments that are acceptable w.r.t. S;
– a grounded extension iff S is a minimal (w.r.t. ⊆) complete set of arguments;
– a preferred extension iff S is a maximal (w.r.t. ⊆) admissible set of arguments;
– an ideal-set extension iff S is an admissible set of arguments contained in every preferred extension of A.

Example 3. Consider the AAF ⟨A, D⟩ of Example 1, where the set A of arguments is {a, b, c} and the set D of defeats is {δ1 = (b, a), δ2 = (c, a)}. As S = {b} is conflict-free and b is acceptable w.r.t. S, it is the case that S is admissible. It is easy to see that the sets ∅, S1 = {c} and S2 = {b, c} are admissible extensions, while the set S3 = {a} is not admissible, since S3 counterattacks neither the attack from b to a nor the attack from c to a. Neither S nor S1 is complete, since they do not contain all the acceptable arguments: c is acceptable w.r.t. S and b is acceptable w.r.t. S1. Since S2 is an admissible extension and contains all the acceptable arguments, it is a complete extension, and it is also a preferred, grounded, and ideal-set extension since it is the unique complete extension. □

Given an AAF A, a set S ⊆ Arg(A) of arguments, and a semantics sem ∈ {ad, co, gr, pr, id}, we define the function ext(A, sem, S), which returns true if S is an extension of A according to sem, and false otherwise.

2.2. Probabilistic abstract argumentation

We now review the probabilistic abstract argumentation framework (PrAF) proposed in [8].

Definition 1 (PrAF). A PrAF is a tuple ⟨A, P_A, D, P_D⟩ where ⟨A, D⟩ is an AAF, and P_A and P_D are, respectively, functions assigning a non-zero probability value to each argument in A and defeat in D, that is, P_A : A → (0, 1] and P_D : D → (0, 1]. (Assigning probability equal to 0 to arguments/defeats would be useless.)


Fig. 1. Running example.

Basically, the value assigned by P_A to an argument a represents the probability that a actually occurs, whereas the value assigned by P_D to a defeat (a, b) represents the conditional probability that a defeats b given that both a and b occur. The meaning of a PrAF is given in terms of possible worlds, each of them representing a scenario that may occur in reality. Given a PrAF F, a possible world is modeled by an AAF which is derived from F by considering only a subset of its arguments and defeats. More formally, given a PrAF F = ⟨A, P_A, D, P_D⟩, a possible world w of F is an AAF ⟨A', D'⟩ such that A' ⊆ A and D' ⊆ D ∩ (A' × A'). The set of the possible worlds of F will be denoted as pw(F).

Example 4. As a running example, consider the PrAF F = ⟨A, P_A, D, P_D⟩ of Example 2, where A = {a, b, c}, D = {δ1 = (b, a), δ2 = (c, a)}, P_A(a) = 0.8, P_A(b) = 0.3, P_A(c) = 0.5, P_D(δ1) = 0.7, and P_D(δ2) = 0.2. A graphical representation of F is shown in Fig. 1, where (i) each node of the graph represents an argument, (ii) each edge represents a defeat, and (iii) the probability values are reported near the nodes and edges. The set pw(F) consists of the following possible worlds:

w1 = ⟨∅, ∅⟩        w2 = ⟨{a}, ∅⟩        w3 = ⟨{b}, ∅⟩        w4 = ⟨{c}, ∅⟩        w5 = ⟨{a, b}, ∅⟩
w6 = ⟨{a, c}, ∅⟩   w7 = ⟨{b, c}, ∅⟩     w8 = ⟨A, ∅⟩          w9 = ⟨{a, b}, {δ1}⟩   w10 = ⟨{a, c}, {δ2}⟩
w11 = ⟨A, {δ1}⟩    w12 = ⟨A, {δ2}⟩      w13 = ⟨A, {δ1, δ2}⟩   □

An interpretation for a PrAF F = ⟨A, P_A, D, P_D⟩ is a probability distribution function I over the set pw(F) of the possible worlds. Assuming that arguments represent pairwise independent events, and that each defeat represents an event conditioned by the occurrence of its argument events but independent from any other event, the interpretation for the PrAF F = ⟨A, P_A, D, P_D⟩ is as follows. Each w ∈ pw(F) is assigned by I the probability:

I(w) = ∏_{a ∈ Arg(w)} P_A(a) · ∏_{a ∈ A\Arg(w)} (1 − P_A(a)) · ∏_{δ ∈ Def(w)} P_D(δ) · ∏_{δ ∈ D(w)\Def(w)} (1 − P_D(δ))

where D(w) is the set of defeats that may appear in the possible world w, that is, D(w) = D ∩ (Arg(w) × Arg(w)). Hence, the probability of a possible world w is given by the product of four contributions: (i) the product of the probabilities of the arguments belonging to w; (ii) the product of the one's complements of the probabilities of the arguments that do not appear in w; (iii) the product of the conditional probabilities of the defeats in w (recall that a defeat δ = (a, b) may appear in w only if both a and b are in w); and (iv) the product of the one's complements of the conditional probabilities of the defeats that may appear in w but do not.

Example 5. Continuing our running example, the interpretation I for F is as follows:
I(w1) = (1 − P_A(a)) · (1 − P_A(b)) · (1 − P_A(c)) = 0.07;
I(w2) = P_A(a) · (1 − P_A(b)) · (1 − P_A(c)) = 0.28;
I(w3) = (1 − P_A(a)) · P_A(b) · (1 − P_A(c)) = 0.03;
I(w4) = (1 − P_A(a)) · (1 − P_A(b)) · P_A(c) = 0.07;
I(w5) = P_A(a) · P_A(b) · (1 − P_A(c)) · (1 − P_D(δ1)) = 0.036;
I(w6) = P_A(a) · (1 − P_A(b)) · P_A(c) · (1 − P_D(δ2)) = 0.224;
I(w7) = (1 − P_A(a)) · P_A(b) · P_A(c) = 0.03;
I(w8) = P_A(a) · P_A(b) · P_A(c) · (1 − P_D(δ1)) · (1 − P_D(δ2)) = 0.0288;
I(w9) = P_A(a) · P_A(b) · (1 − P_A(c)) · P_D(δ1) = 0.084;
I(w10) = P_A(a) · (1 − P_A(b)) · P_A(c) · P_D(δ2) = 0.056;
I(w11) = P_A(a) · P_A(b) · P_A(c) · P_D(δ1) · (1 − P_D(δ2)) = 0.0672;
I(w12) = P_A(a) · P_A(b) · P_A(c) · (1 − P_D(δ1)) · P_D(δ2) = 0.0072;
I(w13) = P_A(a) · P_A(b) · P_A(c) · P_D(δ1) · P_D(δ2) = 0.0168. □

The probability Pr^sem_F(S) that a set S of arguments is an extension according to a given semantics sem is defined as the sum of the probabilities of the possible worlds w for which S is an extension according to sem (i.e., ext(w, sem, S) = true).

Definition 2 (Pr^sem_F(S)). Given a PrAF F, a set S, and a semantics sem, the probability Pr^sem_F(S) that S is an extension according to sem is Pr^sem_F(S) = Σ_{w ∈ pw(F) ∧ ext(w, sem, S)} I(w).
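To make Definition 2 concrete, the following Python sketch (ours, not part of the original paper) enumerates the possible worlds of the running-example PrAF, computes I(w) for each world as in the formula above, and sums the probabilities of the worlds where a given set S is admissible; it reproduces the value Pr^ad_F({b}) = 0.3 reported in Example 6 below. The dictionaries P_A and P_D simply transcribe the probabilities of Example 2.

from itertools import chain, combinations

# Running-example PrAF (Example 2)
P_A = {'a': 0.8, 'b': 0.3, 'c': 0.5}
P_D = {('b', 'a'): 0.7, ('c', 'a'): 0.2}

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def interpretation(args, defs):
    """I(w) for the possible world w = <args, defs>."""
    p = 1.0
    for a in P_A:
        p *= P_A[a] if a in args else 1.0 - P_A[a]
    for (x, y), pd in P_D.items():
        if x in args and y in args:                  # the defeat may appear in w
            p *= pd if (x, y) in defs else 1.0 - pd
    return p

def is_admissible(S, args, defs):
    """ext(w, ad, S): S is an admissible extension of the AAF <args, defs>."""
    if not S <= args:
        return False
    if any(a in S and b in S for (a, b) in defs):    # conflict-freeness
        return False
    return all(any((c, a) in defs for c in S)        # every attacker of S
               for (a, b) in defs if b in S)         # is counter-attacked by S

def probability(check, S):
    """Sum I(w) over all possible worlds w where check(S, w) holds (Definition 2)."""
    total = 0.0
    for args in map(set, powerset(P_A)):
        candidates = [d for d in P_D if d[0] in args and d[1] in args]
        for defs in map(set, powerset(candidates)):
            if check(S, args, defs):
                total += interpretation(args, defs)
    return total

print(round(probability(is_admissible, {'b'}), 6))   # 0.3, as in Example 6

This brute-force enumeration is exponential in the size of the framework and is meant only to illustrate the semantics; the rest of the paper is about avoiding exactly this computation.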

Example 6. In our running example, the probabilities that the sets S = {b}, S1 = {c}, S2 = {b, c} and S3 = {a} are admissible are as follows:


Pr^ad_F(S) = I(w3) + I(w5) + I(w7) + I(w8) + I(w9) + I(w11) + I(w12) + I(w13) = 0.3
Pr^ad_F(S1) = I(w4) + I(w6) + I(w7) + I(w8) + I(w10) + I(w11) + I(w12) + I(w13) = 0.5
Pr^ad_F(S2) = I(w7) + I(w8) + I(w11) + I(w12) + I(w13) = 0.15
Pr^ad_F(S3) = I(w2) = 0.28 □

In the following we will also refer to the probability that a set S of arguments is conflict-free, that is, the sum of the probabilities of the possible worlds w wherein S is conflict-free. Let cf(w, S) be a function returning true iff S is conflict-free in w. Though cf is not a semantics, with a little abuse of notation we denote as Pr^cf_F(S) the probability that S is conflict-free, that is, Pr^cf_F(S) = Σ_{w ∈ pw(F) ∧ cf(w, S)} I(w).

3. Estimating extension probability using Monte-Carlo simulation

We first describe the state-of-the-art approach for estimating the probability Pr^sem_F(S) that a set S of arguments is an extension according to a semantics sem; then we introduce two algorithms that, as we show in Section 4, significantly speed up the estimation of Pr^sem_F(S). Throughout this section, as well as in the rest of the paper, we assume that a PrAF F = ⟨A, P_A, D, P_D⟩, a set S ⊆ A of arguments, and a semantics sem are given.

3.1. The state-of-the-art approach

In this section, we briefly review the Monte-Carlo simulation approach proposed in [8], which is implemented by Algorithm 1. Algorithm 1 estimates the probability Pr^sem_F(S) by repeatedly sampling the set pw(F) of possible worlds (i.e., the set of AAFs that can be induced by F). It takes as input F, S, sem, an error level ε, and a confidence level 1 − α, and it returns an estimate est of Pr^sem_F(S) such that Pr^sem_F(S) lies in the interval est ± ε with confidence level 1 − α.

Algorithm 1 works as follows. It samples n AAFs from the set pw(F) (Lines 2–22) and, for each of them, checks whether S is an extension according to sem (Line 16); if this is the case, the variable γ, which keeps track of the number of sampled AAFs for which S is an extension according to sem, is incremented by 1. At each iteration, an AAF is generated by randomly selecting each argument a ∈ A according to its probability P_A(a) (Lines 4–9), and then randomly selecting each defeat whose arguments both occur in the set of generated arguments, according to its probability (Lines 10–15). The number n' of AAFs to be sampled to achieve the required error level ε with confidence level 1 − α is determined by exploiting the Agresti–Coull interval [15]. In particular, according to [15], the estimated value p̂ of Pr^sem_F(S) after γ successes in n samples is

p̂ = (γ + z²_{1−α/2}/2) / (n + z²_{1−α/2})   (Line 20),

where z_{1−α/2} is the 1 − α/2 quantile of the normal distribution, and the number of samples ensuring that the error level is ε with confidence level 1 − α is

n' = z²_{1−α/2} · p̂ · (1 − p̂) / ε² − z²_{1−α/2}   (Line 21).

Thus, Algorithm 1 stops after n > n' samples have been generated (Line 22), and returns the proportion γ/n of successes in the number n of generated samples.
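As an illustration of the stopping rule just described, here is a small Python helper (ours, not taken from [8] or [15]; the function name and the example numbers are assumptions) computing the Agresti–Coull point estimate p̂ and the target sample count n' from a current count of γ successes out of n samples:

from statistics import NormalDist

def agresti_coull(gamma, n, eps, alpha=0.05):
    """Point estimate p_hat and target sample count n' (Lines 20-21 of Algorithm 1);
    Algorithm 1 keeps sampling until n exceeds n'."""
    z = NormalDist().inv_cdf(1 - alpha / 2)        # z_{1-alpha/2}, about 1.96 for alpha = 0.05
    p_hat = (gamma + z ** 2 / 2) / (n + z ** 2)
    n_target = z ** 2 * p_hat * (1 - p_hat) / eps ** 2 - z ** 2
    return p_hat, n_target

# For instance, after 5000 samples with 2400 successes and eps = 0.005,
# p_hat is about 0.48 and n' is about 38350, so sampling would continue.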

3.2. Estimating Pr^sem_F(S) by sampling AAFs wherein S is conflict-free

In this section, we introduce a Monte-Carlo approach for estimating Pr^sem_F(S) whose main idea is that of sampling only the AAFs of pw(F) wherein S is conflict-free. In fact, our algorithm estimates Pr^sem_F(S) by first computing the probability Pr^cf_F(S) that S is conflict-free and then estimating the conditional probability Pr^{sem|E_cf(S)}_F(S) that S is an extension according to sem given that S is conflict-free.

Before providing our results, we introduce some notation that will also be used in Section 3.3. In the following, each argument a ∈ A is viewed as a (basic) probabilistic event x_a, which is independent from the events x_b associated with the other arguments b ∈ A (with b ≠ a). The probability of event x_a, denoted as Pr(x_a), is given by P_A(a). An event is said to be complex if it is obtained from basic events by using the operators ∧ (conjunction), ∨ (disjunction), or ¬ (negation, or complement). For instance, (x_a ∧ x_b) represents the (complex) event that the (basic) events x_a and x_b simultaneously occur, whereas (x_a ∨ x_b) is the event that x_a or x_b occurs (note that x_a and x_b may occur together, that is, they may be non-exclusive events). Similarly to what was said for arguments, each defeat δ = (a, b) ∈ D is viewed as the probabilistic event x_δ, and the (conditional) probability Pr(x_δ | (x_a ∧ x_b)) of the event x_δ given that x_a ∧ x_b occurred is given by P_D(δ). Therefore, the probability of the event x_a ∧ x_b ∧ x_δ is Pr(x_a ∧ x_b ∧ x_δ) = Pr(x_a ∧ x_b) · Pr(x_δ | (x_a ∧ x_b)) = Pr(x_a) · Pr(x_b) · Pr(x_δ | (x_a ∧ x_b)) = P_A(a) · P_A(b) · P_D(δ). Observe that the event x_δ is independent from the events associated with the other defeats.

Let E_w be the event that the possible world w ∈ pw(F) occurs, that is,

E_w = ⋀_{a ∈ Arg(w)} x_a ∧ ⋀_{a ∈ A\Arg(w)} ¬x_a ∧ ⋀_{δ ∈ Def(w)} x_δ ∧ ⋀_{δ ∈ D(w)\Def(w)} ¬x_δ    (1)

Note that the probability Pr(E_w) of event E_w is I(w).


Algorithm 1 State-of-the-art algorithm for approximating Pr^sem_F(S).
Input: A PrAF F = ⟨A, P_A, D, P_D⟩; a set S ⊆ A; a semantics sem; an error level ε; a confidence level 1 − α
Output: An estimate est of Pr^sem_F(S) such that Pr^sem_F(S) ∈ [est − ε, est + ε] with confidence level 1 − α
1:  γ = n = 0
2:  repeat
3:    Arg = Def = ∅
4:    for all a ∈ A do
5:      Generate a random number r ∈ [0, 1]
6:      if r ≤ P_A(a) then
7:        Arg = Arg ∪ {a}
8:      end if
9:    end for
10:   for all ⟨a, b⟩ ∈ D s.t. a, b ∈ Arg do
11:     Generate a random number r ∈ [0, 1]
12:     if r ≤ P_D(⟨a, b⟩) then
13:       Def = Def ∪ {⟨a, b⟩}
14:     end if
15:   end for
16:   if ext(⟨Arg, Def⟩, sem, S) then
17:     γ = γ + 1
18:   end if
19:   n = n + 1
20:   p̂ = (γ + z²_{1−α/2}/2) / (n + z²_{1−α/2})
21:   n' = z²_{1−α/2} · p̂ · (1 − p̂) / ε² − z²_{1−α/2}
22: until n > n'
23: return γ/n

Let E_cf(S) be the event "the set S is conflict-free". As shown in [13,14], E_cf(S) can be expressed as follows:

E_cf(S) = ⋀_{a ∈ S} x_a ∧ ⋀_{δ=(a,b) ∈ D ∧ a ∈ S ∧ b ∈ S} ¬x_δ

The rationale of the expression above is that the event that S is conflict-free occurs iff the following events simultaneously occur: (i) the event that all of the arguments in S occur (this is represented by the first conjunct of E_cf(S)); and (ii) the event that no defeat (a, b) with a, b ∈ S occurs (this is represented by the second conjunct of E_cf(S)).

To compute the probability Pr^cf_F(S) of E_cf(S), our algorithm exploits the following fact, which entails that Pr^cf_F(S) can be computed in O(|S|²).

Fact 1 (Pr^cf_F(S) [13,14]). The probability that a set S of arguments is conflict-free is as follows:

Pr^cf_F(S) = ∏_{a ∈ S} P_A(a) · ∏_{(a,b) ∈ D ∧ a ∈ S ∧ b ∈ S} (1 − P_D((a, b)))

Since, for the considered semantics (i.e., complete, grounded, preferred, ideal-set), S is an extension according to sem only if S is conflict-free, it holds that Pr^sem_F(S) = Pr^{sem|E_cf(S)}_F(S) · Pr^cf_F(S). Hence, Pr^sem_F(S) can be estimated by first determining the exact value of Pr^cf_F(S) in polynomial time (Fact 1), and then estimating Pr^{sem|E_cf(S)}_F(S) by sampling the AAFs wherein E_cf(S) occurs (that is, AAFs wherein S is conflict-free). However, to accomplish this, we need to know the values of the probabilities of the argument events and defeat events conditioned on the event E_cf(S).

Given an argument a ∈ A and a defeat δ = ⟨a, b⟩ ∈ D, we denote as Pr(x_a | E_cf(S)) (resp., Pr(x_δ | E_cf(S))) the probability that event x_a (resp., x_δ) occurs given that E_cf(S) occurs. The following lemma states that Pr(x_a | E_cf(S)) coincides with P_A(a) if a is not in S, otherwise Pr(x_a | E_cf(S)) = 1. Moreover, it states that for δ = ⟨a, b⟩ it is the case that Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) coincides with P_D(δ) if a ∉ S or b ∉ S, otherwise Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) is zero. As we show shortly, our algorithm samples the AAFs wherein E_cf(S) occurs by randomly selecting arguments and defeats according to the probabilities given in the following lemma. The proofs of all the lemmas and theorems stated in the paper are given in the appendix.
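Fact 1 translates directly into code. The following Python sketch (ours, not from the paper) computes Pr^cf_F(S), taking the probability tables as dictionaries like those used in the earlier enumeration sketch:

def pr_conflict_free(S, P_A, P_D):
    """Pr^cf_F(S) as in Fact 1: every argument of S occurs and
    no defeat between two arguments of S occurs."""
    p = 1.0
    for a in S:
        p *= P_A[a]
    for (x, y), pd in P_D.items():
        if x in S and y in S:
            p *= 1.0 - pd
    return p

# Running example: pr_conflict_free({'b', 'c'}, P_A, P_D) = 0.3 * 0.5 = 0.15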


Algorithm 2 Estimating Pr^sem_F(S) by sampling AAFs wherein S is conflict-free.
Input: A PrAF F = ⟨A, P_A, D, P_D⟩; a set S ⊆ A; a semantics sem; an error level ε; a confidence level 1 − α
Output: An estimate est of Pr^sem_F(S) such that Pr^sem_F(S) ∈ [est − ε, est + ε] with confidence level 1 − α
1:  Compute Pr^cf_F(S) as indicated in Fact 1
2:  γ = n = 0
3:  repeat
4:    Arg = S
5:    Def = ∅
6:    for all a ∈ A \ S do
7:      Generate a random number r ∈ [0, 1]
8:      if r ≤ Pr(x_a | E_cf(S)) then
9:        Arg = Arg ∪ {a}
10:     end if
11:   end for
12:   for all δ = ⟨a, b⟩ ∈ D such that a, b ∈ Arg do
13:     if a ∉ S ∨ b ∉ S then
14:       Generate a random number r ∈ [0, 1]
15:       if r ≤ Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) then
16:         Def = Def ∪ {⟨a, b⟩}
17:       end if
18:     end if
19:   end for
20:   if ext(⟨Arg, Def⟩, sem, S) then
21:     γ = γ + 1
22:   end if
23:   n = n + 1
24:   p̂ = (γ + z²_{1−α/2}/2) / (n + z²_{1−α/2})
25:   n' = z²_{1−α/2} · p̂ · (1 − p̂) · (Pr^cf_F(S))² / ε² − z²_{1−α/2}
26: until n > n'
27: return γ/n · Pr^cf_F(S)

Lemma 1. Given a PrAF F = ⟨A, P_A, D, P_D⟩ and a set S ⊆ A of arguments, it holds that:

1) ∀a ∈ S, Pr(x_a | E_cf(S)) = 1;
2) ∀a ∈ A \ S, Pr(x_a | E_cf(S)) = P_A(a);
3) ∀δ = ⟨a, b⟩ ∈ D such that a, b ∈ S, Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) = 0;
4) ∀δ = ⟨a, b⟩ ∈ D \ {⟨a, b⟩ ∈ D s.t. a, b ∈ S}, Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) = P_D(δ).
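For illustration, here is a minimal Python rendering (ours; the function name is an assumption) of the conditioned world generator that Lemma 1 justifies, using the same P_A/P_D dictionaries as in the earlier sketches:

import random

def sample_world_given_conflict_free(S, P_A, P_D):
    """Draw one possible world conditioned on E_cf(S), following Lemma 1
    (a sketch of the sampling step of Algorithm 2, Lines 4-19)."""
    args = set(S)                                     # case 1: Pr(x_a | E_cf(S)) = 1 for a in S
    for a in P_A:
        if a not in S and random.random() <= P_A[a]:  # case 2: probability P_A(a) otherwise
            args.add(a)
    defs = set()
    for (x, y), pd in P_D.items():
        if x in S and y in S:
            continue                                  # case 3: defeats inside S never occur
        if x in args and y in args and random.random() <= pd:
            defs.add((x, y))                          # case 4: probability P_D(delta)
    return args, defs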

Algorithm 2 estimates Pr^sem_F(S), with error level ε and confidence level 1 − α, by first computing Pr^cf_F(S) (Line 1), next computing an estimate of Pr^{sem|E_cf(S)}_F(S) (Lines 3–26), and finally returning this estimate multiplied by Pr^cf_F(S) (Line 27).

The core of Algorithm 2 computes the estimate of Pr^{sem|E_cf(S)}_F(S) by exploiting the results of Lemma 1 as follows. At each iteration, it generates an AAF ⟨Arg, Def⟩ by first adding all the a ∈ S to Arg, since Pr(x_a | E_cf(S)) = 1 (Line 4). Then, the arguments a ∈ A \ S are randomly added to Arg according to their probability Pr(x_a | E_cf(S)) = P_A(a) (Lines 6–11). Moreover, as every generated AAF must not contain any defeat δ = ⟨a, b⟩ with a, b ∈ S (due to Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) = 0), Algorithm 2 randomly adds to Def only the defeats δ = ⟨a, b⟩ such that a or b is in the set Arg \ S, according to the probability Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) = P_D(δ) (Lines 12–19). After such an AAF has been generated, the variable γ is incremented by 1 if S is an extension according to sem (Line 20).

Algorithm 2 takes as input the error level ε and the confidence level 1 − α to get Pr^sem_F(S) lying within ± ε of the returned estimate with confidence level 1 − α. However, note that the core of Algorithm 2 does not estimate Pr^sem_F(S) directly; it estimates Pr^{sem|E_cf(S)}_F(S). Hence, we need to determine the error level ε' to be taken into account so that Pr^{sem|E_cf(S)}_F(S) lies within ± ε' of its estimate, which in turn entails that Pr^sem_F(S) lies within ± ε of the returned value. Since Pr^sem_F(S) = Pr^{sem|E_cf(S)}_F(S) · Pr^cf_F(S), an estimate of Pr^{sem|E_cf(S)}_F(S) with error ε' corresponds to an estimate of Pr^sem_F(S) with error ε' · Pr^cf_F(S). Thus, ε' = ε / Pr^cf_F(S). Furthermore, we need to determine the number n' of AAFs to be sampled to ensure that the error level of the estimate of Pr^{sem|E_cf(S)}_F(S) is ε' with confidence level 1 − α. According to the Agresti–Coull interval [15], n' = z²_{1−α/2} · p̂ · (1 − p̂) / (ε')² − z²_{1−α/2}. Since ε' = ε / Pr^cf_F(S), we obtain n' = z²_{1−α/2} · p̂ · (1 − p̂) · (Pr^cf_F(S))² / ε² − z²_{1−α/2} (Line 25).
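The effect of the rescaled error level can be seen by adapting the earlier Agresti–Coull helper (again a sketch with our own names, not the authors' code): when the Monte-Carlo loop estimates a conditional probability and the result is multiplied by a known factor pr_cond (Pr^cf_F(S) for Algorithm 2, and Pr^ad_F(S) for Algorithm 3 in Section 3.3), the target sample count shrinks roughly by the factor pr_cond².

from statistics import NormalDist

def conditioned_target(gamma, n, eps, pr_cond, alpha=0.05):
    """Target sample count when the loop estimates a conditional probability whose
    estimate is then multiplied by pr_cond (Line 25 of Algorithm 2): the admissible
    error becomes eps' = eps / pr_cond."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p_hat = (gamma + z ** 2 / 2) / (n + z ** 2)
    return z ** 2 * p_hat * (1 - p_hat) * pr_cond ** 2 / eps ** 2 - z ** 2

# With gamma = 2400, n = 5000, eps = 0.005 and pr_cond = 0.8, the target is roughly
# 0.64 times the unconditioned one computed by agresti_coull above.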


The following theorem states that Algorithm 2 is sound.

Theorem 1. Let ε be an error level and 1 − α a confidence level. The value returned by Algorithm 2 is an estimate est such that Pr^sem_F(S) ∈ [est − ε, est + ε] with confidence level 1 − α.

3.3. Estimating Pr^sem_F(S) by sampling AAFs wherein S is admissible

In this section, we introduce a Monte-Carlo approach for estimating Pr^sem_F(S) where only the AAFs wherein S is an extension according to the admissible semantics are sampled.

Let E_ad(S) be the event "the set S is an admissible extension". As shown in [13,14], E_ad(S) can be expressed as follows:

E_ad(S) = E_cf(S) ∧ ⋀_{d ∈ A\S} ( e_1(S, d) ∨ e_2(S, d) ∨ e_3(S, d) )

where:

• e_1(S, d) = ¬x_d
• e_2(S, d) = x_d ∧ ⋀_{δ=(d,b) ∈ D ∧ b ∈ S} ¬x_δ
• e_3(S, d) = x_d ∧ ⋁_{δ=(d,b) ∈ D ∧ b ∈ S} x_δ ∧ ⋁_{δ=(a,d) ∈ D ∧ a ∈ S} x_δ

The rationale of the expression above is that E_ad(S) occurs iff E_cf(S) occurs in conjunction with the event that, for each argument d in A \ S, exactly one of the following mutually exclusive events occurs:

• e_1(S, d): d does not occur; or
• e_2(S, d): d occurs and d does not defeat S (i.e., no defeat ⟨d, b⟩ such that b ∈ S occurs); or
• e_3(S, d): d occurs, d defeats S, and S defeats d. That is, d occurs, there is at least one argument b ∈ S such that ⟨d, b⟩ occurs, and there is at least one argument a ∈ S such that ⟨a, d⟩ occurs.

We will show that Pr^sem_F(S) can be estimated by first computing in polynomial time the probability Pr^ad_F(S) that E_ad(S) occurs and then estimating the probability Pr^{sem|E_ad(S)}_F(S) that S is an extension according to sem given that E_ad(S) occurs. The probability Pr^ad_F(S) can be computed in polynomial time by exploiting the following fact, which entails that Pr^ad_F(S) can be computed in time O(|S| · |A|).

Fact 2 (Pr^ad_F(S) [13,14]). The probability that a set S of arguments is admissible is as follows:

Pr^ad_F(S) = Pr^cf_F(S) · ∏_{d ∈ A\S} ( P_1(S, d) + P_2(S, d) + P_3(S, d) )

where:

• P_1(S, d) = Pr(e_1(S, d)) = 1 − P_A(d)
• P_2(S, d) = Pr(e_2(S, d)) = P_A(d) · ∏_{(d,b) ∈ D ∧ b ∈ S} (1 − P_D((d, b)))
• P_3(S, d) = Pr(e_3(S, d)) = P_A(d) · (1 − ∏_{(d,b) ∈ D ∧ b ∈ S} (1 − P_D((d, b)))) · (1 − ∏_{(a,d) ∈ D ∧ a ∈ S} (1 − P_D((a, d))))
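Fact 2 can likewise be turned into a few lines of code. The sketch below (ours; it reuses pr_conflict_free from the sketch after Fact 1 and the P_A/P_D dictionaries of the running example) computes Pr^ad_F(S) and reproduces Pr^ad_F({b}) = 0.3 from Example 6:

def pr_admissible(S, P_A, P_D, A):
    """Pr^ad_F(S) as in Fact 2; A is the full set of arguments of the PrAF."""
    p = pr_conflict_free(S, P_A, P_D)
    for d in A - S:
        no_attack_from_d = 1.0      # prob. that d launches no defeat against S
        no_attack_on_d = 1.0        # prob. that S launches no defeat against d
        for (x, y), pd in P_D.items():
            if x == d and y in S:
                no_attack_from_d *= 1.0 - pd
            if x in S and y == d:
                no_attack_on_d *= 1.0 - pd
        p1 = 1.0 - P_A[d]                                                 # e_1(S, d)
        p2 = P_A[d] * no_attack_from_d                                    # e_2(S, d)
        p3 = P_A[d] * (1.0 - no_attack_from_d) * (1.0 - no_attack_on_d)   # e_3(S, d)
        p *= p1 + p2 + p3
    return p

# Running example: pr_admissible({'b'}, P_A, P_D, {'a', 'b', 'c'}) = 0.3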

As for the case of sampling only AAFs wherein S is conflict-free, since for all the semantics sem we consider S is an extension according to sem only if S is an admissible extension, it holds that Pr^sem_F(S) = Pr^{sem|E_ad(S)}_F(S) · Pr^ad_F(S).

In order to sample only AAFs wherein S is an admissible extension, we need to know the probability of the argument events given that E_ad(S) occurs. This probability Pr(x_a | E_ad(S)) is given by the first two items of Lemma 2. The lemma also states the following probabilities that, as will become clearer in the following, are exploited to estimate Pr^{sem|E_ad(S)}_F(S):

(i) the probability Pr(x_δ | E_ad(S)) that x_δ occurs given that E_ad(S) occurs, for each defeat δ = ⟨a, b⟩ such that both a and b belong to S;
(ii) the probability Pr(e_3(S, a) | E_ad(S) ∧ x_a) that event e_3(S, a) occurs given that E_ad(S) ∧ x_a occurs, for each argument a not in S;
(iii) the probability Pr(x_δ | E_ad(S) ∧ e_2(S, b) ∧ x_a ∧ x_b) that x_δ occurs given that E_ad(S) ∧ e_2(S, b) ∧ x_a ∧ x_b occurs, for each defeat δ = ⟨a, b⟩ such that a belongs to S while b does not, or both a and b are not in S.

Lemma 2. Given a PrAF F = ⟨A, P_A, D, P_D⟩ and a set S ⊆ A of arguments, it holds that:

1) ∀a ∈ S, Pr(x_a | E_ad(S)) = 1;
2) ∀a ∈ A \ S, Pr(x_a | E_ad(S)) = (P_2(S, a) + P_3(S, a)) / (P_1(S, a) + P_2(S, a) + P_3(S, a));
3) ∀δ = ⟨a, b⟩ ∈ D s.t. a, b ∈ S, Pr(x_δ | E_ad(S)) = 0;
4) ∀δ = ⟨a, b⟩ ∈ D s.t. a, b ∈ A \ S, Pr(x_δ | x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ);
5) ∀a ∈ A \ S, Pr(e_3(S, a) | E_ad(S) ∧ x_a) = P_3(S, a) / (P_2(S, a) + P_3(S, a));
6) ∀δ = ⟨a, b⟩ ∈ D s.t. a ∈ S ∧ b ∈ A \ S, Pr(x_δ | x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ);

where P_1(S, a), P_2(S, a), and P_3(S, a) are the probabilities of events e_1(S, a), e_2(S, a), and e_3(S, a) given in Fact 2.


Algorithm 3 Estimating Pr^sem_F(S) by sampling AAFs wherein S is admissible.
Input: A PrAF F = ⟨A, P_A, D, P_D⟩; a set S ⊆ A; a semantics sem; an error level ε; a confidence level 1 − α
Output: An estimate est of Pr^sem_F(S) such that Pr^sem_F(S) ∈ [est − ε, est + ε] with confidence level 1 − α
1:  Compute Pr^ad_F(S) as in Fact 2
2:  γ = n = 0
3:  repeat
4:    Arg = S
5:    Def = ∅
6:    defeatS = ∅
7:    for all a ∈ A \ S do
8:      Generate a random number r ∈ [0, 1]
9:      if r ≤ Pr(x_a | E_ad(S)) then
10:       Arg = Arg ∪ {a}
11:       if r ≤ Pr(e_3(S, a) | E_ad(S) ∧ x_a) then
12:         Def = Def ∪ generateAtLeastOneDefeatAndDefend(F, ⟨Arg, Def⟩, S, a)
13:         defeatS = defeatS ∪ {a}
14:       end if
15:     end if
16:   end for
17:   for all δ = ⟨a, b⟩ ∈ D s.t. (a, b ∈ Arg \ S) ∨ (a ∈ S ∧ b ∈ Arg \ S ∧ b ∉ defeatS) do
18:     Generate a random number r ∈ [0, 1]
19:     if r ≤ Pr(x_δ | E_ad(S) ∧ e_2(S, b) ∧ x_a ∧ x_b) then
20:       Def = Def ∪ {⟨a, b⟩}
21:     end if
22:   end for
23:   if ext(⟨Arg, Def⟩, sem, S) then
24:     γ = γ + 1
25:   end if
26:   n = n + 1
27:   p̂ = (γ + z²_{1−α/2}/2) / (n + z²_{1−α/2})
28:   n'' = z²_{1−α/2} · p̂ · (1 − p̂) · (Pr^ad_F(S))² / ε² − z²_{1−α/2}
29: until n > n''
30: return γ/n · Pr^ad_F(S)

Algorithm 3 estimates Pr^sem_F(S) by sampling AAFs wherein S is an admissible extension. Analogously to Algorithm 2, it first determines the (exact) probability Pr^ad_F(S) that S is an admissible extension, as specified in Fact 2 (Line 1); next, it computes an estimate of Pr^{sem|E_ad(S)}_F(S) (Lines 3–29); and finally it returns this estimate multiplied by Pr^ad_F(S) as the estimate of Pr^sem_F(S) (Line 30).

The core of Algorithm 3 estimates Pr^{sem|E_ad(S)}_F(S) by exploiting the results of Lemma 2 as follows. At each iteration, it generates an AAF ⟨Arg, Def⟩ by first adding all the a ∈ S to Arg (Line 4), since Pr(x_a | E_ad(S)) = 1 when a ∈ S. Next, to guarantee that S is an admissible extension in the AAF being generated, each argument a ∈ A \ S is randomly added to Arg according to its probability Pr(x_a | E_ad(S)) = (P_2(S, a) + P_3(S, a)) / (P_1(S, a) + P_2(S, a) + P_3(S, a)), as provided by Case 2) of Lemma 2 (Lines 7–10). Next, for each argument a ∈ A \ S which has been added to Arg, we distinguish two cases (Line 11):

Case A) The event e_3(S, a) | E_ad(S) ∧ x_a occurs, meaning that a defeats S given that S is admissible and a occurs. This is the case when the random number r, generated for deciding whether at least one defeat from a to S (and vice versa from S to a) should be added to Def, is less than or equal to Pr(e_3(S, a) | E_ad(S) ∧ x_a) = P_3(S, a) / (P_2(S, a) + P_3(S, a)), as provided by Case 5) of Lemma 2.

Case B) The event e_3(S, a) | E_ad(S) ∧ x_a does not occur. In this case, no defeat from a to S is added to Def; however, some defeat from S to a could be added to Def.

We now give more details on both cases.

Function 4 generateAtLeastOneDefeatAndDefend.
Input: A PrAF F = ⟨A, P_A, D, P_D⟩; an AAF ⟨Arg, Def⟩; a set S of arguments; an argument a ∈ Arg \ S
Output: A set Δ(a) of defeats
1:  Δ(a) = ∅
2:  min = 0, max = 1
3:  Generate a random number r ∈ [0, 1 − ∏_{⟨a,b⟩ ∈ D ∧ b ∈ S} (1 − P_D(⟨a, b⟩))]
4:  for all ⟨a, b⟩ ∈ D such that b ∈ S do
5:    if r ∈ [min, min + (max − min) · P_D(⟨a, b⟩)] then
6:      Δ(a) = Δ(a) ∪ {⟨a, b⟩}
7:      max = min + (max − min) · P_D(⟨a, b⟩)
8:    else
9:      min = min + (max − min) · P_D(⟨a, b⟩)
10:   end if
11: end for
12: min = 0, max = 1
13: Generate a random number r ∈ [0, 1 − ∏_{⟨b,a⟩ ∈ D ∧ b ∈ S} (1 − P_D(⟨b, a⟩))]
14: for all ⟨b, a⟩ ∈ D such that b ∈ S do
15:   if r ∈ [min, min + (max − min) · P_D(⟨b, a⟩)] then
16:     Δ(a) = Δ(a) ∪ {⟨b, a⟩}
17:     max = min + (max − min) · P_D(⟨b, a⟩)
18:   else
19:     min = min + (max − min) · P_D(⟨b, a⟩)
20:   end if
21: end for
22: return Δ(a)

Case A). To guarantee that S is an admissible extension in the AAF being generated, Algorithm 3 randomly generates a non-empty set Δ(a) of defeats and adds the defeats in Δ(a) to the set of defeats Def of the AAF being generated (Lines 12–13). Specifically, Δ(a) = Δ→(a) ∪ Δ←(a), where |Δ→(a)| ≥ 1 and |Δ←(a)| ≥ 1, all the defeats in Δ→(a) are of the form ⟨a, b⟩ with b ∈ S, and all the defeats in Δ←(a) are of the form ⟨c, a⟩ with c ∈ S. That is, Δ→(a) consists of defeats from a toward S, while Δ←(a) consists of defeats from S toward a. The fact that |Δ←(a)| ≥ 1 (which means that S defeats a) ensures that S remains an admissible extension even after adding the defeats in Δ(a) to Def. The generation of the defeats in Δ(a) is accomplished at Line 12 by the function generateAtLeastOneDefeatAndDefend (Function 4) in linear time w.r.t. the size of S, and the set defeatS is used to keep track of the arguments a for which Δ(a) has been generated (Line 13).

Function 4 takes as input the PrAF F = ⟨A, P_A, D, P_D⟩, the AAF ⟨Arg, Def⟩ being generated, the set S of arguments, and an argument a ∈ Arg \ S, and returns a randomly generated set Δ(a) of defeats consisting of at least one defeat from a toward S and at least one from S toward a; that is, Δ→(a) is a non-empty subset of {⟨a, b⟩ | ⟨a, b⟩ ∈ D, b ∈ S} and Δ←(a) is a non-empty subset of {⟨b, a⟩ | ⟨b, a⟩ ∈ D, b ∈ S}. Function 4 uses the variables Δ(a), min and max, which are initialized with the empty set, zero and one, respectively. Δ(a) is progressively augmented with the defeats of the form ⟨a, b⟩ (resp., ⟨b, a⟩), with b ∈ S, randomly chosen during the steps between Lines 3 and 11 (resp., Lines 13 and 21). At the end of the first (resp., second) for loop of the function, the value of max minus min represents the probability of occurrence of the chosen subset Δ→(a) (resp., Δ←(a)) of defeats retained at Line 6 (resp., Line 16); see also the sketch at the end of this subsection. Lemma 3, which will be used to prove the soundness of Algorithm 3, states the probability of the set of defeats returned by Function 4.

Case B). To guarantee that S is an admissible extension in the AAF being generated, no defeat from a toward S is generated. However, there can be some defeat δ from S to a. Such a defeat δ = ⟨b, a⟩, with b ∈ S and a ∈ Arg \ S, is generated according to the probability Pr(x_δ | E_ad(S) ∧ e_2(S, a) ∧ x_a ∧ x_b) = P_D(δ) provided by Case 6) of Lemma 2. This is done at Lines 19–20, where Algorithm 3 randomly adds to Def (i) the defeats ⟨a, b⟩ such that both a and b belong to Arg \ S, according to their probability provided by Case 4) of Lemma 2, and (ii) the defeats ⟨a, b⟩ such that a ∈ S and b belongs to Arg \ S (and b does not defeat S), according to their probability provided by Case 6) of Lemma 2. Observe that, since every generated AAF must not contain any defeat ⟨a, b⟩ where a, b ∈ S (see Case 3) of Lemma 2), Algorithm 3 does not add any such defeat to Def.

After such an AAF has been generated, analogously to Algorithm 2, Algorithm 3 checks whether S is an extension according to sem and, if this is the case, the variable γ is incremented by 1 (Line 24).
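For illustration, here is a compact Python sketch (ours, not the authors' implementation) of the interval-splitting scheme that Function 4 uses in each of its two loops: it draws a subset of a list of candidate defeats conditioned on at least one of them occurring, by restricting the random number r to the interval [0, 1 − ∏(1 − P_D(δ))), which excludes exactly the "all absent" outcome. Function 4 applies this scheme once to the candidate defeats from a toward S and once to those from S toward a.

import random

def sample_at_least_one(candidates, P_D):
    """Draw a subset of `candidates` conditioned on at least one of them occurring,
    mimicking one pass (Lines 3-11 or 13-21) of Function 4.
    Assumes `candidates` is non-empty, as when Function 4 is invoked."""
    none_prob = 1.0
    for d in candidates:
        none_prob *= 1.0 - P_D[d]
    r = random.uniform(0.0, 1.0 - none_prob)   # the 'all absent' sliver is excluded
    chosen, lo, hi = set(), 0.0, 1.0
    for d in candidates:
        cut = lo + (hi - lo) * P_D[d]
        if r < cut:          # d is retained: keep the left part of the interval
            chosen.add(d)
            hi = cut
        else:                # d is discarded: keep the right part of the interval
            lo = cut
    return chosen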


Moreover, reasoning as in the case of Algorithm 2, it can be shown that (i) the error level ε'' to be taken into account when estimating Pr^{sem|E_ad(S)}_F(S) is ε'' = ε / Pr^ad_F(S), where ε is the error level for estimating Pr^sem_F(S), and that (ii) the number of AAFs to be sampled to ensure that the error level of the estimate of Pr^{sem|E_ad(S)}_F(S) is ε'' with confidence level 1 − α is n'' = z²_{1−α/2} · p̂ · (1 − p̂) · (Pr^ad_F(S))² / ε² − z²_{1−α/2} (Line 28).

As stated in the following theorem, Algorithm 3 is sound.

Theorem 2. Let ε be an error level and 1 − α a confidence level. The value returned by Algorithm 3 is an estimate est such that Pr^sem_F(S) ∈ [est − ε, est + ε] with confidence level 1 − α.

3.4. Theoretical analysis of the efficiency of Algorithm 2 and Algorithm 3

In this section, we provide a theoretical analysis of the efficiency of Algorithms 2 and 3 in terms of the number of samples generated during their execution, compared with the number of samples generated during the execution of Algorithm 1 for the same input. Specifically, we provide sufficient conditions guaranteeing that the number n' (resp., n'') of AAFs sampled by Algorithm 2 (resp., Algorithm 3) to estimate Pr^sem_F(S) is lower than the number n of AAFs sampled by Algorithm 1 when the same error and confidence levels are assumed. The following theorem provides a relationship between n and n' and between n and n''.

Theorem 3. Let 1 − α be a confidence level, ε an error level, and let n, n' and n'' be the numbers of Monte-Carlo iterations of Algorithm 1, Algorithm 2, and Algorithm 3, respectively. Let i_1, i_2, i_3, i_4, i_5 and i_6 be the following inequalities:

(i_1) Pr^sem(S) ≥ k · ε             (i_2) 1 − Pr^sem(S) ≥ k · ε
(i_3) Pr^{sem|E_cf(S)}(S) ≥ k' · ε'     (i_4) 1 − Pr^{sem|E_cf(S)}(S) ≥ k' · ε'
(i_5) Pr^{sem|E_ad(S)}(S) ≥ k'' · ε''    (i_6) 1 − Pr^{sem|E_ad(S)}(S) ≥ k'' · ε''

(a) If there exist k and k' greater than 1 such that i_1, i_2, i_3 and i_4 hold, then

(n − n')/n ≥ 1 − [(Pr^cf_F(S) − Pr^sem_F(S)) / (1 − Pr^sem(S))] · [k · (k' + 1) / (k' · (k − 1))]²

holds with confidence level 1 − α.

(b) If there exist k and k'' greater than 1 such that i_1, i_2, i_5 and i_6 hold, then

(n − n'')/n ≥ 1 − [(Pr^ad_F(S) − Pr^sem_F(S)) / (1 − Pr^sem(S))] · [k · (k'' + 1) / (k'' · (k − 1))]²

holds with confidence level 1 − α.

Theorem 3 ensures that Algorithm 2 and Algorithm 3 need fewer samples than Algorithm 1 in most practical cases. For instance, assume that the confidence level 1 − α is 95%, ε = 0.005, Pr^ad(S) = 80% and Pr^sem(S) = 50%. Then Pr^{sem|E_ad(S)}(S) = Pr^sem(S) / Pr^ad(S) = 62.5% and ε'' = ε / Pr^ad(S) = 0.00625. Hence, inequalities i_1, i_2, i_5 and i_6 are satisfied for values of k ∈ [2..100] and k'' ∈ [2..60]. Therefore, the largest lower bound on (n − n'')/n that can be derived using Theorem 3 is obtained for k = 100 and k'' = 60, and it equals 0.37, which means that Algorithm 3 needs 37% fewer samples than Algorithm 1. This theoretical analysis is confirmed by the experimental validation, whose results are reported in the next section.
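The arithmetic of this worked example can be checked with a one-line implementation of the bound of Theorem 3 (a sketch; the function name is ours):

def theorem3_bound(pr_cond, pr_sem, k, k2):
    """Lower bound of Theorem 3(b) on (n - n'')/n when pr_cond = Pr^ad_F(S);
    with pr_cond = Pr^cf_F(S) and k2 = k' it gives bound (a)."""
    return 1.0 - (pr_cond - pr_sem) / (1.0 - pr_sem) * (k * (k2 + 1) / (k2 * (k - 1))) ** 2

print(round(theorem3_bound(0.8, 0.5, 100, 60), 2))   # 0.37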

4. Experimental results

In this section, we report the experiments we performed to evaluate our Monte-Carlo-based simulation technique for estimating the probability Pr^sem_F(S), where sem ∈ {co, gr, pr, id}, presented in the previous section. In particular, after comparing the efficiency of Algorithms 2 and 3 w.r.t. the baseline Algorithm 1, we perform a sensitivity analysis to investigate which features of the problem instances the efficiency of Algorithm 2 and Algorithm 3 depends on.

4.1. Dataset

To perform a thorough experimental evaluation of the performances of the three algorithms, we considered a dataset SYN consisting of 200 pairs (F, S) where (i) F is a PrAF whose number of arguments ranges from 12 to 40 and whose number of defeats is three times the number of arguments, and (ii) S is a set of arguments such that |S| ranges from 20% to 40% of the size of the set of arguments in F.

The generation of the above-mentioned pairs was performed as follows. First, we randomly generated a number x ∈ [12..40] and then built an AAF ⟨A, D⟩ such that |A| = x and D contains 3 · x defeats, randomly selected in A × A. Once the AAF ⟨A, D⟩ was generated, we obtained a PrAF F by randomly generating the probability of each argument and defeat. Next, we randomly generated a set S of arguments whose size ranged from 20% to 40% of the size of the set of arguments in the PrAF. Finally, we checked whether (i) the probability Pr^cf_F(S) that S is conflict-free was between 45% and 95%, and (ii) the probability Pr^ad_F(S) that S is admissible was between 37% and 88%. If these conditions were satisfied, we inserted the pair ⟨F, S⟩ into SYN; otherwise we discarded it and proceeded to generate the next pair. The generation process continued until SYN contained 200 pairs.


Fig. 2. Average ImpS(A2), ImpT(A2), ImpS(A3) and ImpT(A3) for complete, grounded, preferred and ideal-set semantics.

4.2. Efficiency of Algorithm 2 and Algorithm 3

We compare Algorithm 1 (A1) to Algorithm 2 (A2) and Algorithm 3 (A3) in terms of the number of generated samples and the running times needed to estimate the probability Pr^sem_F(S) for the complete, grounded, preferred, and ideal-set semantics. To this end, we denote as samples(Ak) and time(Ak), with k ∈ {1, 2, 3}, the average number of samples and the average execution time of the runs of algorithm Ak, respectively, and we use the following performance measures:

• ImpS(A2) = (samples(A1) − samples(A2)) / samples(A1) and ImpS(A3) = (samples(A1) − samples(A3)) / samples(A1), for measuring the improvement of A2 and A3 w.r.t. A1 in terms of the number of generated samples;
• ImpT(A2) = (time(A1) − time(A2)) / time(A1) and ImpT(A3) = (time(A1) − time(A3)) / time(A1), for measuring the improvement of A2 and A3 w.r.t. A1 in terms of execution time.

Intuitively, ImpS(A2) (resp., ImpS(A3)) measures the percentage of generated samples that is saved by adopting A2 (resp., A3) rather than A1. Analogously, ImpT(A2) (resp., ImpT(A3)) measures the percentage of execution time that is saved by adopting A2 (resp., A3) rather than A1.

For each pair in SYN and for each semantics sem ∈ {co, gr, pr, id}, we executed 200 runs of each algorithm Ak, with k ∈ {1, 2, 3}, and measured the number samples(Ak) of generated samples and the execution times time(Ak). In all the experiments, we used an error level ε = 0.005 and a confidence level equal to 95%, i.e., z_{1−α/2} = 1.96. All experiments were carried out on an Intel i7 CPU with 12 GB RAM running Windows 8.1.

Fig. 2 reports the average values of ImpS(A2), ImpT(A2), ImpS(A3) and ImpT(A3) obtained for the complete, grounded, preferred and ideal-set semantics in the experiments. It turns out that, on average, A3 outperforms A2, which in turn outperforms A1, considering both the average number of generated samples and the average execution time. Specifically, on average ImpS(A3) is about 53% while ImpS(A2) is about 41%, and ImpT(A3) is about 37% while ImpT(A2) is about 33%. Moreover, the experiments show that on average ImpT is lower than ImpS for both A2 and A3, regardless of the considered semantics. Intuitively, this behavior derives from the fact that a single Monte-Carlo iteration of A2 (as well as a single iteration of A3) is on average more time consuming than a single Monte-Carlo iteration of A1.

4.3. Sensitivity analysis

Having assessed that on average both A2 and A3 outperform A1, we performed a sensitivity analysis of both algorithms, aimed at understanding whether the improvement in efficiency of A2 and A3 is correlated with some features of the problem instances. Specifically, given a problem instance consisting of a PrAF F = ⟨A, P_A, D, P_D⟩, a set of arguments S ⊆ A and a semantics sem ∈ {co, gr, pr, id}, we considered three distinct features in the sensitivity analysis:

1. the size of the instance, measured as the number |A| of arguments;
2. the probability Pr^cf(S) (resp., Pr^ad(S)) that the set S is conflict-free (resp., admissible), denoted as PCF (resp., PAD) in the figures reporting the results;
3. the ratio (Pr^cf_F(S) − Pr^sem_F(S)) / (1 − Pr^sem(S)) (resp., (Pr^ad_F(S) − Pr^sem_F(S)) / (1 − Pr^sem(S))), denoted as P-ratio cf (resp., P-ratio ad) in the figures.

We point out that performing a sensitivity analysis w.r.t. P-ratio cf and P-ratio ad can be seen as an experimental validation of the results stated in Theorem 3.

Figs. 3(a)–(c) and 4(a)–(c) show, respectively, the values of ImpS(A2) and ImpT(A2) for the complete semantics w.r.t. the three features described above (as A2 only samples possible worlds wherein S is conflict-free, the features Pr^cf(S) and (Pr^cf_F(S) − Pr^sem_F(S)) / (1 − Pr^sem(S)) are considered). Moreover, Figs. 5(a)–(c) and 6(a)–(c) show, respectively, the values of ImpS(A3) and ImpT(A3) for the complete semantics w.r.t. the three features. From these figures, it turns out that both ImpS(A2) and ImpT(A2) (resp., ImpS(A3) and ImpT(A3)) are not correlated with the number of arguments.


Fig. 3. ImpS( A2) for the complete semantics by varying (a) the number of arguments, (b) the value of PCF, (c) the P-ratio cf.

Fig. 4. ImpT ( A2) for the complete semantics by varying (a) the number of arguments, (b) the value of PCF, (c) the P-ratio cf.

Fig. 5. ImpS( A3) for the complete semantics by varying (a) the number of arguments, (b) the value of PAD, (c) the P-ratio ad.

Fig. 6. ImpT ( A3) for the complete semantics by varying (a) the number of arguments, (b) the value of PAD, (c) the P-ratio ad.

Moreover, both ImpS(A2) and ImpT(A2) (resp., ImpS(A3) and ImpT(A3)) are weakly correlated with Pr^cf(S) (resp., Pr^ad(S)). Interestingly, it turns out that both ImpS(A2) and ImpT(A2) (resp., ImpS(A3) and ImpT(A3)) are strongly correlated with P-ratio cf (resp., P-ratio ad).

The sensitivity analysis that we performed w.r.t. the grounded, preferred and ideal-set semantics showed that both A2 and A3 exhibit the same behavior as for the complete semantics (i.e., no correlation with the number of arguments, weak correlation with Pr^cf(S) (resp., Pr^ad(S)), and strong correlation with P-ratio cf (resp., P-ratio ad)). Thus, in the following we focus on the correlation with P-ratio cf and P-ratio ad, and we do not report the values of ImpS(A2), ImpS(A3), ImpT(A2), and ImpT(A3) w.r.t. the number of arguments, Pr^cf(S) and Pr^ad(S) for those semantics.


Fig. 7. ImpS(A2) (a), ImpS(A3) (b), ImpT(A2) (c), and ImpT(A3) (d) for the grounded semantics by varying P-ratio cf (a, c) and P-ratio ad (b, d).

Fig. 8. ImpS(A2) (a), ImpS(A3) (b), ImpT(A2) (c), and ImpT(A3) (d) for the preferred semantics by varying P-ratio cf (a, c) and P-ratio ad (b, d).

Fig. 9. ImpS(A2) (a), ImpS(A3) (b), ImpT(A2) (c), and ImpT(A3) (d) for the ideal-set semantics by varying P-ratio cf (a, c) and P-ratio ad (b, d).

Figs. 7(a)–(d), 8(a)–(d) and 9(a)–(d) depict the values of ImpS(A2) (a), ImpS(A3) (b), ImpT(A2) (c), and ImpT(A3) (d) w.r.t. P-ratio cf (a, c) and P-ratio ad (b, d) for the grounded, preferred and ideal-set semantics, respectively. The results clearly show a strong correlation between the percentage of generated samples and execution time saved by adopting A2 (resp., A3) rather than A1 and the features P-ratio cf and P-ratio ad used in the statement of Theorem 3.

Summing up, since on average ImpT(A3) is about 7% better than ImpT(A2), we can conclude that A3 performs better on our dataset. However, this does not preclude A2 from performing better than A3 on different datasets, where P-ratio cf is closer to P-ratio ad than in our dataset. As a matter of fact, in our dataset the difference between P-ratio cf and P-ratio ad is on average about 10%. Since it is likely that for most of the PrAFs representing real-world situations the difference between P-ratio cf and P-ratio ad is at least 10%, it is fair to claim that A3 outperforms A2 in most real-life contexts.

5. Related work and discussion

Approaches for handling uncertainty in AAFs by relying on probability theory have been proposed in [7,10,8,9,16,17]. In particular, with the aim of modeling jury-based dispute resolutions, [7] proposed a PrAF where uncertainty is taken into account by specifying probability distribution functions (PDFs) over possible worlds, and showed how an instance of the proposed PrAF can be obtained by specifying a probabilistic assumption-based argumentation framework (also introduced in that work). In the same spirit, [10] defined a PrAF as a PDF over the set of possible worlds, and introduced a probabilistic version of a fragment of the ASPIC framework [18] that can be used to instantiate the proposed PrAF. Differently from [7] and [10], [8] proposed a PrAF where probabilities are directly associated with arguments and defeats, instead of being associated with possible worlds. After claiming that computing the probability Pr(S) that a set S of arguments belongs to an extension requires exponential time for every semantics, [8] proposed a Monte-Carlo simulation approach to approximate Pr(S). Later, [13,14] showed that computing Pr^sem_F(S) is tractable for the admissible and stable semantics, but is FP^{#P}-complete for other semantics, including complete, grounded, preferred and ideal-set. Hence, the results of [13,14] entail that the usage of approximation is more appropriate for those semantics sem for which computing Pr^sem_F(S) is hard, while for the admissible and stable semantics the exact value of Pr^sem_F(S) can be found in polynomial time. In this paper, we devised an optimized Monte-Carlo simulation approach which is able to estimate Pr^sem(S), with sem ∈ {co, gr, pr, id}, using much fewer samples than the original approach proposed in [8], resulting in a significantly more efficient estimation technique.

In [8], as well as in [7] and [10], Pr^sem(S) is defined as the sum of the probabilities of the possible worlds where S is an extension according to semantics sem.


Differently from these approaches, [9] did not define a probabilistic version of a classical semantics but introduced a new probabilistic semantics based on p-justifiable PDFs defined over the set of possible worlds. Given an AAF A = ⟨A, D⟩, a PDF f is a function assigning a probability to each possible world w of A,² and it is said to be p-justifiable iff for each argument a ∈ A it holds that (i) for each argument b defeating a, the probability that a is in an extension according to f is lower than or equal to the one's complement of the probability that b is in an extension according to f; and (ii) the probability that a is in an extension according to f is greater than or equal to the one's complement of the sum, over the arguments b defeating a, of the probability that b is in an extension according to f. It is easy to check that the probabilistic semantics considered in our paper satisfy the first of the above-mentioned conditions if PrAFs containing no pair of mutually defeating arguments are considered. Moreover, it is easy to see that the second condition is not satisfied in the general case. In fact, the probability that an argument a belongs to an extension cannot be greater than P_A(a). Hence, if P_A(a) is smaller than the one's complement of the sum, over the arguments b defeating a, of the probability that b belongs to an extension according to f, condition (ii) is not satisfied. Therefore, the probability distribution obtained by assuming independence among arguments may or may not correspond to a p-justifiable PDF, depending on the probabilities specified by P_A and P_D for the arguments and defeats, respectively.

[16] defines evidential argumentation frameworks to lift the independence assumption. In that approach, probabilities are assigned to the items of support relations. This way, probabilistic evidential argumentation frameworks (PrEAFs) are obtained, which model inter-argument dependencies by assigning conditional probabilities between arguments and their supporting arguments. However, the computation strategy proposed in [16] relies on an exponential-time algorithm (w.r.t. the number of arguments), which sums up the probabilities of the EAFs where the given set is reasonable according to the given semantics. Since our strategy relies on the availability of polynomial-time computational results, it should be investigated whether evidential argumentation frameworks admit polynomial-time results for some semantics, such as the admissible one; we defer this to future work, since extending our polynomial-time results to the case of PrEAFs is not trivial.

[19] addresses the problem of computing all the subgraphs of an AAF in which an argument a belongs to the grounded extension, and [17] extends it by focusing on computing the probability that argument a belongs to the grounded extension of a probabilistic abstract argumentation framework. In particular, [17] assumes that a joint probability distribution over the arguments is received as input. However, this means that its input is possibly exponential in the size of the argumentation framework. In fact, providing a joint probability distribution usually means specifying the probability values for all the possible correlations, i.e., P(a), P(a ∧ b), P(a ∧ b ∧ c), and so on. This is analogous to providing the probabilities of all the possible worlds, which are exponentially many.

In the above-cited works probability theory is recognized as a fundamental tool to model uncertainty.
However, a deeper understanding of the role of probability theory in abstract argumentation was developed only later in [11,12], where the justification and the premise perspectives of probabilities of arguments are introduced. According to the former perspective, the probability of an argument indicates the probability that it is justified in appearing in the argumentation system. In contrast, the premise perspective views the probability of an argument as the probability that the argument is true based on the degrees to which the premises supporting the argument are believed to be true. Starting from these perspectives, [12] investigated a formal framework showing the connection among argumentation theory, classical logic, and probability theory. Furthermore, qualification of attacks is addressed in [20], where an investigation of the meaning of the uncertainty concerning defeats in probabilistic abstract argumentation is provided.

Besides the approaches that model uncertainty in AAFs by relying on probability theory, many proposals have been made where uncertainty is represented by exploiting weights or preferences on arguments and/or defeats [21–26]. In [21] each argument is associated with a numeric value, and a set of possible orders (preferences) among the values is defined. Here, a defeat succeeds w.r.t. a specific value order only if the value associated with the defeated argument is not preferred to the value associated with the defeating argument in that value order. All the semantics are extended to take into account this notion of defeat. [22] extends [21] by introducing preferences among sets of arguments, exploiting the values associated with the arguments. The aim is that of choosing the best set of arguments among those satisfying a (classical) semantics. In [24] arguments can express preferences between other arguments, determining whether defeats succeed or not, while in [23] a defeat succeeds only if the defeated argument is not preferred to the attacker, on the basis of a preference relation between arguments. [27] introduces preferences between defeats, with the aim of finding the extensions providing the best defenses for their elements. [25] associates attacks with weights, and proposes new semantics extending the classical ones on the basis of a threshold β. Specifically, a set S of arguments is a β-sem extension, where sem is a semantics, iff S is an extension according to sem in the AAF obtained by removing, from the original set of defeats, a subset of defeats whose weights sum up to at most β. For a semantics sem, a set S of arguments is preferred to another set S' iff S requires a smaller value of β to be a sem extension. [26] extends [25] by considering aggregation functions over weights other than sum.

In addition to the above-mentioned approaches, another interesting approach to representing uncertainty in argumentation is that based on possibility theory, as done in [28–30]. Although the approaches based on weights, preferences, possibilities, or probabilities to model uncertainty have proved to be effective in different contexts, there is no common agreement on what kind of approach should be used in general. In this regard, [11,12] observed that the probability-based approaches may take advantage of relying on a well-established and well-founded theory, whereas the approaches based on weights or preferences do not conform to well-established theories yet.

² Observe that any subset of A is considered as a possible world in [9], since a defeat ⟨a, b⟩ ∈ D occurs if and only if both a and b occur.


The computational complexity of computing extensions has been thoroughly investigated for classical AAFs [31,5,6,32–34] with respect to several semantics (a comprehensive overview of argumentation semantics can be found in [35]). In particular, [31] presents a number of results on the complexity of some decision questions for semi-stable semantics, while [6] focuses on ideal semantics; complexity results for preferred semantics can be found in [5]. [32] provides complexity results for AAFs in terms of skeptical and credulous acceptance under semi-stable and stage semantics, while [33] analyzes the CF2 semantics. The recent work [34] has studied the computational complexity of different decision problems centered on critical sets of arguments, whose status (i.e., membership in an extension) is sufficient to uniquely determine the status of every other argument. As regards the case of adding weights to AAFs, the computational complexity of computing extensions has been deeply investigated in [25,26], whereas the complexity for the case of using preferences is studied in [22,36].

In structured argumentation a formal language is used to explicitly represent the premises and the conclusion of each argument, as well as the relationship between them [37]. Several approaches combine structured argumentation with models for reasoning under uncertainty [38,39,12,40]. In particular, [40] proposes a structured probabilistic argumentation framework where Presumptive Defeasible Logic Programming (PreDeLP) [41,42] is extended with probabilistic models to obtain a general-purpose probabilistic language called Probabilistic PreDeLP (P-PreDeLP). That paper then focuses on studying belief revision operations over P-PreDeLP knowledge bases, and introduces a set of rationality postulates inspired by those presented for non-prioritized revision of classical belief bases [43]. Since several (structured) arguments can share the same conclusion, it is natural to analyze these arguments together rather than evaluating them individually. This concept is known as accrual, and it relies on the intuition that aggregating several arguments for a given conclusion makes that conclusion more credible [44,45]. Recently, [46] introduced the notion of accrued structure, which accounts for several arguments supporting the same conclusion by accumulating their strength in terms of possibilistic values. While traditional argumentation systems (including those using possibilistic logic [29,30]) only consider individual arguments and compare them pairwise, [46] proposes a formalization of argument accrual with possibilistic uncertainty in a logic programming framework building on Possibilistic Defeasible Logic Programming (P-DeLP) [30].

Several systems, such as [47–50], are available for reasoning in non-probabilistic argumentation frameworks. An up-to-date survey of systems for solving reasoning problems in AAFs can be found in [51]. In this paper, by developing an optimized Monte-Carlo simulation approach, we took an important step towards the implementation of a system able to efficiently estimate the probability of extensions in PrAFs under the FP^{#P}-hard semantics sem ∈ {co, gr, pr, id}.

5.1. Discussion on the independence assumption

Some approaches have been proposed that integrate argumentation frameworks with probabilities without relying on the independence assumption.
In particular, in the approach defined in [9], users directly specify the unique probability distribution over the set of possible worlds, instead of specifying arguments' and defeats' probabilities. However, in this case, users may be required to specify a huge number of probability values (one for each possible world), as the number of possible worlds is exponential w.r.t. the number of arguments and defeats. Moreover, it can be the case that users are not aware of the probability value that should be assigned to a possible world, as it may represent a complex scenario. Indeed, assigning probabilities to possible worlds is generally recognized to be so hard that [12] shows that assigning probabilities to arguments and defeats is more intuitive and feasible in real-life scenarios.

However, assigning probabilities to arguments and defeats without relying on the independence assumption requires considering all the possible probability distributions over the set of possible worlds which are compatible with the arguments' and defeats' probabilities in order to derive "well-founded" conclusions. This approach is adopted in different contexts, such as probabilistic logics [52–54] and probabilistic databases [55], where all the possible probability distributions over the set of possible worlds compatible with the rules' and facts' probabilities, or the tuples' probabilities, are considered. In all these approaches, conclusions derived using all the probability distributions are associated with probability ranges, which intuitively indicate the minimum and maximum probability with which a derived conclusion holds. Unfortunately, in most cases, large probability ranges are derived (often close to [0, 1]), thus preventing users from drawing meaningful conclusions.

Relying on the independence assumption avoids the problems of the above-mentioned approaches. In fact, the adoption of the independence assumption has been investigated in several contexts. For instance, in the context of probabilistic logic, independent choice logic [56] is a logic relying on the independence assumption (on which the probabilistic description logic in [57] as well as the probabilistic argumentation systems proposed in [38,39] are based), and, in the context of probabilistic databases, the state-of-the-art data model, which is the bucket independent model [58], relies on the independence assumption too. As a matter of fact, the independence assumption is widely used when event correlations are unknown or hard to derive exactly.

However, in [14] we have investigated an extension of our framework, named non-independent probabilistic argumentation framework, where the independence assumption is relaxed, that is, the probabilistic events associated with arguments may not be pairwise independent, and defeats may not be independent from the other probabilistic events associated with the other arguments or defeats. In non-independent probabilistic argumentation frameworks, the events associated with arguments and defeats are complex probabilistic events which are derived from basic probabilistic events that are independent from one another. Unfortunately, this approach has been proved to lead to a high complexity: in fact, the problem of computing the probability that a set of arguments is an admissible extension has been proved to be FP^{#P}-complete.


Thus, since the results of this paper are based on the fact that we can compute Pr^{cf}_F(S) and Pr^{ad}_F(S) in polynomial time, to adapt our technique to work with non-independent probabilistic argumentation frameworks we have to find special cases of non-independent probabilistic argumentation frameworks admitting polynomial-time computational results. Since this task is not trivial at all, we plan to investigate it in future work.

6. Conclusions and future work

In this paper we focused on estimating the probability Pr^{sem}(S) that a set S of arguments is an extension according to the semantics sem, where sem is the complete, the grounded, the preferred, or the ideal-set semantics. We experimentally showed that both of the algorithms we proposed outperform the state-of-the-art algorithm for estimating Pr^{sem}(S), both in terms of number of generated samples and evaluation time.

In this paper we dealt with the PrAF introduced in [8], where the probabilistic events associated with arguments and defeats are assumed to be independent. An interesting direction for future work is that of dealing with approaches in which the independence assumption is relaxed, such as the non-independent probabilistic argumentation frameworks proposed in [14] and the evidential argumentation frameworks proposed in [16]. As regards the non-independent probabilistic argumentation frameworks proposed in [14], applying the idea presented in this paper to improve the Monte-Carlo estimation of the probability of extensions over those frameworks is not straightforward, as discussed in Section 5.1. The reason is that computing the probability that a set of arguments is conflict-free or admissible is hard for non-independent probabilistic argumentation frameworks [14]. In this regard, it is worth investigating whether non-independent probabilistic argumentation frameworks admit special cases with polynomial-time computational results that could then be exploited to devise a Monte-Carlo based simulation approach similar to that proposed in this paper. As regards the evidential argumentation frameworks (PrEAFs) proposed in [16], the computational complexity of the problem of computing extensions' probabilities has not been characterized yet, thus it is interesting to investigate whether there are polynomial-time computational results for PrEAFs in order to make our approach work also for PrEAFs.

Finally, another direction for future work is that of addressing the probabilistic counterparts of some problems which have been investigated in the AAF context [51], such as the problem of computing the probability that a given argument belongs to any/every extension according to a given semantics. These problems can be defined for both PrAFs and non-independent probabilistic argumentation frameworks. In addressing these problems, it is worth trying to exploit the ideas proposed in [17], which introduced an algorithm to compute the probability of acceptance of arguments under the grounded semantics for probabilistic argumentation frameworks where correlations are represented using a joint probability distribution over arguments.

Appendix A. Proofs of lemmas and theorems

In this appendix we report the proofs of the lemmas and the theorems stated in the main body of the paper. Moreover, we state and prove Lemma 3, whose result will be used in the proof of Theorem 2.

Lemma 1. Given a PrAF F = ⟨A, P_A, D, P_D⟩ and a set S ⊆ A of arguments, then it holds that
1) ∀a ∈ S, Pr(x_a | E_cf(S)) = 1.
2) ∀a ∈ A \ S, Pr(x_a | E_cf(S)) = P_A(a).
3) ∀δ = ⟨a, b⟩ ∈ D such that a, b ∈ S, Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) = 0.
4) ∀δ = ⟨a, b⟩ ∈ D \ {⟨a, b⟩ ∈ D s.t. a, b ∈ S}, Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) = P_D(δ).

Proof. We use the well-known formula Pr(E_1 | E_2) = Pr(E_1 ∧ E_2) / Pr(E_2) for the conditional probability, where E_1 and E_2 are (complex) events. We separately prove the four cases.

Case 1). Since, for each a ∈ S, the occurrence of event E_cf(S) entails the occurrence of the event x_a, it holds that Pr(x_a ∧ E_cf(S)) = Pr(E_cf(S)), and thus Pr(x_a | E_cf(S)) = Pr(x_a ∧ E_cf(S)) / Pr(E_cf(S)) = 1.

Case 2). For a ∈ A \ S, it holds that Pr(x_a ∧ E_cf(S)) = Pr(x_a) · Pr(E_cf(S)), as event x_a is independent from E_cf(S). Thus, Pr(x_a | E_cf(S)) = Pr(x_a) = P_A(a).

Case 3). For each defeat δ = ⟨a, b⟩ such that both a and b belong to S, it is the case that E_cf(S) ∧ x_a ∧ x_b is equivalent to E_cf(S) (i.e., E_cf(S) occurs iff E_cf(S) ∧ x_a ∧ x_b occurs). Hence, Pr(x_δ ∧ E_cf(S) ∧ x_a ∧ x_b) = Pr(x_δ ∧ E_cf(S)), which is equal to zero as x_δ ∧ E_cf(S) is an impossible event (it has no chance of occurring, as there cannot be defeats among arguments of a conflict-free set). It follows that Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) = 0.

Case 4). Given δ = ⟨a, b⟩ whose arguments are not both in S, it is easy to see that event x_δ ∧ E_cf(S) ∧ x_a ∧ x_b can be expressed as follows:

x_δ ∧ E_cf(S) ∧ x_a ∧ x_b = x_δ ∧ x_a ∧ x_b ∧ ⋀_{a' ∈ S \ {a, b}} x_{a'} ∧ ⋀_{δ' = ⟨a', b'⟩ ∈ D ∧ a' ∈ S ∧ b' ∈ S} ¬x_{δ'}.

Then, we obtain that Pr(x_δ ∧ E_cf(S) ∧ x_a ∧ x_b) is as follows:

Pr(x_δ ∧ E_cf(S) ∧ x_a ∧ x_b) = Pr(x_δ | x_a ∧ x_b) · Pr(x_a ∧ x_b) · ∏_{a' ∈ S \ {a, b}} Pr(x_{a'}) · ∏_{δ' = ⟨a', b'⟩ ∈ D ∧ a' ∈ S ∧ b' ∈ S} Pr(¬x_{δ'} | x_a ∧ x_b).

Moreover, E_cf(S) ∧ x_a ∧ x_b and its probability Pr(E_cf(S) ∧ x_a ∧ x_b) are as follows:

E_cf(S) ∧ x_a ∧ x_b = x_a ∧ x_b ∧ ⋀_{a' ∈ S \ {a, b}} x_{a'} ∧ ⋀_{δ' = ⟨a', b'⟩ ∈ D ∧ a' ∈ S ∧ b' ∈ S} ¬x_{δ'},

Pr(E_cf(S) ∧ x_a ∧ x_b) = Pr(x_a ∧ x_b) · ∏_{a' ∈ S \ {a, b}} Pr(x_{a'}) · ∏_{δ' = ⟨a', b'⟩ ∈ D ∧ a' ∈ S ∧ b' ∈ S} Pr(¬x_{δ'} | x_a ∧ x_b).

Therefore, we obtain Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) = Pr(x_δ ∧ E_cf(S) ∧ x_a ∧ x_b) / Pr(E_cf(S) ∧ x_a ∧ x_b) = Pr(x_δ | x_a ∧ x_b) = P_D(δ), which completes the proof. □
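To make the role of Lemma 1 concrete, the following minimal Python sketch samples one possible world conditioned on S being conflict-free, drawing each argument and defeat with the conditional probabilities established above. It is an illustration under stated assumptions (a PrAF given as plain dictionaries P_A and P_D keyed by arguments and defeat pairs), not the paper's exact Algorithm 2; the function name sample_world_given_cf is ours.

import random

def sample_world_given_cf(A, D, P_A, P_D, S):
    # Sample one possible world <Arg, Def> conditioned on the event E_cf(S),
    # using the conditional probabilities of Lemma 1 (illustrative sketch).
    Arg = set(S)                      # case 1: arguments of S occur with probability 1
    for a in set(A) - set(S):         # case 2: other arguments keep their marginal probability
        if random.random() < P_A[a]:
            Arg.add(a)
    Def = set()
    for delta in D:                   # only defeats whose endpoints occur can be present
        a, b = delta
        if a in Arg and b in Arg:
            if a in S and b in S:     # case 3: defeats inside S never occur given E_cf(S)
                continue
            if random.random() < P_D[delta]:   # case 4: remaining defeats keep P_D(delta)
                Def.add(delta)
    return Arg, Def

Repeating this sampling and testing, in each sampled world, whether S is a sem-extension yields an estimate of Pr^{sem|E_cf(S)}_F(S), which is then multiplied by the exactly computed Pr^{cf}_F(S), as formalized in Theorem 1 below.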

Theorem 1. Let ε be an error level, and 1 − α a confidence level. The estimate Pr̂^{sem}_F(S) returned by Algorithm 2 is such that Pr^{sem}_F(S) ∈ [Pr̂^{sem}_F(S) − ε, Pr̂^{sem}_F(S) + ε] with confidence level 1 − α.

Proof. It is easy to see that, at the end of each iteration (Lines 3–11), Algorithm 2 has sampled a possible world w = ⟨Arg, Def⟩ such that S is conflict-free in w. In fact, the arguments of Arg are chosen so that S ⊆ Arg ⊆ A, Def only contains defeats whose arguments are in Arg, and no defeat between arguments in S belongs to Def (thus ensuring that S is conflict-free).

Let Pr(w) be the probability that possible world w = ⟨Arg, Def⟩ is sampled during an iteration of Algorithm 2. In the following, we first provide the expression of Pr(w), and then show that Pr(w) coincides with the probability Pr(E_w | E_cf(S)) that world w occurs given that the event that S is conflict-free occurs. This shows that Algorithm 2 correctly samples possible worlds wherein S is conflict-free. Finally, we show that Algorithm 2 samples the right number of possible worlds to achieve the required confidence level on the estimated probability.

Algorithm 2 generates world w = ⟨Arg, Def⟩ by performing the following steps:
1) for each a ∈ S, a is added to Arg;
2) for each a ∈ A \ S, a is added to Arg if a randomly generated number belonging to the interval [0, 1] is less than Pr(x_a | E_cf(S)) = P_A(a);
3) for each δ = ⟨a, b⟩ such that a ∈ Arg \ S or b ∈ Arg \ S, the defeat δ is added to Def if a randomly generated number belonging to the interval [0, 1] is less than Pr(x_δ | E_cf(S) ∧ x_a ∧ x_b) = P_D(δ).

Each of the decisions taken by Algorithm 2 during steps 1), 2) and 3) can be viewed as the outcome of a random experiment (specifically, the decisions of step 1) can be viewed as the results of deterministic experiments). The outcomes of steps 2) and 3) are associated with probabilistic events whose probabilities are P_A(a) and P_D(δ), respectively. It is easy to see that these events are independent from one another, since the generations of the random numbers in steps 2) and 3) are independent from one another. Therefore, the probability Pr(w) that world w is generated during an iteration of Algorithm 2 is the product of the probabilities of these independent events, that is,

Pr(w) = ∏_{a ∈ Arg \ S} P_A(a) · ∏_{a ∈ A \ Arg} (1 − P_A(a)) · ∏_{δ ∈ Def} P_D(δ) · ∏_{δ ∈ D* \ Def} (1 − P_D(δ))

where D* is the set of defeats δ = ⟨a, b⟩ such that a ∈ Arg \ S or b ∈ Arg \ S.

We now show that Pr(w) coincides with the probability Pr(E_w | E_cf(S)) that world w occurs given that the event that S is a conflict-free set occurs. Given a possible world w = ⟨Arg, Def⟩, we have that

E_w = ⋀_{a ∈ Arg} x_a ∧ ⋀_{a ∈ A \ Arg} ¬x_a ∧ ⋀_{δ ∈ Def} x_δ ∧ ⋀_{δ ∈ D(w) \ Def} ¬x_δ
    = ⋀_{a ∈ S} x_a ∧ ⋀_{a ∈ Arg \ S} x_a ∧ ⋀_{a ∈ A \ Arg} ¬x_a ∧ ⋀_{δ ∈ Def} x_δ ∧ ⋀_{δ ∈ D_1(w) \ Def} ¬x_δ ∧ ⋀_{δ ∈ D*(w) \ Def} ¬x_δ

where D(w) = D ∩ (Arg(w) × Arg(w)), D_1(w) = {⟨a, b⟩ | ⟨a, b⟩ ∈ D ∧ a ∈ S ∧ b ∈ S} is the set of defeats between arguments of S, and D* is as defined above.

Since Def contains no defeat belonging to D_1(w), E_w can be expressed as follows:

E_w = ( ⋀_{a ∈ S} x_a ∧ ⋀_{δ ∈ D_1(w)} ¬x_δ ) ∧ ⋀_{a ∈ Arg \ S} x_a ∧ ⋀_{a ∈ A \ Arg} ¬x_a ∧ ⋀_{δ ∈ Def} x_δ ∧ ⋀_{δ ∈ D*(w) \ Def} ¬x_δ.

Moreover, E_cf(S) can be written as follows:

E_cf(S) = ⋀_{a ∈ S} x_a ∧ ⋀_{δ ∈ D_1(w)} ¬x_δ.

From the expressions of E_w and E_cf(S), it is easy to see that the occurrence of E_w entails the occurrence of E_cf(S), which, in turn, implies that Pr(E_w ∧ E_cf(S)) = Pr(E_w). Moreover, since Pr(E_w) = I(w), where I is the PDF interpreting F, we obtain that

Pr(E_w | E_cf(S)) = Pr(E_w ∧ E_cf(S)) / Pr(E_cf(S)) = I(w) / Pr^{cf}_F(S)

where Pr^{cf}_F(S) is the probability of E_cf(S) provided by Fact 1, which is as follows:

Pr^{cf}_F(S) = ∏_{a ∈ S} P_A(a) · ∏_{δ ∈ D_1(w)} (1 − P_D(δ)).

Finally, since I(w) can be written as follows

I(w) = ∏_{a ∈ S} P_A(a) · ∏_{a ∈ Arg \ S} P_A(a) · ∏_{a ∈ A \ Arg} (1 − P_A(a)) · ∏_{δ ∈ Def} P_D(δ) · ∏_{δ ∈ D*(w) \ Def} (1 − P_D(δ)) · ∏_{δ ∈ D_1(w)} (1 − P_D(δ)),

it follows that

Pr(E_w | E_cf(S)) = I(w) / Pr^{cf}_F(S) = ∏_{a ∈ Arg \ S} P_A(a) · ∏_{a ∈ A \ Arg} (1 − P_A(a)) · ∏_{δ ∈ Def} P_D(δ) · ∏_{δ ∈ D*(w) \ Def} (1 − P_D(δ)),

which is equal to Pr(w). Therefore the probability that a possible world w is sampled during an iteration of Algorithm 2 is equal to the probability Pr(E_w | E_cf(S)) that w occurs given that the event that S is conflict-free occurs. This means that Algorithm 2 correctly samples possible worlds in the sample space consisting of the possible worlds wherein S is conflict-free.

In the rest of the proof, we show that Algorithm 2 samples a number of possible worlds ensuring the required confidence level on the estimated probability. It is easy to see that two possible worlds w_1 and w_2 generated during different iterations of Algorithm 2 correspond to independent repetitions of the same random experiment. Assuming that the total number n of possible worlds generated by Algorithm 2 is sufficiently large, the results of the Monte-Carlo simulation can be seen as a normal distribution over the possible values of the number γ of successes in a sequence of n independent yes/no experiments. Note that in Algorithm 2 deciding whether the experiment succeeds or not means testing whether the set S of arguments is an extension according to semantics sem in a generated possible world.

The proportion of successes γ/n, where γ is the number of successes and n is the total number of trials, is an estimate Pr̂^{sem|E_cf(S)}_F(S) of the probability Pr^{sem|E_cf(S)}_F(S) that S is an extension according to sem given that S is conflict-free. Let 1 − α be a confidence level; according to the Agresti–Coull method [15], the estimate of Pr^{sem|E_cf(S)}_F(S) is

Pr̂^{sem|E_cf(S)}_F(S) = ( x + z²_{1−α/2} / 2 ) / ( n + z²_{1−α/2} )

where x is the number of successes, and the number of samples ensuring that the error level is ε with confidence level 1 − α is

n = ( z²_{1−α/2} · Pr̂^{sem|E_cf(S)}_F(S) · (1 − Pr̂^{sem|E_cf(S)}_F(S)) ) / ε² − z²_{1−α/2}.

Since Pr̂^{sem}_F(S) is obtained as Pr̂^{sem|E_cf(S)}_F(S) · Pr^{cf}_F(S), an estimate Pr̂^{sem|E_cf(S)}_F(S) such that Pr^{sem|E_cf(S)}_F(S) lies in the interval [Pr̂^{sem|E_cf(S)}_F(S) − ε, Pr̂^{sem|E_cf(S)}_F(S) + ε] entails an estimate Pr̂^{sem}_F(S) such that Pr^{sem}_F(S) lies in the interval

[Pr̂^{sem|E_cf(S)}_F(S) · Pr^{cf}_F(S) − ε · Pr^{cf}_F(S), Pr̂^{sem|E_cf(S)}_F(S) · Pr^{cf}_F(S) + ε · Pr^{cf}_F(S)].

Thus, as Algorithm 2 has to return Pr̂^{sem}_F(S) lying in the interval Pr^{sem}_F(S) ± ε with confidence level 1 − α, the error level to be used when estimating Pr̂^{sem|E_cf(S)}_F(S) is ε / Pr^{cf}_F(S). Moreover, the number n' of possible worlds to be sampled to ensure that the error level of Pr̂^{sem}_F(S) is ε with confidence level 1 − α is

n' = ( z²_{1−α/2} · Pr̂^{sem|E_cf(S)}_F(S) · (1 − Pr̂^{sem|E_cf(S)}_F(S)) ) / ε² · (Pr^{cf}_F(S))² − z²_{1−α/2}.

This way, after sampling n' possible worlds, it is the case that Pr̂^{sem}_F(S), obtained as Pr̂^{sem|E_cf(S)}_F(S) · Pr^{cf}_F(S), is such that Pr^{sem}_F(S) lies in the interval Pr̂^{sem}_F(S) ± ε with confidence level 1 − α. □
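The sample-size formula at the end of the proof can be evaluated directly. The short Python sketch below computes the prescribed number of iterations n' from the current Agresti–Coull estimate; it is an illustration only (the function name samples_needed and the example numbers are ours, not taken from the paper or its dataset).

from math import ceil
from statistics import NormalDist

def samples_needed(p_hat, epsilon, alpha, pr_cf):
    # p_hat:   current Agresti-Coull estimate of Pr^{sem|E_cf(S)}(S)
    # epsilon: target error level on Pr^{sem}(S); alpha: 1 minus the confidence level
    # pr_cf:   exactly computed Pr^{cf}_F(S)
    z = NormalDist().inv_cdf(1 - alpha / 2)            # z_{1-alpha/2}
    n_prime = (z ** 2 * p_hat * (1 - p_hat)) / epsilon ** 2 * pr_cf ** 2 - z ** 2
    return max(0, ceil(n_prime))

# With alpha = 0.05, epsilon = 0.01 and a conditional estimate of 0.4,
# Pr^{cf}_F(S) = 0.5 requires roughly a quarter of the samples needed when pr_cf = 1.
print(samples_needed(0.4, 0.01, 0.05, 0.5), samples_needed(0.4, 0.01, 0.05, 1.0))

The (Pr^{cf}_F(S))² factor is what makes the conditioned estimator cheaper than the plain one: the smaller the probability that S is conflict-free, the fewer conditioned samples are needed for the same error level on Pr^{sem}_F(S).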

Lemma 2. Given a PrAF F = ⟨A, P_A, D, P_D⟩ and a set S ⊆ A of arguments, then
1) ∀a ∈ S, Pr(x_a | E_ad(S)) = 1;
2) ∀a ∈ A \ S, Pr(x_a | E_ad(S)) = (P_2(S, a) + P_3(S, a)) / (P_1(S, a) + P_2(S, a) + P_3(S, a));
3) ∀δ = ⟨a, b⟩ ∈ D s.t. a, b ∈ S, Pr(x_δ | E_ad(S)) = 0;
4) ∀δ = ⟨a, b⟩ ∈ D s.t. a, b ∈ A \ S, Pr(x_δ | x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ);
5) ∀a ∈ A \ S, Pr(e_3(S, a) | E_ad(S) ∧ x_a) = P_3(S, a) / (P_2(S, a) + P_3(S, a));
6) ∀δ = ⟨a, b⟩ ∈ D s.t. a ∈ S ∧ b ∈ A \ S, Pr(x_δ | x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ),
where P_1(S, a), P_2(S, a), and P_3(S, a) are the probabilities of events e_1(S, a), e_2(S, a), and e_3(S, a) given in Fact 2.

Proof. We separately prove the six cases.

Case 1). For each a ∈ S, since the occurrence of E_ad(S) entails the occurrence of x_a, it holds that Pr(x_a ∧ E_ad(S)) = Pr(E_ad(S)), and thus Pr(x_a | E_ad(S)) = Pr(x_a ∧ E_ad(S)) / Pr(E_ad(S)) = 1.

Case 2). For a ∈ A \ S, it is easy to see that the event x_a ∧ E_ad(S) is as follows:

x_a ∧ E_ad(S) = E_cf(S) ∧ ( e_2(S, a) ∨ e_3(S, a) ) ∧ ⋀_{b ∈ A \ (S ∪ {a})} ( e_1(S, b) ∨ e_2(S, b) ∨ e_3(S, b) ).

Reasoning analogously to the proof of Fact 2 reported in [13,14], we obtain that

Pr(x_a ∧ E_ad(S)) = Pr^{cf}_F(S) · (P_2(S, a) + P_3(S, a)) · ∏_{b ∈ A \ (S ∪ {a})} (P_1(S, b) + P_2(S, b) + P_3(S, b)).

Finally, observing that Pr^{ad}_F(S) can be written as follows

Pr^{ad}_F(S) = Pr^{cf}_F(S) · (P_1(S, a) + P_2(S, a) + P_3(S, a)) · ∏_{b ∈ A \ (S ∪ {a})} (P_1(S, b) + P_2(S, b) + P_3(S, b)),

after some simplifications, we obtain that Pr(x_a | E_ad(S)) = Pr(x_a ∧ E_ad(S)) / Pr^{ad}_F(S) = (P_2(S, a) + P_3(S, a)) / (P_1(S, a) + P_2(S, a) + P_3(S, a)).

Case 3). For each defeat δ = ⟨a, b⟩ ∈ D such that both a and b belong to S, it holds that Pr(x_δ ∧ E_ad(S)) is equal to zero, as x_δ ∧ E_ad(S) is an impossible event (that is, there cannot be any defeat among arguments belonging to an admissible, and thus conflict-free, set). Hence, Pr(x_δ | E_ad(S)) = 0.

Case 4). For each defeat δ = ⟨a, b⟩ such that neither a nor b is in S, it is easy to see that

Pr(x_δ ∧ x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ) · P_A(a) · P_A(b) · Pr^{cf}_F(S) · ∏_{c ∈ A \ (S ∪ {b})} (P_1(S, c) + P_2(S, c) + P_3(S, c)) · ∏_{⟨b, d⟩ ∈ D ∧ d ∈ S} (1 − P_D(⟨b, d⟩)),

where the contribution of argument b to the formula for Pr^{ad}_F is separated from the contributions of the other arguments outside S. Reasoning analogously, we obtain that

Pr(x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_A(a) · P_A(b) · Pr^{cf}_F(S) · ∏_{c ∈ A \ (S ∪ {b})} (P_1(S, c) + P_2(S, c) + P_3(S, c)) · ∏_{⟨b, d⟩ ∈ D ∧ d ∈ S} (1 − P_D(⟨b, d⟩)).

Thus, we obtain Pr(x_δ | x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = Pr(x_δ ∧ x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) / Pr(x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ).

Case 5). Given a ∈ A \ S, we have that

Pr(e_3(S, a) ∧ E_ad(S) ∧ x_a) = P_3(S, a) · Pr^{cf}_F(S) · ∏_{c ∈ A \ (S ∪ {a})} (P_1(S, c) + P_2(S, c) + P_3(S, c))

and that

Pr(E_ad(S) ∧ x_a) = Pr^{cf}_F(S) · (P_2(S, a) + P_3(S, a)) · ∏_{c ∈ A \ (S ∪ {a})} (P_1(S, c) + P_2(S, c) + P_3(S, c)).

From these equations, it follows that Pr(e_3(S, a) | E_ad(S) ∧ x_a) = P_3(S, a) / (P_2(S, a) + P_3(S, a)).

Case 6). For each defeat δ = ⟨a, b⟩ such that a ∈ S and b ∉ S, it is easy to see that

Pr(x_δ ∧ x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ) · P_A(b) · Pr^{cf}_F(S) · ∏_{c ∈ A \ (S ∪ {b})} (P_1(S, c) + P_2(S, c) + P_3(S, c)) · ∏_{⟨b, d⟩ ∈ D ∧ d ∈ S} (1 − P_D(⟨b, d⟩))
= P_D(δ) · Pr^{cf}_F(S) · P_2(S, b) · ∏_{c ∈ A \ (S ∪ {b})} (P_1(S, c) + P_2(S, c) + P_3(S, c))

and that

Pr(x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = Pr^{cf}_F(S) · P_2(S, b) · ∏_{c ∈ A \ (S ∪ {b})} (P_1(S, c) + P_2(S, c) + P_3(S, c)).

Thus, we obtain Pr(x_δ | x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = Pr(x_δ ∧ x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) / Pr(x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ), which completes the proof. □

The following lemma states a result that will be used in the proof of Theorem 2. For an argument a ∈ A \ S, we denote by δ_S(a) the set of defeats in D between a and arguments of S, and by Δ(a) ⊆ δ_S(a) the set of defeats returned by Function 4 for a.

Lemma 3. Function 4 returns a set Δ(a) with probability

Pr(Δ(a)) = ( P_A(a) · ∏_{δ ∈ Δ(a)} P_D(δ) · ∏_{δ ∈ δ_S(a) \ Δ(a)} (1 − P_D(δ)) ) / P_3(S, a)

where P_3(S, a) is the probability of event e_3(S, a) given in Fact 2.

Proof. We recall that Δ(a) = Δ'(a) ∪ Δ''(a), where Δ'(a) = {⟨a, b⟩ ∈ Δ(a) | b ∈ S} and Δ''(a) = {⟨c, a⟩ ∈ Δ(a) | c ∈ S}; analogously, let δ'_S(a) = {⟨a, b⟩ ∈ δ_S(a) | b ∈ S} and δ''_S(a) = {⟨c, a⟩ ∈ δ_S(a) | c ∈ S}. It is easy to check that Δ'(a) (resp., Δ''(a)) is generated by Function 4 iff the random number r belongs to an interval [min_{Δ'(a)}, max_{Δ'(a)}] (resp., [min_{Δ''(a)}, max_{Δ''(a)}]) such that

max_{Δ'(a)} − min_{Δ'(a)} = ∏_{δ ∈ Δ'(a)} P_D(δ) · ∏_{δ ∈ δ'_S(a) \ Δ'(a)} (1 − P_D(δ))

(resp., max_{Δ''(a)} − min_{Δ''(a)} = ∏_{δ ∈ Δ''(a)} P_D(δ) · ∏_{δ ∈ δ''_S(a) \ Δ''(a)} (1 − P_D(δ))).

Since r is randomly generated in the interval [0, 1 − ∏_{δ ∈ δ'_S(a)} (1 − P_D(δ))] (resp., [0, 1 − ∏_{δ ∈ δ''_S(a)} (1 − P_D(δ))]), the probability that the set of defeats Δ'(a) (resp., Δ''(a)) is generated by Function 4 is

( ∏_{δ ∈ Δ'(a)} P_D(δ) · ∏_{δ ∈ δ'_S(a) \ Δ'(a)} (1 − P_D(δ)) ) / ( 1 − ∏_{δ ∈ δ'_S(a)} (1 − P_D(δ)) )

(resp., ( ∏_{δ ∈ Δ''(a)} P_D(δ) · ∏_{δ ∈ δ''_S(a) \ Δ''(a)} (1 − P_D(δ)) ) / ( 1 − ∏_{δ ∈ δ''_S(a)} (1 − P_D(δ)) )).

Hence, as Δ'(a) and Δ''(a) are generated independently from one another, it holds that

Pr(Δ(a)) = [ ( ∏_{δ ∈ Δ'(a)} P_D(δ) · ∏_{δ ∈ δ'_S(a) \ Δ'(a)} (1 − P_D(δ)) ) / ( 1 − ∏_{δ ∈ δ'_S(a)} (1 − P_D(δ)) ) ] · [ ( ∏_{δ ∈ Δ''(a)} P_D(δ) · ∏_{δ ∈ δ''_S(a) \ Δ''(a)} (1 − P_D(δ)) ) / ( 1 − ∏_{δ ∈ δ''_S(a)} (1 − P_D(δ)) ) ].    (A.1)

By definition of Δ(a) and by Fact 2, it holds that

1. ∏_{δ ∈ Δ'(a)} P_D(δ) · ∏_{δ ∈ Δ''(a)} P_D(δ) = ∏_{δ ∈ Δ(a)} P_D(δ),
2. ∏_{δ ∈ δ'_S(a) \ Δ'(a)} (1 − P_D(δ)) · ∏_{δ ∈ δ''_S(a) \ Δ''(a)} (1 − P_D(δ)) = ∏_{δ ∈ δ_S(a) \ Δ(a)} (1 − P_D(δ)), and
3. P_3(S, a) = P_A(a) · ( 1 − ∏_{δ ∈ δ'_S(a)} (1 − P_D(δ)) ) · ( 1 − ∏_{δ ∈ δ''_S(a)} (1 − P_D(δ)) ).

Thus, it follows that Equation (A.1) can be rewritten as in the statement of the lemma. □
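The distribution established by Lemma 3 can also be realized by rejection sampling: include each defeat independently with its marginal probability and retry until the chosen set is non-empty, separately for the two halves of Δ(a). The Python sketch below is an illustrative equivalent of that distribution, not the paper's Function 4 (whose interval-based implementation is given in the main body); the function name is ours.

import random

def sample_nonempty_subset(defeats, P_D):
    # Sample a subset of `defeats`, each included independently with probability
    # P_D[d], conditioned on the result being non-empty (rejection sampling).
    if not defeats:
        raise ValueError("at least one candidate defeat is required")
    while True:
        chosen = {d for d in defeats if random.random() < P_D[d]}
        if chosen:
            return chosen

# Delta(a) is then the union of a non-empty set of attacks from a to S and a
# non-empty set of attacks from S to a, drawn independently of each other.

Each half is thus drawn with exactly the conditional probability appearing in Equation (A.1).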

Theorem 2. Let ε be an error level, and 1 − α a confidence level. The estimate Pr̂^{sem}_F(S) returned by Algorithm 3 is such that Pr^{sem}_F(S) ∈ [Pr̂^{sem}_F(S) − ε, Pr̂^{sem}_F(S) + ε] with confidence level 1 − α.

Proof. The statement can be proved by reasoning as in the proof of Theorem 1 and exploiting Lemmas 2 and 3. First of all, observe that at the end of each iteration Algorithm 3 has sampled a possible world w = ⟨Arg, Def⟩ such that S is admissible in w. In fact, the arguments of Arg are chosen so that S ⊆ Arg ⊆ A, Def only contains defeats whose arguments are in Arg, no defeat between arguments in S belongs to Def (thus ensuring that S is conflict-free), and there is a defeat in Def from an argument a ∈ Arg \ S toward S only if there is at least one defeat from S toward a in Def (this ensures that S counter-attacks every argument attacking it, that is, S is admissible).

Let Pr(w) be the probability that possible world w = ⟨Arg, Def⟩ is sampled during any iteration of Algorithm 3. After showing how Pr(w) can be expressed, we show that Pr(w) coincides with the probability Pr(E_w | E_ad(S)) that world w occurs given that the event that S is admissible occurs. This proves that Algorithm 3 correctly samples possible worlds wherein S is admissible.

Algorithm 3 generates world w = ⟨Arg, Def⟩ by performing the following steps:
1. for each a ∈ S, a is added to Arg;
2. for each a ∈ A \ S, a is added to Arg if a randomly generated number belonging to the interval [0, 1] is less than Pr(x_a | E_ad(S)) = (P_2(S, a) + P_3(S, a)) / (P_1(S, a) + P_2(S, a) + P_3(S, a));
3. for each a ∈ Arg \ S, the set Δ(a) is added to Def if the random number generated at step 2 is less than Pr(e_3(S, a) | E_ad(S) ∧ x_a) = P_3(S, a) / (P_2(S, a) + P_3(S, a)), where Δ(a) is generated with probability Pr(Δ(a)) = ( P_A(a) · ∏_{δ ∈ Δ(a)} P_D(δ) · ∏_{δ ∈ δ_S(a) \ Δ(a)} (1 − P_D(δ)) ) / P_3(S, a) (Lemma 3); that is, Δ(a) is added to Def with probability equal to
( P_A(a) · ∏_{δ ∈ Δ(a)} P_D(δ) · ∏_{δ ∈ δ_S(a) \ Δ(a)} (1 − P_D(δ)) ) / ( P_2(S, a) + P_3(S, a) );
4. ∀δ = ⟨a, b⟩ ∈ D such that a, b ∈ Arg \ S, δ is added to Def if a randomly generated number belonging to the interval [0, 1] is less than Pr(x_δ | x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ);
5. ∀δ = ⟨a, b⟩ ∈ D such that a ∈ S and b ∈ Arg \ S is an argument not defeating S, δ is added to Def if a randomly generated number belonging to the interval [0, 1] is less than Pr(x_δ | x_a ∧ x_b ∧ E_ad(S) ∧ e_2(S, b)) = P_D(δ).

Each of the decisions taken by Algorithm 3 during the above-described steps can be viewed as the outcome of a random experiment, each of them associated with a probabilistic event. The probability Pr(w) that world w is generated during any iteration of Algorithm 3 can then be expressed as follows:

Pr(w) = ∏_{a ∈ Arg \ S} (P_2(S, a) + P_3(S, a)) / (P_1(S, a) + P_2(S, a) + P_3(S, a))
      · ∏_{a ∈ A \ Arg} ( 1 − (P_2(S, a) + P_3(S, a)) / (P_1(S, a) + P_2(S, a) + P_3(S, a)) )
      · ∏_{a ∈ Arg \ S} ( P_A(a) · ∏_{δ ∈ Δ(a)} P_D(δ) · ∏_{δ ∈ δ_S(a) \ Δ(a)} (1 − P_D(δ)) ) / ( P_2(S, a) + P_3(S, a) )
      · ∏_{δ = ⟨a, b⟩ ∈ Def s.t. a, b ∈ Arg \ S} P_D(δ) · ∏_{δ = ⟨a, b⟩ ∈ D \ Def s.t. a, b ∈ Arg \ S} (1 − P_D(δ))
      · ∏_{δ = ⟨a, b⟩ ∈ Def s.t. a ∈ S, b ∈ Arg \ S, and there is no ⟨b, c⟩ ∈ Def with c ∈ S} P_D(δ) · ∏_{δ = ⟨a, b⟩ ∈ D \ Def s.t. a ∈ S, b ∈ Arg \ S, and there is no ⟨b, c⟩ ∈ Def with c ∈ S} (1 − P_D(δ)).

After some simplifications, and considering that P_1(S, a) = 1 − P_A(a), we obtain the following:

Pr(w) = ∏_{a ∈ Arg \ S} P_A(a) / (P_1(S, a) + P_2(S, a) + P_3(S, a)) · ∏_{a ∈ A \ Arg} (1 − P_A(a)) / (P_1(S, a) + P_2(S, a) + P_3(S, a))
      · ∏_{a ∈ Arg \ S, δ ∈ Δ(a)} P_D(δ) · ∏_{a ∈ Arg \ S, δ ∈ δ_S(a) \ Δ(a)} (1 − P_D(δ))
      · ∏_{δ ∈ Def \ ∪_{a ∈ Arg \ S} Δ(a)} P_D(δ) · ∏_{δ ∈ ( D(w) \ ( {⟨a, b⟩ | a, b ∈ S} ∪ ∪_{a ∈ Arg \ S} δ_S(a) ) ) \ Def} (1 − P_D(δ)).

We now show that Pr(w) coincides with the probability Pr(E_w | E_ad(S)) that world w occurs given that the event that S is admissible occurs. Given a possible world w = ⟨Arg, Def⟩, we have that

E_w = ⋀_{a ∈ Arg} x_a ∧ ⋀_{a ∈ A \ Arg} ¬x_a ∧ ⋀_{δ ∈ Def} x_δ ∧ ⋀_{δ ∈ D(w) \ Def} ¬x_δ

where D(w) consists of the defeats in D ∩ (Arg × Arg). The event that the possible world w = ⟨Arg, Def⟩ occurs can be rewritten as follows:

E_w = ⋀_{a ∈ S} x_a ∧ ⋀_{a ∈ Arg \ S} x_a ∧ ⋀_{a ∈ A \ Arg} ¬x_a ∧ ⋀_{a ∈ Arg \ S, δ ∈ Δ(a)} x_δ ∧ ⋀_{a ∈ Arg \ S, δ ∈ δ_S(a) \ Def} ¬x_δ ∧ ⋀_{δ ∈ Def \ ∪_{a ∈ Arg \ S} Δ(a)} x_δ ∧ ⋀_{δ ∈ ( D(w) \ ∪_{a ∈ Arg \ S} δ_S(a) ) \ Def} ¬x_δ.

Now we consider the event E_ad(S) that set S is an admissible extension, and observe that the occurrence of E_w entails the occurrence of E_ad(S). It is easy to see that E_w implies E_cf(S), as the above-reported expression for E_w contains the expression for E_cf(S) in the first and (part of) the last conjunct. Moreover, if E_w occurs then, for each argument a in A \ S, one of the mutually exclusive events e_1(S, a), e_2(S, a), e_3(S, a) occurs. Indeed, for each argument a in A \ S, E_w either contains ¬x_a, meaning that a ∉ Arg and thus e_1(S, a) occurs, or it contains x_a, meaning that a ∈ Arg. In the latter case we show that either e_2(S, a) or e_3(S, a) occurs. For each argument a ∈ Arg \ S, one and only one of the following cases holds: (i) there is at least one pair of defeats in Δ(a), which entails that e_3(S, a) holds; (ii) there is no defeat from a towards any argument in S, which entails that e_2(S, a) holds. Therefore, if E_w occurs then E_cf(S) occurs and, for every a ∈ A \ S, either e_1(S, a) or e_2(S, a) or e_3(S, a) occurs, that is, E_ad(S) occurs as well.

The fact that the occurrence of E_w entails the occurrence of E_ad(S) implies that Pr(E_w ∧ E_ad(S)) = Pr(E_w). Hence, we obtain that

Pr(E_w | E_ad(S)) = Pr(E_w ∧ E_ad(S)) / Pr(E_ad(S)) = Pr(E_w) / Pr^{ad}_F(S)

where

Pr^{ad}_F(S) = ∏_{a ∈ S} P_A(a) · ∏_{δ = ⟨a, b⟩ ∈ D s.t. a, b ∈ S} (1 − P_D(δ)) · ∏_{d ∈ A \ S} (P_1(S, d) + P_2(S, d) + P_3(S, d))

is the probability of E_ad(S) (Fact 2). Starting from the expression of E_w, the numerator Pr(E_w) of the fraction above can be written as follows:

Pr(E_w) = ∏_{a ∈ S} P_A(a) · ∏_{a ∈ Arg \ S} P_A(a) · ∏_{a ∈ A \ Arg} (1 − P_A(a)) · ∏_{a ∈ Arg \ S, δ ∈ Δ(a)} P_D(δ) · ∏_{δ ∈ Def \ ∪_{a ∈ Arg \ S} Δ(a)} P_D(δ) · ∏_{δ ∈ ∪_{a ∈ Arg \ S} δ_S(a) \ Def} (1 − P_D(δ)) · ∏_{δ ∈ ( D(w) \ ∪_{a ∈ Arg \ S} δ_S(a) ) \ Def} (1 − P_D(δ)).

It is easy to check that the fraction Pr(E_w) / Pr^{ad}_F(S) turns out to be equal to the probability Pr(w), shown before, that a possible world w is sampled during any iteration of Algorithm 3. This suffices to show that the algorithm correctly samples possible worlds in the sample space consisting of the possible worlds wherein S is admissible.

Finally, reasoning as in the second part of the proof of Theorem 1, it can be shown that, after sampling the required number of possible worlds, the estimate Pr̂^{sem}_F(S), obtained as Pr̂^{sem|E_ad(S)}_F(S) · Pr^{ad}_F(S), is such that Pr^{sem}_F(S) lies in the interval Pr̂^{sem}_F(S) ± ε with confidence level 1 − α. □

Theorem 3. Let 1 − α be a confidence level, let ε be an error level, and let n, n' and n'' be the number of Monte-Carlo iterations of Algorithm 1, Algorithm 2, and Algorithm 3, respectively. Let i_1, i_2, i_3, i_4, i_5 and i_6 be the following inequalities:

(i_1) Pr^{sem}(S) ≥ k · ε,                (i_2) 1 − Pr^{sem}(S) ≥ k · ε,
(i_3) Pr^{sem|E_cf(S)}(S) ≥ k' · ε,       (i_4) 1 − Pr^{sem|E_cf(S)}(S) ≥ k' · ε,
(i_5) Pr^{sem|E_ad(S)}(S) ≥ k'' · ε,      (i_6) 1 − Pr^{sem|E_ad(S)}(S) ≥ k'' · ε.

If there exist k and k' greater than 1 such that i_1, i_2, i_3 and i_4 hold, then

(a) (n − n')/n ≥ 1 − ( ( Pr^{cf}_F(S) − Pr^{sem}_F(S) ) / ( 1 − Pr^{sem}(S) ) ) · ( k · (k' + 1) / (k' · (k − 1)) )²

holds with confidence level 1 − α. If there exist k and k'' greater than 1 such that i_1, i_2, i_5 and i_6 hold, then

(b) (n − n'')/n ≥ 1 − ( ( Pr^{ad}_F(S) − Pr^{sem}_F(S) ) / ( 1 − Pr^{sem}(S) ) ) · ( k · (k'' + 1) / (k'' · (k − 1)) )²

holds with confidence level 1 − α.

Proof. For the sake of brevity of presentation, we provide the proof for case (a). The proof for case (b) can be easily derived by applying the same reasoning.

By observing that p in Algorithm 2 (Line 16) represents the estimated value Pr̂^{sem|E_cf(S)}_F(S), we have that

n' = ( z²_{1−α/2} · Pr̂^{sem|E_cf(S)}_F(S) · (1 − Pr̂^{sem|E_cf(S)}_F(S)) ) / ε² · (Pr^{cf}_F(S))² − z²_{1−α/2}.    (A.2)

Similarly, since p in Algorithm 1 (Line 13) represents the estimated value Pr̂^{sem}_F(S), we have that

n = ( z²_{1−α/2} · Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) ) / ε² − z²_{1−α/2}.    (A.3)

Hence, by subtracting Equation (A.2) from Equation (A.3) we obtain:

n − n' = ( z²_{1−α/2} · Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) ) / ε² − z²_{1−α/2} − ( ( z²_{1−α/2} · Pr̂^{sem|E_cf(S)}_F(S) · (1 − Pr̂^{sem|E_cf(S)}_F(S)) ) / ε² · (Pr^{cf}_F(S))² − z²_{1−α/2} )    (A.4)

or, equivalently,

n − n' = ( z²_{1−α/2} / ε² ) · ( Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) − Pr̂^{sem|E_cf(S)}_F(S) · (1 − Pr̂^{sem|E_cf(S)}_F(S)) · (Pr^{cf}_F(S))² ).    (A.5)

Next, by taking the ratio between Equation (A.5) and Equation (A.3), we obtain

(n − n') / n = [ ( z²_{1−α/2} / ε² ) · ( Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) − Pr̂^{sem|E_cf(S)}_F(S) · (1 − Pr̂^{sem|E_cf(S)}_F(S)) · (Pr^{cf}_F(S))² ) ] / [ ( z²_{1−α/2} · Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) ) / ε² − z²_{1−α/2} ].    (A.6)

Hence, since z²_{1−α/2} > 0, from Equation (A.6) we obtain

(n − n') / n ≥ [ ( z²_{1−α/2} / ε² ) · ( Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) − Pr̂^{sem|E_cf(S)}_F(S) · (1 − Pr̂^{sem|E_cf(S)}_F(S)) · (Pr^{cf}_F(S))² ) ] / [ ( z²_{1−α/2} · Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) ) / ε² ]

or, equivalently, since z²_{1−α/2} / ε² appears both in the numerator and the denominator of the fraction on the right-hand side of the above-reported inequality,

(n − n') / n ≥ ( Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) − Pr̂^{sem|E_cf(S)}_F(S) · (1 − Pr̂^{sem|E_cf(S)}_F(S)) · (Pr^{cf}_F(S))² ) / ( Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) ),

which can be equivalently rewritten as:

(n − n') / n ≥ 1 − ( Pr̂^{sem|E_cf(S)}_F(S) · (1 − Pr̂^{sem|E_cf(S)}_F(S)) · (Pr^{cf}_F(S))² ) / ( Pr̂^{sem}_F(S) · (1 − Pr̂^{sem}_F(S)) ).    (A.7)

Since Pr^{sem}_F(S) ∈ [Pr̂^{sem}_F(S) − ε, Pr̂^{sem}_F(S) + ε] and Pr^{sem|E_cf(S)}_F(S) ∈ [Pr̂^{sem|E_cf(S)}_F(S) − ε, Pr̂^{sem|E_cf(S)}_F(S) + ε] hold with confidence level 1 − α, the following inequalities hold:

Pr̂^{sem|E_cf(S)}_F(S) / Pr̂^{sem}_F(S) ≤ ( Pr^{sem|E_cf(S)}_F(S) + ε ) / ( Pr^{sem}_F(S) − ε )    (A.8)

( 1 − Pr̂^{sem|E_cf(S)}_F(S) ) / ( 1 − Pr̂^{sem}_F(S) ) ≤ ( 1 − Pr^{sem|E_cf(S)}_F(S) + ε ) / ( 1 − Pr^{sem}_F(S) − ε ).    (A.9)

Since from the hypothesis it holds that k > 1, k' > 1, Pr^{sem}(S) ≥ k · ε, Pr^{sem|E_cf(S)}(S) ≥ k' · ε, 1 − Pr^{sem}(S) ≥ k · ε, and 1 − Pr^{sem|E_cf(S)}(S) ≥ k' · ε, we have that the following inequalities hold:

ε ≤ Pr^{sem}(S)/k,   ε ≤ Pr^{sem|E_cf(S)}(S)/k',   ε ≤ (1 − Pr^{sem}(S))/k,   and   ε ≤ (1 − Pr^{sem|E_cf(S)}(S))/k'.    (A.10)

Thus, by applying the above-reported inequalities to inequality (A.8) we obtain:

Pr̂^{sem|E_cf(S)}_F(S) / Pr̂^{sem}_F(S) ≤ ( Pr^{sem|E_cf(S)}_F(S) + ε ) / ( Pr^{sem}(S) − ε ) ≤ ( Pr^{sem|E_cf(S)}_F(S) + Pr^{sem|E_cf(S)}_F(S)/k' ) / ( Pr^{sem}(S) − Pr^{sem}(S)/k ) = ( k · (k' + 1) / (k' · (k − 1)) ) · Pr^{sem|E_cf(S)}_F(S) / Pr^{sem}(S) = ( k · (k' + 1) / (k' · (k − 1)) ) · 1 / Pr^{cf}_F(S).

That is,

Pr̂^{sem|E_cf(S)}_F(S) / Pr̂^{sem}_F(S) ≤ ( k · (k' + 1) / (k' · (k − 1)) ) · 1 / Pr^{cf}_F(S).    (A.11)

Moreover, by applying inequalities (A.10) to inequality (A.9) we obtain:

( 1 − Pr̂^{sem|E_cf(S)}_F(S) ) / ( 1 − Pr̂^{sem}_F(S) ) ≤ ( 1 − Pr^{sem|E_cf(S)}_F(S) + ε ) / ( 1 − Pr^{sem}(S) − ε ) ≤ ( 1 − Pr^{sem|E_cf(S)}_F(S) + (1 − Pr^{sem|E_cf(S)}_F(S))/k' ) / ( 1 − Pr^{sem}(S) − (1 − Pr^{sem}(S))/k ) = k · (k' + 1) · ( 1 − Pr^{sem|E_cf(S)}_F(S) ) / ( k' · (k − 1) · ( 1 − Pr^{sem}(S) ) ).

That is:

( 1 − Pr̂^{sem|E_cf(S)}_F(S) ) / ( 1 − Pr̂^{sem}_F(S) ) ≤ k · (k' + 1) · ( 1 − Pr^{sem|E_cf(S)}_F(S) ) / ( k' · (k − 1) · ( 1 − Pr^{sem}(S) ) ).    (A.12)

By applying inequalities (A.11) and (A.12) to inequality (A.7) we obtain:

(n − n') / n ≥ 1 − (Pr^{cf}_F(S))² · ( k · (k' + 1) / (k' · (k − 1)) ) · ( 1 / Pr^{cf}_F(S) ) · k · (k' + 1) · ( 1 − Pr^{sem|E_cf(S)}_F(S) ) / ( k' · (k − 1) · ( 1 − Pr^{sem}(S) ) )
= 1 − Pr^{cf}_F(S) · ( ( 1 − Pr^{sem|E_cf(S)}_F(S) ) / ( 1 − Pr^{sem}(S) ) ) · ( k · (k' + 1) / (k' · (k − 1)) )²
= 1 − ( ( Pr^{cf}_F(S) − Pr^{sem}_F(S) ) / ( 1 − Pr^{sem}(S) ) ) · ( k · (k' + 1) / (k' · (k − 1)) )².

That is,

(n − n') / n ≥ 1 − ( ( Pr^{cf}_F(S) − Pr^{sem}_F(S) ) / ( 1 − Pr^{sem}(S) ) ) · ( k · (k' + 1) / (k' · (k − 1)) )²

holds with confidence level 1 − α, which completes the proof of the statement. □


References

[1] B. Fazzinga, S. Flesca, F. Parisi, Efficiently estimating the probability of extensions in abstract argumentation, in: Scalable Uncertainty Management – 7th International Conference, SUM, 2013, pp. 106–119.
[2] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artif. Intell. 77 (2) (1995) 321–358.
[3] P.M. Dung, P. Mancarella, F. Toni, Computing ideal sceptical argumentation, Artif. Intell. 171 (10–15) (2007) 642–674.
[4] P. Baroni, M. Giacomin, Semantics of abstract argument systems, in: Argumentation in Artificial Intelligence, 2009, pp. 25–44.
[5] P.E. Dunne, M. Wooldridge, Complexity of abstract argumentation, in: Argumentation in Artificial Intelligence, 2009, pp. 85–104.
[6] P.E. Dunne, The computational complexity of ideal semantics, Artif. Intell. 173 (18) (2015).
[7] P.M. Dung, P.M. Thang, Towards (probabilistic) argumentation for jury-based dispute resolution, in: COMMA, 2010, pp. 171–182.
[8] H. Li, N. Oren, T.J. Norman, Probabilistic argumentation frameworks, in: TAFA, 2011.
[9] M. Thimm, A probabilistic semantics for abstract argumentation, in: ECAI, 2012, pp. 750–755.
[10] T. Rienstra, Towards a probabilistic Dung-style argumentation system, in: AT, 2012, pp. 138–152.
[11] A. Hunter, Some foundations for probabilistic abstract argumentation, in: COMMA, 2012, pp. 117–128.
[12] A. Hunter, A probabilistic approach to modelling uncertain logical arguments, Int. J. Approx. Reason. 54 (1) (2013) 47–81.
[13] B. Fazzinga, S. Flesca, F. Parisi, On the complexity of probabilistic abstract argumentation, in: IJCAI, 2013.
[14] B. Fazzinga, S. Flesca, F. Parisi, On the complexity of probabilistic abstract argumentation frameworks, ACM Trans. Comput. Log. 16 (3) (2015) 22.
[15] A. Agresti, B.A. Coull, Approximate is better than "exact" for interval estimation of binomial proportions, Am. Stat. 52 (2) (1998) 119–126.
[16] H. Li, N. Oren, T.J. Norman, Relaxing independence assumptions in probabilistic argumentation, in: Proc. of Tenth International Workshop on Argumentation in Multi-Agent Systems, ArgMAS, 2013.
[17] P. Dondio, Toward a computational analysis of probabilistic argumentation frameworks, Cybern. Syst. 45 (3) (2014) 254–278.
[18] H. Prakken, An abstract framework for argumentation with structured arguments, Argument Comput. 1 (2) (2010) 93–124.
[19] P. Dondio, Computing the grounded semantics in all the subgraphs of an argumentation framework: an empirical evaluation, in: Proc. of 14th International Workshop Computational Logic in Multi-Agent Systems, CLIMA, 2013, pp. 119–137.
[20] A. Hunter, Probabilistic qualification of attack in abstract argumentation, Int. J. Approx. Reason. 55 (2) (2014) 607–638.
[21] T.J.M. Bench-Capon, Persuasion in practical argument using value-based argumentation frameworks, J. Log. Comput. 13 (3) (2003) 429–448.
[22] L. Amgoud, S. Vesic, A new approach for preference-based argumentation frameworks, Ann. Math. Artif. Intell. 63 (2) (2011) 149–183.
[23] L. Amgoud, C. Cayrol, A reasoning model based on the production of acceptable arguments, Ann. Math. Artif. Intell. 34 (1–3) (2002) 197–215.
[24] S. Modgil, Reasoning about preferences in argumentation frameworks, Artif. Intell. 173 (9–10) (2009) 901–934.
[25] P.E. Dunne, A. Hunter, P. McBurney, S. Parsons, M. Wooldridge, Weighted argument systems: basic definitions, algorithms, and complexity results, Artif. Intell. 175 (2) (2015).
[26] S. Coste-Marquis, S. Konieczny, P. Marquis, M.A. Ouali, Weighted attacks in argumentation frameworks, in: KR, 2012.
[27] D.C. Martínez, A.J. García, G.R. Simari, An abstract argumentation framework with varied-strength attacks, in: KR, 2008, pp. 135–144.
[28] L. Amgoud, H. Prade, Reaching agreement through argumentation: a possibilistic approach, in: KR, 2004, pp. 175–182.
[29] T. Alsinet, C.I. Chesñevar, L. Godo, S. Sandri, G.R. Simari, Formalizing argumentative reasoning in a possibilistic logic programming setting with fuzzy unification, Int. J. Approx. Reason. 48 (3) (2008) 711–729.
[30] T. Alsinet, C.I. Chesñevar, L. Godo, G.R. Simari, A logic programming framework for possibilistic argumentation: formalization and logical properties, Fuzzy Sets Syst. 159 (10) (2008) 1208–1228.
[31] P.E. Dunne, M. Caminada, Computational complexity of semi-stable semantics in abstract argumentation frameworks, in: Proc. of 11th European Conference Logics in Artificial Intelligence, JELIA, 2008, pp. 153–165.
[32] W. Dvorák, S. Woltran, Complexity of semi-stable and stage semantics in argumentation frameworks, Inf. Process. Lett. 110 (11) (2010) 425–430.
[33] S.A. Gaggl, S. Woltran, The cf2 argumentation semantics revisited, J. Log. Comput. 23 (5) (2013) 925–949.
[34] R. Booth, M. Caminada, P.E. Dunne, M. Podlaszewski, I. Rahwan, Complexity properties of critical sets of arguments, in: Proc. of Computational Models of Argument, COMMA, 2014, pp. 173–184.
[35] P. Baroni, M. Caminada, M. Giacomin, An introduction to argumentation semantics, Knowl. Eng. Rev. 26 (4) (2011) 365–410.
[36] E.J. Kim, S. Ordyniak, S. Szeider, Algorithms and complexity results for persuasive argumentation, Artif. Intell. 175 (9–10) (2011) 1722–1736.
[37] P. Besnard, A.J. García, A. Hunter, S. Modgil, H. Prakken, G.R. Simari, F. Toni, Introduction to structured argumentation, Argument Comput. 5 (1) (2014) 1–4.
[38] R. Haenni, J. Kohlas, N. Lehmann, Probabilistic argumentation systems, in: Handbook of Defeasible Reasoning and Uncertainty Management Systems, vol. 5: Algorithms for Uncertainty and Defeasible Reasoning, Kluwer, 2000, pp. 221–288.
[39] R. Haenni, B. Anrig, J. Kohlas, N. Lehmann, A survey on probabilistic argumentation, in: ECSQARU'01, Toulouse. Workshop: Adventures in Argumentation, 2001, pp. 19–25.
[40] P. Shakarian, G.I. Simari, M.A. Falappa, Belief revision in structured probabilistic argumentation, in: Proc. of 8th International Symposium on Foundations of Information and Knowledge Systems, FoIKS, 2014, pp. 324–343.
[41] A.J. García, G.R. Simari, Defeasible logic programming: an argumentative approach, Theory Pract. Log. Program. 4 (2) (2004) 95–138.
[42] M.V. Martinez, A.J. García, G.R. Simari, On the use of presumptions in structured defeasible reasoning, in: Computational Models of Argument – Proceedings of COMMA 2012, Vienna, Austria, September 10–12, 2012, 2012, pp. 185–196.
[43] S.O. Hansson, Semi-revision (invited paper), J. Appl. Non-Class. Log. 7 (2) (2015).
[44] B. Verheij, Accrual of arguments in defeasible argumentation, in: Proceedings of the Second Dutch/German Workshop on Nonmonotonic Reasoning, 1995, pp. 217–224.
[45] H. Prakken, A study of accrual of arguments, with applications to evidential reasoning, in: Proceedings of the Tenth International Conference on Artificial Intelligence and Law, ACM Press, 2005, pp. 85–94.
[46] M.J.G. Lucero, C.I. Chesñevar, G.R. Simari, Modelling argument accrual with possibilistic uncertainty in a logic programming setting, Inf. Sci. 228 (2013) 1–25.
[47] M. South, G. Vreeswijk, J. Fox, Dungine: a Java Dung reasoner, in: COMMA, ISBN 978-1-58603-859-5, 2008, pp. 360–368.
[48] U. Egly, S.A. Gaggl, S. Woltran, ASPARTIX: implementing argumentation frameworks using answer-set programming, in: ICLP, 2008, pp. 734–738.
[49] M. Snaith, C. Reed, TOAST: online ASPIC+ implementation, in: COMMA, 2012, pp. 509–510.
[50] T. Alsinet, R. Béjar, L. Godo, F. Guitart, Using answer set programming for an scalable implementation of defeasible argumentation, in: ICTAI, 2012, pp. 1016–1021.
[51] G. Charwat, W. Dvorák, S.A. Gaggl, J.P. Wallner, S. Woltran, Methods for solving reasoning problems in abstract argumentation – a survey, Artif. Intell. 220 (2015) 28–63.
[52] R.T. Ng, V.S. Subrahmanian, Probabilistic logic programming, Inf. Comput. 101 (2) (1992) 150–201.
[53] T. Lukasiewicz, Probabilistic deduction with conditional constraints over basic events, J. Artif. Intell. Res. 10 (1999) 199–241.


[54] T. Lukasiewicz, Probabilistic logic programming with conditional constraints, ACM Trans. Comput. Log. 2 (3) (2001) 289–339.
[55] S. Flesca, F. Furfaro, F. Parisi, Consistency checking and querying in probabilistic databases under integrity constraints, J. Comput. Syst. Sci. (2014), http://dx.doi.org/10.1016/j.jcss.2014.04.026, available online 24 April 2014.
[56] D. Poole, The independent choice logic for modelling multiple agents under uncertainty, Artif. Intell. 94 (1–2) (1997) 7–56.
[57] T. Lukasiewicz, Probabilistic description logic programs, Int. J. Approx. Reason. 45 (2) (2007) 288–307.
[58] D. Suciu, D. Olteanu, C. Ré, C. Koch, Probabilistic Databases, Synthesis Lectures on Data Management, Morgan & Claypool Publishers, 2011.