Algorithms for decision problems in argument systems under preferred semantics


Artificial Intelligence 207 (2014) 23–51


Samer Nofal a,∗, Katie Atkinson b, Paul E. Dunne b

a Dept. of Computer Science, German Jordanian University, P.O. Box 35247, Amman 11180, Jordan
b Dept. of Computer Science, University of Liverpool, Ashton Street, Liverpool L69 7ZF, United Kingdom

Article history: Received 23 February 2013; Received in revised form 31 October 2013; Accepted 4 November 2013; Available online 8 November 2013.

Keywords: Abstract argumentation; Preferred extensions; Algorithms; Skeptical acceptance; Credulous acceptance; Value based argumentation; Subjective acceptance; Objective acceptance.

Abstract

For Dung's model of abstract argumentation under preferred semantics, argumentation frameworks may have several distinct preferred extensions: i.e., in informal terms, sets of acceptable arguments. Thus the acceptance problem (for a specific argument) can consist of deciding whether an argument is in at least one such extension (credulously accepted) or in all such extensions (skeptically accepted). We start by presenting a new algorithm that enumerates all preferred extensions. Following this we build algorithms that decide the acceptance problem without requiring explicit enumeration of all extensions. We analyze the performance of our algorithms by comparing them to existing ones, and present experimental evidence that the new algorithms are more efficient with respect to expected running time. Moreover, we extend our techniques to solve decision problems in a widely studied development of Dung's model: namely value-based argumentation frameworks (vafs). In this regard, we examine analogous notions to the problem of enumerating preferred extensions and present algorithms that decide subjective, respectively objective, acceptance. © 2013 Elsevier B.V. All rights reserved.

1. Introduction

Computational argumentation is an active research branch of artificial intelligence; see e.g. [10,11,55] for reviews. Abstract argumentation frameworks (afs) offer a reasoning model that is likely to be a mainstay in the study of other areas such as decision support systems (see e.g. [3]), machine learning (see e.g. [47]), and agent interaction in multi-agent systems (see e.g. [42]). Following Dung [26], an abstract argumentation framework consists of a set of arguments and a binary relation that represents conflicts between arguments. A solution to an af is then captured by deciding the acceptable arguments. A number of argumentation semantics have been proposed to characterize such solutions [5]. For solutions defined in set-theoretic terms (extension-based semantics) one often finds cases, such as preferred semantics (defined in Section 2), in which multiple extensions are present. Thus, focusing on preferred extensions, an argument is viewed as skeptically accepted if and only if it occurs in all preferred extensions. In a similar manner, an argument is seen as credulously accepted if and only if it occurs in at least one such extension.

Doutre and Mengin [24] and later Modgil and Caminada [45] presented algorithms for computing preferred extensions. Informally, both algorithms build on so-called labelling based methods, under which arguments that might be included in an extension are labelled IN, arguments which might not be in the respective extension are labelled OUT, and the undecided arguments are labelled UNDEC (short for undecided). Both algorithms start with some initial label for all

✩ This article is an extended version of [51,50].
∗ Corresponding author.
E-mail addresses: [email protected] (S. Nofal), [email protected] (K. Atkinson), [email protected] (P.E. Dunne).

0004-3702/$ – see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.artint.2013.11.001


arguments and then the labels change, through so-called transitions, until some condition holds. At this point, the arguments labelled IN make up an admissible set (defined in Section 2). These algorithms go through different sequences of transitions, and hence they identify admissible sets in order to construct preferred extensions. The two algorithms differ in two key aspects: the arguments' initial labels, and the transition rules applied to argument labels. As we show, these issues have a significant effect on performance.

The contribution of this work can be summarized in five points. Firstly, we introduce additional labels, thereby gaining an improved label transition approach and, hence, constructing preferred extensions faster than existing algorithms. Secondly, we introduce a new mechanism for pruning the search space so that transitions leading to "dead ends" are avoided at an early stage. Thirdly, we present a cost-effective heuristic rule that identifies arguments for transitions that might enable a goal state (i.e. a preferred extension) to be reached earlier. Fourthly, by incorporating these three improvements, we design algorithms for deciding the skeptical/credulous acceptance question without explicitly enumerating all preferred extensions. Finally, we establish the usability of our algorithms in developments of Dung's model by investigating an instance of such: specifically the value based argumentation frameworks (vafs) of [8]. Note that some earlier results of this work were presented in [50,51].

In Section 3 our new algorithm that enumerates all preferred extensions is presented. In Section 4 we argue that our algorithm is faster in constructing preferred extensions than the existing approaches of Doutre and Mengin [24] and Modgil and Caminada [45], supporting these claims on the basis of empirical studies.
Regarding the acceptance problems, although the skeptically/credulously accepted arguments might be simply decided by enumerating all preferred extensions, in situations where interest is confined to a single specific argument it is more efficient to avoid such complete enumeration of preferred extensions. This is especially the case when the underlying af is dynamic (i.e. changes frequently, such as in a dialog setting). Accordingly, in Section 5 we engineer algorithms that outperform, with respect to running time, the existing algorithms of Cayrol et al. [19], Thang et al. [56] and the algorithm of Verheij for the credulous acceptance problem [58]. We present relevant comparisons with existing algorithms and empirical evaluation in Section 6. In Section 7 we demonstrate how our algorithms for Dung's frameworks may be adapted to address analogous problems in vafs. More specifically, we address the following problems relevant to this context: preferred extension enumeration, subjective acceptance and objective acceptance. We offer further discussions and a review of related work in Section 8 and lastly we conclude the paper in Section 9. We first present preliminary background in Section 2.

2. Preliminary background

We recall the concept of argumentation framework from [26].

Definition 1 (Dung's argumentation frameworks). An argumentation framework (or af) is a pair (A, R) where A is a set of arguments and R ⊆ A × A is a binary relation. We refer to (x, y) ∈ R as x attacks y (or y is attacked by x). We denote by {x}− respectively {x}+ the subset of A containing those arguments that attack (resp. are attacked by) the argument x, extending this notation in the natural way to sets of arguments, so that for S ⊆ A,









S− = {y ∈ A: ∃x ∈ S s.t. y ∈ {x}−}
S+ = {y ∈ A: ∃x ∈ S s.t. y ∈ {x}+}

Given a subset S ⊆ A, then

• x ∈ A is acceptable w.r.t. S if and only if for every (y, x) ∈ R, there is some z ∈ S for which (z, y) ∈ R.
• S is conflict free if and only if for each (x, y) ∈ S × S, (x, y) ∉ R.
• S is admissible if and only if it is conflict free and every x ∈ S is acceptable w.r.t. S.
• S is a preferred extension if and only if it is a maximal (w.r.t. ⊆) admissible set.
• S is a stable extension if and only if it is conflict free and S+ = A \ S.
• S is a complete extension if and only if it is an admissible set such that for each x acceptable w.r.t. S, x ∈ S.
• S is a stage extension if and only if it is conflict free and S ∪ S+ is maximal (w.r.t. ⊆).
• S is a semi-stable extension if and only if it is admissible and S ∪ S+ is maximal (w.r.t. ⊆).
• S is the ideal extension if and only if it is the maximal (w.r.t. ⊆) admissible set that is contained in every preferred extension.
• S is the grounded extension if and only if it is the least fixed point of F(T) = {x ∈ A | x is acceptable w.r.t. T}.

Preferred, complete, stable and grounded semantics are introduced in [26], whereas ideal semantics, stage semantics and semi-stable semantics are presented in [27,57,14] respectively. To give an example, consider the framework depicted in Fig. 1 where nodes represent arguments and edges correspond to attacks (i.e. elements of R). For this example {b, d} is the preferred, grounded, stable, ideal, complete, semi-stable and stage extension. Note that we do not intend by this example to show differences between semantics.

Fig. 1. An argumentation framework.

In the above, we introduced a selection of prevalent argumentation semantics. It is out of the scope of the present paper to explain the motivation behind every semantics or even to review all argumentation semantics proposed in the literature. In this paper we develop algorithms for decision problems under preferred semantics. This should not be construed as giving favor to preferred semantics over other semantics. From an application perspective, it has been highlighted (e.g. [5]) that it is a matter of choice as to which semantics to use for the application at hand. See the review of Baroni et al. [5] for an excellent introduction to argumentation semantics with a comprehensive discussion explaining the motivations behind the diversity of argumentation semantics. Therefore, although this work is about building algorithms under preferred semantics, we show how these algorithms might lead to constructing algorithms for the other argumentation semantics. The bottom line is that this work has impact on implementing algorithms for argumentation semantics in general, despite the apparent focus on preferred semantics.

The notion of extension follows a convention of defining argumentation semantics in terms of subsets of arguments in an af, i.e. those arguments that meet particular criteria. More precisely, if σ : 2^A → {⊤, ⊥} is a predicate over subsets of arguments, then the σ-extensions of (A, R) – denoted by Eσ(A, R) – are the subsets of A which satisfy σ. A related, and often interchangeable, concept characterizes argumentation semantics in terms of labelling properties. Perhaps the most prevalent work is the labelling theory introduced by Caminada and Gabbay [17]. A central concept of this approach is to define the set of labellings that captures Eσ(A, R) under a specific semantics σ. However, for the purpose of showing the connection of our algorithms to the theory of [17] we prescribe preferred labellings that basically correspond to preferred extensions.
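The set-theoretic notions recalled above translate directly into code. The following Python sketch is our own illustration (not part of the paper); the encoding of an af as a collection of arguments plus a set of attack pairs is an assumption made here for concreteness.

```python
def attackers(S, R):
    """S^-: the arguments attacking some member of S."""
    return {y for (y, x) in R if x in S}

def attacked(S, R):
    """S^+: the arguments attacked by some member of S."""
    return {y for (x, y) in R if x in S}

def conflict_free(S, R):
    """No attack holds between any two members of S."""
    return not any((x, y) in R for x in S for y in S)

def acceptable(x, S, R):
    """Every attacker y of x is counter-attacked by some z in S."""
    return all(any((z, y) in R for z in S)
               for (y, w) in R if w == x)

def admissible(S, R):
    """Conflict free, and every member is acceptable w.r.t. S."""
    return conflict_free(S, R) and all(acceptable(x, S, R) for x in S)
```

For instance, with attacks a→b and b→c, the set {a, c} is admissible while {c} is not, since c's attacker b is not counter-attacked by any member of {c}.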
We refer the interested reader to [17,5] for a full presentation of the theory of Caminada and Gabbay.

Definition 2 (Preferred labellings). Let (A, R) be an af and S ⊆ A be a preferred extension (resp. admissible set). Then, the corresponding preferred (resp. admissible) labelling λ : A → {IN, OUT, UNDEC} for S is described by:

λ = {(x, IN) | x ∈ S} ∪ {(x, OUT) | x ∈ S+} ∪ {(x, UNDEC) | x ∈ A \ (S ∪ S+)}
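Computationally, the labelling of Definition 2 follows immediately from a given set S; a minimal Python sketch (the function name and af encoding are our assumptions for illustration):

```python
def labelling(A, R, S):
    """Definition 2: IN on S, OUT on S^+, UNDEC everywhere else."""
    s_plus = {y for (x, y) in R if x in S}   # S^+
    return {x: "IN" if x in S
               else "OUT" if x in s_plus
               else "UNDEC"
            for x in A}
```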

From now on we may use the terms "extension" and "labelling" interchangeably since we can obtain either one if the other is given. Given the above context, the job of a labelling-based algorithm for listing preferred extensions can be seen as constructing all labellings within which the arguments labelled IN represent a preferred extension. Both of the algorithms of [24,45] define labelling-based procedures for enumerating preferred extensions using a total mapping λ : A → {IN, OUT, UNDEC}. However, in [45] IN is the default label for all arguments of the given framework and the procedure constructs preferred labellings whose properties are specified in Definition 2, while in [24] UNDEC is the default label for all arguments of the input framework and the procedure captures a preferred extension S when the corresponding labelling has the following properties:

λ = {(x, IN) | x ∈ S} ∪ {(x, OUT) | x ∈ A \ S}

Thus, the algorithm of [45] adheres to the conventions of the theory of Caminada and Gabbay, whereas the algorithm of [24] does not. As we show in the paper, our algorithm for preferred extension enumeration basically constructs preferred extensions from labellings satisfying the properties of Definition 2, and so our algorithms fit neatly into the labelling theory of Caminada and Gabbay. Nevertheless, the main concern of the present paper is to improve the efficiency of listing all preferred labellings/extensions by defining new implemented algorithms that outdo (in terms of running time) existing algorithms. In the next section we present our new algorithm for enumerating preferred extensions.

3. Preferred extension enumeration: the new algorithm

One significant factor in the algorithm described in this section lies in its modification of the basic labelling framework. Thus, in addition to the labels IN, OUT and UNDEC that are used by the existing algorithms of [24,45], we introduce labels referred to as MUST_OUT and BLANK, so that a labelling of H = (A, R) is now a total mapping μ : A → {IN, OUT, BLANK, MUST_OUT, UNDEC}. Note that we use the notation μ for such 5-valued labelling schemes to distinguish them from the 3-valued schemes referred to earlier. In what follows we explain informally how these five labels are used in our algorithm. The BLANK label is the default label for all arguments, indicating that the argument is still unprocessed. The IN label identifies arguments that might be in a preferred extension. The OUT label identifies an argument that is attacked by an IN argument. The MUST_OUT label identifies arguments that attack IN arguments. The UNDEC label designates arguments which may not be included in a preferred extension because they are not defended by any IN argument. The precise usage of these labels is introduced in Algorithm 1 for listing all preferred extensions.
Algorithm 1 is a depth-first backtracking procedure that traverses an implicit binary search tree. Algorithm 1 starts with BLANK as the initial label for all arguments; this initial state represents the root node of the search tree. Then the algorithm forks to a left (resp. right) child by picking an argument that is BLANK to be labelled IN (resp. UNDEC). Every time an argument, say x, is labelled IN the labels of the neighbouring arguments might change, such that for every y ∈ {x}+ the label of y becomes OUT and for every z ∈ {x}− \ {x}+ the label of z becomes MUST_OUT. This process, i.e. forking to new children, continues until for every x ∈ A the label of x is not BLANK. At this point, the algorithm captures {x | the label of x is IN} as a preferred extension if and only if for every x ∈ A the label of x belongs to {IN, OUT, UNDEC} and {x | the label of x is IN} is not a subset of a previously found preferred extension (if such exists). Then the algorithm backtracks to try to find all preferred extensions.

Fig. 2. How Algorithm 1 works on an af.

Before giving the full specification of Algorithm 1, we define concretely the actions involved during the transition from a node of the search tree to a left (or right) child. Since expanding a left child involves labelling an argument IN, we denote such an expansion by IN-TRANS (short for IN-TRANSITION). During IN-TRANS three actions are taken. Firstly, the label of a BLANK argument becomes IN. Secondly, attackers of the newly IN argument are labelled MUST_OUT. Thirdly, arguments attacked by the newly IN argument are labelled OUT.

Definition 3 (IN-TRANS transition rule). Let (A, R) be an af, x ∈ A, and μ : A → {IN, OUT, MUST_OUT, BLANK, UNDEC} be a total mapping such that μ(x) = BLANK. Then, IN-TRANS(x) is defined by the following ordered actions:

1. μ′ ← μ;
2. μ′(x) ← IN;
3. for all y ∈ {x}+ do μ′(y) ← OUT;
4. for all z ∈ {x}− : μ′(z) ≠ OUT do μ′(z) ← MUST_OUT;
5. return μ′.

Similarly we denote the expansion of a right child by UNDEC-TRANS, which is a process that basically involves labelling an argument UNDEC. The purpose of UNDEC-TRANS is to try to find a preferred extension excluding the newly UNDEC argument.
Simply, UNDEC-TRANS is applied by changing the label of a BLANK argument to UNDEC in accordance with the following definition.

Definition 4 (UNDEC-TRANS transition rule). Let (A, R) be an af, x ∈ A, and μ : A → {IN, OUT, MUST_OUT, BLANK, UNDEC} be a total mapping such that μ(x) = BLANK. Then, UNDEC-TRANS(x) is defined by the following ordered steps:

1. μ′ ← μ;
2. μ′(x) ← UNDEC;
3. return μ′.

Fig. 2 shows how Algorithm 1 works on the af depicted in Fig. 1. Let us now improve the efficiency of Algorithm 1 by applying three enhancements. For the first enhancement, observe that Algorithm 1 selects an argument labelled BLANK to use as the basis for an IN-TRANS transition arbitrarily. As we demonstrate, however, it is more productive to guide the selection via the following rule:


Algorithm 1: Enumerating all preferred extensions of an af H = (A, R).

1 μ : A → {IN, OUT, BLANK, MUST_OUT, UNDEC}; μ ← ∅;
2 foreach x ∈ A do μ ← μ ∪ {(x, BLANK)};
3 PEXT ← ∅;
4 call find-preferred-extensions(μ);
5 report PEXT is the set of all preferred extensions;
6 procedure find-preferred-extensions(μ) begin
7   if ∀x ∈ A μ(x) ≠ BLANK then
8     if ∀x ∈ A μ(x) ≠ MUST_OUT then
9       S ← {y ∈ A | μ(y) = IN};
10      if ∀T ∈ PEXT S ⊈ T then PEXT ← PEXT ∪ {S};
11  else
12    select any x ∈ A s.t. μ(x) = BLANK;
13    μ′ ← IN-TRANS(x);
14    call find-preferred-extensions(μ′);
15    μ′ ← UNDEC-TRANS(x);
16    call find-preferred-extensions(μ′);
17 end procedure
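As a cross-check of the pseudocode, Algorithm 1 transcribes almost line-for-line into Python. The sketch below is our own illustration (the paper's implementations are in C++); the encoding of the framework as an argument list plus a set of attack pairs is an assumption made here.

```python
BLANK, IN, OUT, MUST_OUT, UNDEC = "BLANK", "IN", "OUT", "MUST_OUT", "UNDEC"

def preferred_extensions(A, R):
    """Enumerate preferred extensions of the af (A, R) in the style of
    Algorithm 1: depth-first search over 5-valued labellings."""
    plus  = {x: {y for (u, y) in R if u == x} for x in A}   # {x}^+
    minus = {x: {y for (y, u) in R if u == x} for x in A}   # {x}^-
    pext = []                                               # PEXT

    def find(mu):
        blanks = [x for x in A if mu[x] == BLANK]
        if not blanks:                                      # no BLANK left
            if all(mu[x] != MUST_OUT for x in A):
                S = frozenset(x for x in A if mu[x] == IN)
                if not any(S <= T for T in pext):           # not a subset of a found one
                    pext.append(S)
            return
        x = blanks[0]                                       # pick any BLANK argument
        nu = dict(mu)                                       # IN-TRANS (Definition 3)
        nu[x] = IN
        for y in plus[x]:
            nu[y] = OUT
        for z in minus[x]:
            if nu[z] != OUT:
                nu[z] = MUST_OUT
        find(nu)                                            # left child
        nu = dict(mu)                                       # UNDEC-TRANS (Definition 4)
        nu[x] = UNDEC
        find(nu)                                            # right child

    find({x: BLANK for x in A})
    return [set(S) for S in pext]
```

On the framework where a and b attack each other, both attack c, and c attacks d, the sketch yields the two preferred extensions {a, d} and {b, d}.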

1. Select any x with μ(x) = BLANK satisfying, for all y ∈ {x}−, the condition μ(y) ∈ {OUT, MUST_OUT}.
2. Otherwise select any x with μ(x) = BLANK such that |{x}+| is maximal.

Later in this section we will explain the reason behind the first part of this selection rule. As to the second part, the intuition is that this heuristic might accelerate reaching a goal state, that is, an admissible set. Recall that the set of arguments labelled IN is admissible if and only if all arguments in the framework are labelled IN, OUT or UNDEC. Thus, the goal state of the search might be reached faster the more we minimize the number of arguments with labels in {BLANK, MUST_OUT}, which we do by maximizing the number of arguments labelled OUT. Conversely, when the first part of the selection rule fails, one might instead pick an argument x for IN-TRANS such that the number of arguments that attack x is minimal. At first sight, such a selection seems sensible because it produces nearly the minimal number of arguments labelled MUST_OUT. However, recall that we reach a goal state (i.e. an admissible set) if and only if no argument is labelled BLANK or MUST_OUT, and thus minimizing the number of arguments labelled MUST_OUT will be unhelpful as long as the number of arguments labelled BLANK is not also minimized.

For the second enhancement to Algorithm 1, we exploit a pruning mechanism, originally used by [24], whose effect we improve here as we explain in Section 4.1. This detects a branch of the search tree that will eventually take us to a dead end, in the sense that further expansion of the search tree – while possible – is unproductive. In particular, the pruning mechanism says that if at any state of the search there exists an argument x with μ(x) = MUST_OUT and no argument y ∈ {x}− with the label BLANK, then proceeding further is fruitless and so the algorithm simply must backtrack.

For the third enhancement, we use a further pruning tactic: we skip applying an UNDEC-TRANS transition on an argument x with the BLANK label if and only if for each y ∈ {x}− the label of y belongs to {OUT, MUST_OUT}.
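The two-part selection rule can be sketched as follows; the helper name and the precomputed {x}+/{x}− adjacency maps are our assumptions for illustration.

```python
def select_blank(A, mu, plus, minus):
    """Part 1: prefer a BLANK argument whose attackers are all OUT/MUST_OUT
    (vacuously true for unattacked arguments).
    Part 2: otherwise take a BLANK argument with maximal |{x}+|."""
    blanks = [x for x in A if mu[x] == "BLANK"]
    for x in blanks:
        if all(mu[y] in ("OUT", "MUST_OUT") for y in minus[x]):
            return x
    return max(blanks, key=lambda x: len(plus[x]))
```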
The enhancement simply uses the following property: if an admissible set S would be constructed while the label of such an x is UNDEC, then S ∪ {x} is admissible and so applying UNDEC-TRANS(x) is unnecessary. At this point it is convenient to give the basis for the first part of our rule when selecting an argument x labelled BLANK, i.e. to choose, if available, an x ∈ A for which every y ∈ {x}− is labelled OUT or MUST_OUT. Here, we note that the earlier such an x is labelled IN, the greater the saving will be in terms of the part of the search tree pruned (keeping in mind the third enhancement).

Furthermore, we modify Algorithm 1 by two minor changes. For the first change, note that UNDEC-TRANS(x) only changes the label of x: consequently there is no need to fork to a new set of labels via UNDEC-TRANS, and so we just change the label of x to UNDEC within the current labelling. For the second change, we rewrite the IN-TRANS rule using its definition to make the algorithm self-contained. In total these lead to Algorithm 2, which reinforces Algorithm 1 by incorporating the three enhancements mentioned before alongside the two minor changes.

Algorithm 2: Improvement of Algorithm 1 that enumerates preferred extensions of an af H = (A, R).

1 μ : A → {IN, OUT, BLANK, MUST_OUT, UNDEC}; μ ← ∅;
2 foreach x ∈ A do μ ← μ ∪ {(x, BLANK)};
3 PEXT ← ∅;
4 call find-preferred-extensions(μ);
5 report PEXT is the set of all preferred extensions;
6 procedure find-preferred-extensions(μ) begin
7   while ∃y ∈ A: μ(y) = BLANK do
8     select y ∈ A with μ(y) = BLANK s.t. ∀z ∈ {y}− μ(z) ∈ {OUT, MUST_OUT},
9     otherwise select y ∈ A with μ(y) = BLANK s.t. ∀z ∈ A: μ(z) = BLANK, |{y}+| ≥ |{z}+|;
10    μ′ ← μ; μ′(y) ← IN;
11    foreach z ∈ {y}+ do μ′(z) ← OUT;
12    foreach z ∈ {y}− do
13      if μ′(z) ∈ {UNDEC, BLANK} then
14        μ′(z) ← MUST_OUT;
15        if ∄w: μ′(w) = BLANK and w ∈ {z}− then
16          μ(y) ← UNDEC; // note μ and not μ′
17          goto line 7;
18    call find-preferred-extensions(μ′);
19    if ∃z ∈ {y}−: μ(z) ∈ {BLANK, UNDEC} then
20      μ(y) ← UNDEC; // again μ and not μ′
21    else
22      μ ← μ′;
23  if ∄y ∈ A: μ(y) = MUST_OUT then
24    S ← {y ∈ A | μ(y) = IN};
25    if ∄T ∈ PEXT: S ⊆ T then PEXT ← PEXT ∪ {S};
26 end procedure

Fig. 3. How Algorithm 2 works on an example af.

Fig. 3 shows how Algorithm 2 computes the preferred extensions of the af of Fig. 1. We consider now the soundness and completeness of Algorithm 2.

Proposition 1. Let H = (A, R) be an af and PEXT be the set of subsets of A returned by Algorithm 2. Then Epreferred(H) = PEXT, i.e. the algorithm reports exactly the set of preferred extensions of H.

Proof. We first show that every S ∈ PEXT is admissible. Certainly every S ∈ PEXT is conflict free: assume that there are y, z ∈ S with (y, z) ∈ R. If y had been labelled IN before z then z would be labelled OUT by Algorithm 2 (lines 10 and 11). On the other hand, were z to be labelled IN prior to y then y would never be labelled IN (lines 10–16) while z is so labelled. Both cases contradict every x ∈ S being labelled IN (line 24).

To show that for each x ∈ S, x is acceptable w.r.t. S, suppose that for some y ∈ S there exists (z, y) ∈ R but no w ∈ S with (w, z) ∈ R. In this case Algorithm 2 will have modified the labelling of μ(y) = BLANK to μ′(y) ← IN (line 10). Subsequently (line 12), z is considered together with arguments w ∈ {z}− (lines 14–17) in the event that μ′(z) ∈ {UNDEC, BLANK}. To begin, suppose (when y is labelled IN) that μ′(z) ∉ {UNDEC, BLANK}. It cannot be the case that μ′(z) = IN for, as argued earlier, y would then have been labelled OUT (line 11). It follows, therefore, that μ′(z) ∈ {OUT, MUST_OUT} is the only possibility consistent with μ′(y) = IN, (z, y) ∈ R and μ′(z) ∉ {UNDEC, BLANK}. In the former case (μ′(z) = OUT), we must have z ∈ {w}+ and μ′(w) = IN (line 11), thereby contradicting the premise that such a w was unavailable. In the latter case (μ′(z) = MUST_OUT), we have (from line 23) a contradiction to S having been reported as a potentially admissible set in PEXT: S can only be added if there is no w ∈ A labelled MUST_OUT. This observation suffices to deal with the case that μ′(z) is in {UNDEC, BLANK}, for again such a z will be labelled MUST_OUT (line 14).

Our analysis of the preceding paragraph establishes that every S ∈ PEXT is admissible. We now show that each such set is, in fact, maximal, so completing the argument that PEXT ⊆ Epreferred(H). In order to prove maximality (w.r.t. ⊆) let us assume that some S ∈ PEXT is not maximal. From the actions of line 25 in Algorithm 2 this implies that there exists an admissible set S′ ⊃ S which has not been included in PEXT. Thus there is (at least one) argument y belonging to the set S′ \ S. Algorithm 2, however, firstly labels y IN (line 10) and only later UNDEC (lines 16 and 20), and therefore the set S′ would be discovered in advance of the set S and thereby (line 25) added to PEXT.


Finally, that Epreferred(H) ⊆ PEXT follows directly from the fact that Algorithm 2 examines all subsets of A by labelling every argument that is (initially) labelled BLANK first IN (see line 10) and afterwards UNDEC (see lines 16 and 20), thereby considering all subsets including (respectively excluding) y. □

4. Advantages of Algorithm 2 over earlier methods

We now review the manner in which our algorithm offers potential savings over some important previous (labelling-based) techniques.

4.1. The algorithm of Doutre and Mengin

In [24] Doutre and Mengin present an algorithm, which we shall subsequently refer to as the DM algorithm, to enumerate all preferred extensions. The method of [24] uses a total mapping λ : A → {IN, OUT, UNDEC}. This approach starts with every argument labelled UNDEC and then iteratively considers branches resulting via two transition rules. We can identify five differences between the DM algorithm and our approach.

DM1. The DM algorithm selects an argument labelled UNDEC for transitions by heuristics in which, should one rule fail to select an argument, another rule is applied, and so on. Three rules are given below:

R1. Choose x with λ(x) = UNDEC for which both:
  i. for all y ∈ {x}−, λ(y) ∉ {IN, UNDEC} and either (x, y) ∈ R or λ(z) = IN for some z ∈ {y}−;
  ii. for all y ∈ {x}+, λ(y) = IN.
R2. Choose x with λ(x) = UNDEC s.t. for all z ∈ {{x}−}− – i.e. attackers of attackers of x – λ(z) ∉ {IN, UNDEC}.
R3. Choose x with λ(x) = UNDEC s.t. there are arguments y and z satisfying λ(z) = IN, y ∈ {z}− ∩ {x}+ and ∀w ∈ {y}− \ {x}, λ(w) = OUT.

Comparing only these rules (leaving aside the other DM rules) against ours (which, we recall, choose x with μ(x) = BLANK on the basis of every attacker y of x having μ(y) ∈ {OUT, MUST_OUT} or, failing this, choose x for which |{x}+| is maximal), one can see that our heuristic rules are potentially computationally lighter than the DM rules.

DM2. In the DM algorithm the counterpart to our IN-TRANS transition rule operates as follows:

when λ(x) ← IN, λ(y) ← OUT for each y ∈ {x}−.

In our approach such attackers are labelled MUST_OUT. The MUST_OUT label allows us to streamline a search pruning mechanism. In particular, the DM algorithm stops exploring a branch further and backtracks if there is some y ∈ {x}− for which

λ(y) = OUT and λ(x) = IN and ∀z ∈ {y}− λ(z) = OUT,

a check which is performed after every transition. In contrast, Algorithm 2 backtracks as soon as an argument y is discovered for which

μ(y) = MUST_OUT and ∀z ∈ {y}− μ(z) ≠ BLANK.

This check is performed only when applying the IN-TRANS rule, which results in new MUST_OUT arguments. We observe that Algorithm 2 needs to check the condition of this pruning strategy less frequently than the DM algorithm. Moreover, its applicability is verified on average more efficiently than the corresponding test of the DM algorithm: searching for y with μ(y) = MUST_OUT takes at worst |A| steps, while searching for x and y with λ(y) = OUT, λ(x) = IN, y ∈ {x}− (potentially) requires |R| steps. Typically |R| > |A|.

DM3. The DM counterpart of UNDEC-TRANS labels the respective argument OUT instead of UNDEC. To appreciate the benefit of UNDEC-TRANS consider the subsequent stages. Once the DM algorithm finds an admissible set the labels of all arguments are either IN or OUT, and thus one cannot tell which of the OUT arguments attack (or are attacked by) an argument labelled IN. Compared with our algorithm, an admissible set is reported if and only if all arguments are labelled IN, OUT or UNDEC. In this way, one can easily see that arguments labelled OUT attack (or are attacked by) an argument labelled IN, while those labelled UNDEC are excluded from the respective admissible set on the grounds that they might be indefensible by the arguments labelled IN.

DM4. In order to ensure maximality of the reported preferred extensions the DM algorithm carries out the following test:

∀T ⊆ {y: λ(y) = OUT}, test that T ∪ {x: λ(x) = IN} is not admissible.

In contrast, the approach of Algorithm 2 includes a subset S in PEXT (the preferred extensions accumulated so far) if and only if a strict superset of S has not already been included in PEXT.


DM5. Finally, we stress that our algorithm uses a new pruning mechanism that skips expanding UNDEC-TRANS on arguments that are attacked exclusively by arguments whose labels are OUT or MUST_OUT.

4.2. The algorithm of Modgil and Caminada

In [45] Modgil and Caminada present an algorithm (MC) to enumerate all preferred extensions. In common with the DM algorithm discussed above, this uses a total mapping λ : A → {IN, OUT, UNDEC}. An important concept in the MC approach is its introduction of so-called "illegally" and "super-illegally" labelled arguments. We note that these are not labels per se but rather refinements of the interpretation of the labels against which they are considered. We show six differences between the MC algorithm and Algorithm 2.

MC1. The MC approach starts with all arguments labelled IN while our approach starts with all arguments BLANK. Our contention (supported by the empirical studies presented in the next section) is that this is more than a simple "stylistic" issue. In particular, the choice of initial labelling may have a dramatic effect on overall average performance: it determines the range of applicable transition rules. Transitions influence two efficiency aspects: the computational cost of applying the transition itself, and the number of applicable transitions at a time, i.e. the resulting number of branches. We examine these issues further in the subsequent sections.

MC2. In applying transitions, MC selects an argument x with λ(x) = IN that is attacked by an argument y that is "legally" labelled with one of {IN, UNDEC}: in the terminology of [45] such x are dubbed super-illegally IN. An argument x is legally IN if and only if every y ∈ {x}− has λ(y) = OUT. Similarly, x with λ(x) = UNDEC is legally UNDEC if and only if every y ∈ {x}− has λ(y) ≠ IN and at least one such y has λ(y) = UNDEC. If there are no super-illegally IN arguments then the MC algorithm picks an illegally IN argument, i.e. an argument attacked by an argument labelled IN (or UNDEC). In contrast, Algorithm 2 selects an argument x labelled BLANK on the basis we have discussed earlier. Therefore, the selection process of MC may take up to |R|² steps while that of Algorithm 2 requires no more than |R|.

MC3. The MC approach defines an argument x with λ(x) = OUT to be illegally OUT when there is no y ∈ {x}− for which λ(y) = IN. As a result there is a transition rule in MC which has no counterpart within the transition rules of Algorithm 2. This transition identifies an illegally IN argument x and changes its label to OUT. A consequence of such changes, however, is that arguments y ∈ {x}+ may become illegally OUT and so have their labels changed to UNDEC. Hence, in MC the transitions need to process the attackers of a set of OUT arguments. This set not only contains the argument, y say, relabelled as OUT but also the arguments labelled OUT which are attacked by y. In comparison, the two transition rules of Algorithm 2 typically require fewer computations: UNDEC-TRANS only processes one argument while IN-TRANS processes the attackers of the argument x relabelled IN together with those attacked by x.

MC4. During its execution, MC might find, at any stage, several illegally IN arguments. Each such argument serves as the basis for applying a transition rule. Now consider the case where every argument attacks all of the other arguments: this gives rise to |A| illegally IN arguments (from the initial labelling, since every argument is attacked by an argument labelled IN) and thus there are |A| transitions. After processing any one of these, |A| − 1 illegally IN arguments remain, and so forth. Thus, MC could involve exploring |A|! transitions. Algorithm 2, however, (and also DM) requires at most 2^{|A|} transition possibilities since each branch generates at most two successors. We note that n!
∼ 2 O (n log n) so that there is potentially a significant asymptotic difference in the size of the respective search space for MC in comparison with Algorithm 2 and DM. MC5. In order to ensure maximality (w.r.t. ⊆) conditions are met, MC eliminates from its candidate collection of preferred extensions, those admissible sets (already identified) which are subsets of the current set of arguments labelled IN: such an operation being carried out once no illegally IN argument remains in the labelling. As argued when comparing Algorithm 2 with DM, the former reports a newly discovered admissible set as maximal if the set of IN arguments is not a subset of any member in the preferred extensions found so far, i.e. those in the set of subsets PEXT. In short, Algorithm 2 tests candidates against (known) preferred extensions whereas MC tests candidates against admissible sets. Typically (on the basis that there are at least as many admissible sets in ( A , R ) as there are preferred extensions) one would anticipate the decision whether or not to add a candidate S to those already identified to be computationally less onerous. MC6. Finally, MC incorporates a pruning mechanism which is different from the one used in Algorithm 2. In MC the search process stops and backtracks if the set of IN arguments is a subset of a previously identified admissible set. As would appear to be supported by the experimental evaluation presented in Section 4.3, the pruning approach applied in Algorithm 2 is more powerful. 4.3. Empirical evaluation of Algorithm 2 All algorithms, new and previous ones, were implemented in C++. We ran all experiments on a Fedora (release 13) based machine with 4 processors (Intel core i5-750 2.67 GHz) and 16 GB of memory. To compare the algorithms we tracked the average elapsed time in seconds. The elapsed time was obtained by using the time command of Linux. For all trials, we set a time limit of 60 seconds for every execution.
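The random instances used in these trials (and described in the caption of Fig. 4) follow a directed Erdős–Rényi-style model: each ordered pair of arguments becomes an attack with probability p. A sketch of such a generator (ours, not the paper's C++ harness; whether the original generator allowed self-attacks (x, x) is not stated, and we include them here):

```python
import random

def random_af(n, p, seed=None):
    """Arguments 0..n-1; each ordered pair (x, y) is an attack with
    probability p, matching the generation model described in Fig. 4."""
    rng = random.Random(seed)
    A = list(range(n))
    R = {(x, y) for x in A for y in A if rng.random() < p}
    return A, R
```

With p = 1 every ordered pair is an attack; with p = 0 the attack relation is empty.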


Fig. 4. Enumerating all preferred extensions of 1000 afs with | A | = 25. For every p ∈ {0.01, 0.02, 0.03, . . . , 1} we tracked the average elapsed time for 10 instances generated randomly with a probability p (i.e. the probability that x attacks y for any x, y ∈ A).

Fig. 5. An argumentation framework.

Fig. 4 shows the behavior of Algorithm 2 against the existing algorithms of [24,45] and dynPARTIX, an implemented system based on a dynamic programming algorithm [32]. Given an af, dynPARTIX first computes a tree decomposition of the af and then enumerates the extensions over that decomposition. In [32] it was shown that dynPARTIX is fixed-parameter tractable: its time complexity depends on the treewidth of the given af while being linear in the size of the af.1 Returning to Fig. 4, out of 1000 runs the DM algorithm encountered 220 timeouts, dynPARTIX encountered 149 timeouts, and MC encountered 899 timeouts, while Algorithm 2 solved all instances within 0.04 s. For those trials that exceeded the time limit we recorded 60 s as the elapsed time.

5. Algorithms for deciding skeptical and credulous acceptance

In deciding acceptance, it might be desirable to produce some kind of proof (i.e. explanation) as to why an argument is credulously accepted. Thus, we might say that a credulous proof of a given argument is made up of an admissible set containing the argument in question. In order to define more rigorously what constitutes a proof of credulous acceptance we start by recalling some useful terminology. We say that an argument x is reachable from an argument y if and only if there is a directed path from y to x. For example, consider the af depicted in Fig. 5 where A = {b, c, d, e, f, g} and R = {(b, c), (b, e), (c, d), (d, e), (e, f), (f, g), (g, f)}. Here f is reachable from c through the directed path (c, d), (d, e), (e, f) while c is not reachable from f.

Definition 5 (Credulous proof). Let (A, R) be an af and S ⊆ A be an admissible set containing x s.t. for every z ∈ S, x is reachable from z. Then S is a credulous proof for x.

Algorithm 3 determines a credulous proof of an argument (should such exist).
This algorithm is a modification of Algorithm 2 whereby instead of finding all preferred extensions we try to find only an admissible set containing the argument in question. In summary, with respect to deciding the credulous acceptability of an argument, p say, Algorithm 3 makes use of six labels: PRO (short for proponent), OPP (short for opponent), UNDEC, OUT, MUST_OUT and BLANK. An argument x is labelled PRO to indicate that x might be in an admissible set and p is reachable from x. An argument y is labelled OUT if and only if y is attacked by a PRO argument. The MUST_OUT label identifies arguments that attack PRO arguments. An argument y is labelled OPP if and only if y is attacked by a PRO argument and y attacks a PRO argument. An argument y is labelled UNDEC to signal that y cannot be in an admissible set with the current PRO arguments. The BLANK label is the default label for all arguments. The precise usage of these labels is defined in Algorithm 3. The basic notion of Algorithm 3 is to change arguments’ labels iteratively according to the labelling scheme outlined earlier until there is no argument remaining that is labelled MUST_OUT. At this point, PRO arguments make up a credulous

1 Linear time is guaranteed via the famous "meta-theorem" of Courcelle, see e.g. [21,22,4].

32

S. Nofal et al. / Artificial Intelligence 207 (2014) 23–51

proof for p: the PRO arguments capture the admissible set containing p. Referring to the af in Fig. 5, {b, f} is a credulous proof for f; see Fig. 6, which demonstrates how Algorithm 3 works. We now establish the soundness and completeness of Algorithm 3.
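Before doing so, note that Definition 5 can be checked mechanically. The sketch below (Python; the function name is ours, not part of the paper's implementation) verifies the three ingredients of a credulous proof on the af of Fig. 5: conflict-freeness, admissibility, and reachability of x from every member of S.

```python
def is_credulous_proof(A, R, S, x):
    """Check Definition 5: S is an admissible set containing x such that
    x is reachable, via attack edges, from every member of S."""
    R = set(R)
    attackers = {a: {s for s, t in R if t == a} for a in A}
    succ = {a: {t for s, t in R if s == a} for a in A}
    if x not in S:
        return False
    # conflict-freeness: no attacks inside S
    if any((a, b) in R for a in S for b in S):
        return False
    # admissibility: every attacker of a member of S is attacked by S
    for a in S:
        for b in attackers[a]:
            if not any((d, b) in R for d in S):
                return False
    # x reachable from every z in S (trivially from x itself)
    def reaches_x(z):
        seen, stack = {z}, [z]
        while stack:
            for t in succ[stack.pop()]:
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return x in seen
    return all(reaches_x(z) for z in S)
```

On the af of Fig. 5 this confirms {b, f} as a credulous proof for f, while {f} alone fails (its attacker e is undefended).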

Algorithm 3: Constructing a credulous proof of x in an af H = (A, R).
1  μ : A → {PRO, OPP, OUT, BLANK, MUST_OUT, UNDEC}; μ ← ∅;
2  foreach y ∈ A do μ ← μ ∪ {(y, BLANK)}; μ(x) ← PRO;
3  foreach y ∈ {x}+ do μ(y) ← OUT;
4  foreach z ∈ {x}− do
5      if μ(z) = BLANK then
6          μ(z) ← MUST_OUT;
7          if ∄w ∈ {z}− : μ(w) = BLANK then
8              report x is not credulously accepted; exit;
9      else if μ(z) = OUT then
10         μ(z) ← OPP;
11 if is-accepted(μ) then report x is credulously proved by {y ∈ A: μ(y) = PRO};
12 else report x is not credulously acceptable;

13 procedure is-accepted(μ) begin
14 foreach y ∈ A: μ(y) = MUST_OUT do
15     while ∃z ∈ {y}− : μ(z) = BLANK do
16         select z ∈ {y}− with μ(z) = BLANK s.t. ∀w ∈ {z}− μ(w) ∈ {OPP, OUT, MUST_OUT}, otherwise select z ∈ {y}− with μ(z) = BLANK s.t.
17             ∀w ∈ {y}− with μ(w) = BLANK: |{z}+| ≥ |{w}+|;
18         μ′ ← μ;
19         μ′(z) ← PRO;
20         foreach u ∈ {z}+ do
21             if μ′(u) = MUST_OUT then μ′(u) ← OPP;
22             else if μ′(u) ≠ OPP then μ′(u) ← OUT;
23         foreach v ∈ {z}− do
24             if μ′(v) ∈ {UNDEC, BLANK} then
25                 μ′(v) ← MUST_OUT;
26                 if ∀w ∈ {v}− : μ′(w) ≠ BLANK then
27                     μ(z) ← UNDEC; goto line 15;
28             else if μ′(v) = OUT then μ′(v) ← OPP;
29         if is-accepted(μ′) then
30             μ ← μ′;
31             return true;
32         else
33             μ(z) ← UNDEC;
34     return false;
35 return true;
36 end procedure

Fig. 6. Deciding a credulous proof for the argument f by using Algorithm 3.
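The recursive structure of Algorithm 3 can be conveyed by a much-simplified Python sketch (ours, not the paper's implementation): it keeps only the PRO set explicitly, derives the OUT and MUST_OUT arguments from it, and omits the heuristic selection rule and the UNDEC bookkeeping.

```python
def credulously_accepted(A, R, x):
    """Depth-first search for an admissible set containing x, mirroring
    Algorithm 3's PRO/OUT/MUST_OUT view of a labelling."""
    R = set(R)
    att_of = {a: {s for s, t in R if t == a} for a in A}    # attackers of a
    attacked = {a: {t for s, t in R if s == a} for a in A}  # attacked by a
    def search(pro):
        out = set()
        for p in pro:
            out |= attacked[p]                 # OUT: attacked by PRO
        if pro & out:
            return False                       # PRO must be conflict-free
        pending = set()
        for p in pro:
            pending |= att_of[p]
        pending -= out                         # MUST_OUT: undefended attackers
        if not pending:
            return True                        # PRO is admissible
        y = min(pending)                       # defend against some attacker y
        for z in sorted(att_of[y] - pro - out - pending):
            if search(pro | {z}):              # recurse with z labelled PRO
                return True
        return False
    return search({x})
```

On the af of Fig. 5 this reports f and d as credulously accepted and c as not (c's only attacker b is itself unattacked).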


Algorithm 4: Deciding skeptical acceptance of an argument x in an af H = (A, R).
1  μ : A → {IN, OUT, BLANK, MUST_OUT, UNDEC}; μ ← ∅;
2  foreach y ∈ A do μ ← μ ∪ {(y, BLANK)}; PEXT ← ∅;
3  if {x}− = ∅ then
4      report x is skeptically accepted; exit;
5  foreach y ∈ {x}− do
6      invoke Algorithm 3 on (H, y);
7      if Algorithm 3 reports y is credulously accepted then
8          report x is not skeptically accepted; exit;
9  call decide-skeptical-acceptance(H, μ, x);
10 if PEXT ≠ ∅ then
11     report x is skeptically accepted;

12 procedure decide-skeptical-acceptance(H, μ, x) begin
13 while ∃y : μ(y) = BLANK do
14     select y ∈ A with μ(y) = BLANK s.t. ∀z ∈ {y}− μ(z) ∈ {OUT, MUST_OUT}, otherwise select y ∈ A with μ(y) = BLANK s.t.
15         ∀z ∈ A with μ(z) = BLANK: |{y}+| ≥ |{z}+|;
16     μ′ ← μ; μ′(y) ← IN;
17     foreach z ∈ {y}+ do μ′(z) ← OUT;
18     foreach z ∈ {y}− do
19         if μ′(z) ∈ {UNDEC, BLANK} then
20             μ′(z) ← MUST_OUT;
21             if ∀w ∈ {z}− : μ′(w) ≠ BLANK then
22                 μ(y) ← UNDEC; goto line 13;
23     call decide-skeptical-acceptance(H, μ′, x);
24     if ∃z ∈ {y}− : μ(z) ∈ {UNDEC, BLANK} then
25         μ(y) ← UNDEC;
26     else
27         μ ← μ′;
28 if ∀y ∈ A: μ(y) ≠ MUST_OUT then
29     S ← {y ∈ A | μ(y) = IN};
30     if ∄T ∈ PEXT: S ⊆ T then
31         PEXT ← PEXT ∪ {S};
32         if μ(x) ≠ IN then
33             PEXT ← ∅;
34             report x is not skeptically accepted; terminate and exit;
35 end procedure
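As a brute-force cross-check of what Algorithm 4 decides (ours, feasible only for small afs), one can enumerate all admissible sets, keep the ⊆-maximal ones, and test membership of x in each:

```python
from itertools import combinations

def preferred_extensions(A, R):
    """All ⊆-maximal admissible sets, by exhaustive subset enumeration."""
    R = set(R)
    def admissible(S):
        if any((a, b) in R for a in S for b in S):   # conflict-free
            return False
        attackers = {b for (b, t) in R if t in S}
        return all(any((d, b) in R for d in S) for b in attackers)
    adm = [set(c) for r in range(len(A) + 1)
           for c in combinations(sorted(A), r) if admissible(set(c))]
    return [S for S in adm if not any(S < T for T in adm)]

def skeptically_accepted(A, R, x):
    return all(x in E for E in preferred_extensions(A, R))
```

On the af of Fig. 5 the preferred extensions are {b, d, f} and {b, d, g}, so b and d are skeptically accepted while f is not, matching the worked example of Fig. 7.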

Proposition 2. Let H = (A, R) be an af and x ∈ A. Then:

1. If Algorithm 3 reports true given (H, x) then the set T = {y: μ(y) = PRO} is admissible and x ∈ T.
2. If x is credulously accepted then Algorithm 3 reports a credulous proof, i.e. a set T = {y: μ(y) = PRO} with x ∈ T.
Proof. To prove both parts, we need to show that {y: μ(y) = PRO}, which we denote by S, is admissible. To establish that S is conflict free, assume that there exist z, y ∈ S with (z, y) ∈ R. If z is labelled PRO prior to y, then y would be labelled OUT; similarly, if y were labelled PRO before z then z would be labelled OPP. It follows that μ(y) = μ(z) = PRO implies neither (y, z) nor (z, y) can be in R. To show that every y ∈ S is acceptable w.r.t. S, suppose we have some y ∈ S and z ∈ {y}− such that no w ∈ S has (w, z) ∈ R. Consequently, z has to be labelled MUST_OUT (lines 6 and 25). This, however, contradicts S being reported as a credulous proof for x via lines 14 and 35: this report is given if and only if there is no w ∈ A for which μ(w) = MUST_OUT. □

Regarding the decision problem of skeptical acceptance, recall that an argument x is skeptically accepted if and only if every preferred extension contains x. We modified Algorithm 2 to obtain Algorithm 4, which decides whether an argument x is skeptically accepted. Firstly, Algorithm 4 looks for a credulously accepted argument that attacks x. If there exists such an attacker then Algorithm 4 concludes that x is not skeptically accepted. Otherwise, Algorithm 4 searches for a preferred extension, S, such that x ∉ S ∪ S+. If such an extension is found then x is not skeptically accepted; otherwise x is skeptically accepted. Algorithm 4 is largely self-explanatory; see, however, Fig. 7, which shows how the algorithm confirms the skeptically accepted status of the argument d in the af depicted in Fig. 5. Note that Fig. 7 grows from left to right. The


Fig. 7. Deciding the skeptical acceptance of the argument d by using Algorithm 4.

soundness/completeness proof of Algorithm 4 would be in two parts. The first part, which follows directly, shows that a given argument is not skeptically accepted if it is attacked by a credulously accepted argument; the second part would be identical to the proof of Algorithm 2.

6. The advantage of Algorithms 3 and 4

As we have already done with respect to algorithms for enumerating extensions, we now compare the algorithms introduced in the preceding section with previous methods, specifically those of Cayrol et al. [19], Thang et al. [56], and Verheij [58].

6.1. The algorithms of Cayrol et al.

We start by highlighting the main differences between Algorithm 3 and the algorithm of Cayrol et al. [19] (abbreviated CAYCred) for the decision problem of credulous acceptance. The approach adopted in CAYCred makes use of three labels: PRO, OPP and OUT. Our use of PRO and OPP is similar to that of CAYCred; there are, however, some differences regarding the use of the label OUT. CAYCred labels an argument x OUT on three occasions: if x is attacked by an argument labelled PRO; if x attacks an argument labelled PRO; and if x is incompatible as a member of an admissible set containing all of the arguments currently labelled PRO. As we demonstrate, it is more efficient to distinguish these cases via different labels. This, in effect, is what happens in Algorithm 3: the OUT label is reserved for arguments attacked by PRO labelled arguments; MUST_OUT for those attacking PRO labelled arguments; and UNDEC for the third case.

To see the potential gains from our labelling scheme consider the following. The method CAYCred stops exploring further and backtracks if there exists an x ∈ A which attacks an argument labelled PRO and every attacker of x is labelled OUT. This halting condition is checked every time an argument is labelled PRO. Conversely, Algorithm 3 backtracks if an argument labelled MUST_OUT is not attacked by an argument labelled BLANK: this condition being checked every time an argument is labelled MUST_OUT. Searching for an argument attacking a PRO labelled argument in CAYCred runs in the order of |R| while looking for a MUST_OUT labelled argument in Algorithm 3 runs in the order of |A|. As we have previously noted, usually |R| > |A|.

A further, significant difference is that CAYCred selects an argument to be PRO arbitrarily while Algorithm 3 uses a heuristic rule to choose the BLANK argument with which to expand the search structure. So, the expected running time is improved: empirical confirmation is provided by the experiments reported in Section 6.4. As to the UNDEC label, the objective is to identify, and then to avoid, those arguments that previously failed to be in an admissible set with the current PRO arguments. Indeed, the role of the UNDEC label is captured in CAYCred through the OUT label.

Regarding the decision problem of skeptical acceptance, the idea of the algorithm from Cayrol et al. [19] (CAYSkep for short) is based on an argument x not being skeptically accepted if at least one of two conditions holds:


CAS1 x is attacked by a credulously accepted argument z (with the status of z decided by using CAYCred).
CAS2 There is an admissible set that does not contain x and that cannot be expanded into one that does contain it.

Otherwise, x is skeptically accepted provided that there exists an admissible set that contains x (notice that, given the admissibility of the empty set and CAS2, once CAS1 and CAS2 have reported negatively it suffices to find just one admissible set containing x to ensure that x is skeptically accepted). Regarding CAS1 we have already remarked upon the distinctions between CAYCred and Algorithm 3. With respect to CAS2, CAYSkep uses two labels, IN and OUT: an argument y is labelled IN to indicate that y might be in an admissible set, and we have, in our discussion of CAYCred, described its use of the OUT label. To check whether or not S ⊆ A is an admissible set that can be expanded into one that contains the argument in question, CAYSkep verifies that S is maximally admissible in {y ∈ A | λ(y) ∈ {IN, OUT}}. Such verification is potentially expensive, and it is avoided by Algorithm 4. Recall that Algorithm 4 decides that an admissible set is maximal if and only if the set is not a subset of any previously decided preferred extension.

6.2. The algorithms of Thang et al.

The algorithm of Thang et al. [56] (abbreviated THCred) for the decision problem of credulous acceptance is based on classifying arguments into four sets: P, O, SP and SO. As an initial step, the argument in question is added to SP and P while O and SO are empty. Next, the following three operations are applied iteratively s.t. in every iteration one or more tuples of the form (P, O, SP, SO) might be generated.

ThOp1 If there is some x ∈ P s.t. SP ∩ {x}− = ∅ then x is removed from P and every y ∈ {x}− \ SO is added to O.
ThOp2 An argument x is added to SP and P if and only if {x}+ ∩ O ≠ ∅ and x ∉ O ∪ SO.
ThOp3 An argument y is moved from O to SO if {y}− ∩ SP ≠ ∅.

Hence, at any time more than one tuple (P, O, SP, SO) may be relevant. This reflects THCred's exploration of the admissibility of different subsets of A. THCred reports that the argument in question is credulously accepted if and only if there exists a tuple (P, O, SP, SO) s.t. P and O are both empty; otherwise, the argument is not credulously accepted. To compare with Algorithm 3, we analyze three issues.

ThCr1 The algorithm might reconsider an argument x to be added to SP and P although x may already have been identified as failing to be in an admissible set with the same, current arguments in SP. In contrast, Algorithm 3 avoids this possibility through its use of the UNDEC label.
ThCr2 THCred might add arguments to O despite these being attacked by arguments in SP. This, in principle, could be costly as THCred might unnecessarily test further arguments to be added to SP and P as counterarguments to those newly added to O. Again, Algorithm 3 avoids this situation through its use of the OUT label, so that as soon as an argument is labelled PRO, every argument that it attacks is labelled OUT. Recall that Algorithm 3 explores arguments labelled MUST_OUT, whereas those labelled OUT are disregarded due to their being attacked by arguments labelled PRO.
ThCr3 THCred does not exploit any heuristics or pruning machinery to accelerate the search process.

Regarding skeptical acceptance, the algorithm of Thang et al. [56] (THSkep for short) relies for its correctness on the concept of a complete base (for x). A base B for x is a set of admissible sets B = {S1, S2, ..., Sn}, each of which contains x, such that for every preferred extension E containing x there is S ∈ B with S ⊆ E. A base B is complete if for every preferred extension E there is some S ∈ B for which S ⊆ E. The process of verifying skeptical acceptance of x is shown to be equivalent to identifying a complete base for x. Thus the skeptical proof of x consists of such a base, and the efficiency of THSkep is determined not only by the performance of THCred (since THSkep depends on THCred in searching for admissible sets) but also by the efficiency with which a candidate collection can be validated as a complete base. This approach is not that adopted within Algorithm 4.

6.3. The algorithm of Verheij

Verheij [58] presented an algorithm for the credulous acceptance problem, which we denote VERCred. This classifies arguments into two sets J and D. Initially, J contains the argument in question while D is empty. Then, two functions are repeatedly executed on every pair (J, D). The first function is

ExtendByAttack((J, D)) ≡ {(J, D′) | D′ = D ∪ J−}

The second function, ExtendByDefence((J, D)), is given by

{(J′, D) | J′ is a conflict-free, minimal superset of J s.t. ∀y ∈ D ∃x ∈ J′ ∩ {y}−}
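Under the reading above, VERCred can be sketched as a breadth-first exploration of (J, D) pairs (a naive Python rendering of ours; Verheij's own formulation differs in detail):

```python
from itertools import combinations

def vercred(A, R, x):
    """Breadth-first search over (J, D) pairs: J is the candidate defence
    set, D the arguments it must counter-attack. A pair on which the two
    extension functions produce nothing new witnesses credulous acceptance."""
    R = set(R)
    attackers = {a: {s for s, t in R if t == a} for a in A}
    def conflict_free(S):
        return not any((a, b) in R for a in S for b in S)
    queue, seen = [(frozenset({x}), frozenset())], set()
    while queue:
        J, D = queue.pop(0)
        if (J, D) in seen:
            continue
        seen.add((J, D))
        D2 = frozenset(D | {a for j in J for a in attackers[j]})  # ExtendByAttack
        if D2 != D:
            queue.append((J, D2))
            continue
        if conflict_free(J) and all(J & attackers[y] for y in D):
            return True            # fixpoint: J already counter-attacks all of D
        # ExtendByDefence: minimal conflict-free supersets of J attacking all of D
        rest = sorted(set(A) - J)
        cands = [frozenset(J | set(c)) for r in range(1, len(rest) + 1)
                 for c in combinations(rest, r)]
        cands = [Jp for Jp in cands
                 if conflict_free(Jp) and all(Jp & attackers[y] for y in D)]
        queue.extend(Jp_D for Jp_D in
                     ((Jp, D) for Jp in cands if not any(o < Jp for o in cands)))
    return False
```

On the af of Fig. 5 this accepts f (via the pair ({f, b}, {e, g})) and rejects c, whose unattacked attacker b can never be counter-attacked.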


Fig. 8. Deciding credulous acceptance for all arguments in 1000 afs with | A | = 100. For every p ∈ {0.01, 0.02, 0.03, . . . , 1} we tracked the average elapsed time for 10 instances generated randomly with a probability p.

Next, if there exists a pair (J, D) that both functions map to itself, i.e. (J′, D′) = (J, D), then the argument in question is credulously proved by (J, D). If at any stage no new pair (J, D) is produced from applying the two functions to all current pairs, then the argument is not accepted. To evaluate the performance of Verheij's approach in contrast to Algorithm 3 we consider five efficiency issues.

V1. The cost of finding a minimal defence set J′ against the arguments in D (see the definition of ExtendByDefence earlier). This is not a factor in Algorithm 3.
V2. VERCred might extend D by adding superfluous arguments already attacked by arguments in J and, in consequence, this might reduce the efficiency of computing J′: more arguments in D could lead to more possible defence sets, so finding a minimal defence set J′ becomes harder. In Algorithm 3 this situation is handled by using the OUT label, designating arguments that are attacked by PRO arguments, so that no further action is needed.
V3. VERCred might extend J by adding arguments that had already failed to form an admissible set with the current arguments in J. As noted before, Algorithm 3's use of the UNDEC label prevents this.
V4. VERCred expands the search space on a breadth-first basis, and so the space complexity of VERCred is prohibitive: VERCred must store more than one node (i.e. pair (J, D)) at the same time. As Algorithm 3 is a depth-first search procedure it needs to store only one labelling at a time, which means its space complexity is linear in |A|.
V5. Again, VERCred does not employ heuristics or pruning techniques to enhance the search progression.

6.4. Empirical evaluation: Algorithms 3 and 4

Regarding the credulous acceptance problem, Fig. 8 shows the behavior of Algorithm 3 against the existing algorithms of [19,56,58]. Out of 1000 runs, the VERCred algorithm [58] encountered 987 timeouts and THCred [56] encountered 980 timeouts. All these timeouts are plotted within the figure as 60 s, the time limit set for the experiments. Regarding the skeptical acceptance problem, Fig. 9 shows the behavior of Algorithm 4 against the existing algorithms of [19,56]. Referring to Fig. 9, the THSkep algorithm [56] could not solve any af within the 60 s time limit; its timeouts are likewise presented within the figure as 60 s.

7. Labelling algorithms for value based argument frameworks

To consider mechanisms with which to model persuasive argument in practical reasoning, Bench-Capon [8] extends Dung's framework to accommodate the notion of arguments promoting "social values". Bench-Capon's value based argumentation framework (vaf) is a 4-tuple (A, R, V, η) where A is a set of arguments, R ⊂ A × A is a binary relation, V is a non-empty set of social values, and η : A → V maps (abstract) arguments in A to values in V. A total ordering α of V is referred to as a specific audience. Given a specific audience α, we refer to (v_i, v_j) ∈ α as "v_i is preferred to v_j" or "v_i ≻ v_j". We denote the set of all specific audiences by U. Audiences offer a means to distinguish attacks (x, y) ∈ R which do not succeed as a consequence of expressed value priorities. Formally, we say x defeats y w.r.t. the audience α if and only if (x, y) ∈ R and (η(x), η(y)) ∈ α. An argument x is acceptable for an audience α w.r.t. S ⊆ A if and only if for every y that defeats x (w.r.t. α) there is some z ∈ S that defeats y w.r.t. α.


Fig. 9. Deciding skeptical acceptance for all arguments in 1000 afs with | A | = 30. For every p ∈ {0.01, 0.02, 0.03, . . . , 1} we tracked the average elapsed time for 10 instances generated randomly with a probability p.

Fig. 10. A value based argument framework.

A set S ⊆ A is conflict free for the audience α if and only if for all x, y ∈ S it is not the case that x defeats y w.r.t. α. Similarly, S is admissible under α if and only if it is conflict free under α and every x ∈ S is acceptable for α w.r.t. S. The α-preferred extensions are the maximal (w.r.t. ⊆) sets admissible under α. An argument x is objectively accepted if and only if for every α, x is in every α-preferred extension. On the other hand, x is subjectively accepted if and only if there is some α for which x is in an α-preferred extension.2

For instance, consider the vaf in Fig. 10 where A = {b, c, d}, R = {(b, c), (c, d), (d, c)}, V = {v1, v2} and η = {(b, v1), (c, v2), (d, v2)}. The nodes in Fig. 10 are labelled by argument/value identifiers. If v1 ≻ v2 then the (v1 ≻ v2)-preferred extension is {b, d}; otherwise the (v2 ≻ v1)-preferred extensions are {b, c} and {b, d}. Therefore, b is objectively accepted while c and d are subjectively accepted.

In this section we approach the problem of α-preferred extension enumeration over all specific audiences, while in Appendix A we offer algorithms that decide objective/subjective acceptance without requiring enumeration of all α-preferred extensions. Before treating this in depth, however, it is helpful to address some questions that have been raised regarding instantiating vafs in practical settings. Thus, Prakken [54,53] identifies some concerns with respect to preference-handling in the preference-based af (paf) formalism proposed by Amgoud and Cayrol [1]. It has even been claimed by Caminada and Wu [18] that there is a "consistency problem of value-based argumentation" [18, p. 64] requiring resolution. The basis for this claim is the conclusions that can, according to the authors, be drawn from the example presented as [18, Fig. 2, p. 62] by applying value-based semantics. Given that this example has often been raised as a potential drawback of reasoning via vafs, it is worth revisiting.
In fact, we believe that the example fails to offer a sound demonstration of "inconsistency in vaf reasoning". We now justify this assertion by examining the example presented in [18] in more detail. The system considered in [18, Fig. 2] is reproduced here in Fig. 11. We note that, for reasons which are expanded upon subsequently, we treat this example in purely abstract terms, rather than as arising from the specific scenario proposed in [18]. In this example, A = {A1, A2, ..., A9}, the attacks involving the pairs

{A7, A8}, {A7, A9}, {A8, A9}

are symmetric, and the remaining attacks in R are

{(A7, A4), (A8, A5), (A9, A6)}

2 The definition of “objective acceptability” used here differs from the original formulation of Bench-Capon. In [8] it is assumed that directed cycles of arguments involve at least two distinct values, so that the α -preferred extension is unique. We do not retain this assumption in our development.

38

S. Nofal et al. / Artificial Intelligence 207 (2014) 23–51

Fig. 11. The value-based argument framework from [18].

Interpreted as a vaf, [18] ascribes the same value to each of A4, A5 and A6. The values promoted by the remaining arguments are all distinct (so that |V| = 7). Now the base (i.e. value-free) af is described as resulting from the following set of rules, in which → defines a "strict" rule and ; defines a "defeasible" rule.3

A1: → p1
A2: → p2
A3: → p3
A4: A1 ; p4
A5: A2 ; p5
A6: A3 ; p6
A7: A5, A6 → ¬p4
A8: A4, A6 → ¬p5
A9: A4, A5 → ¬p6

Let us first consider this framework simply in terms of the sub-structure ⟨A, R⟩, that is to say in terms of the standard Dung af semantics. As correctly stated in [18], the grounded extension is {A1, A2, A3} and there are exactly three preferred extensions, namely the sets

{A1, A2, A3, A5, A6, A7}, {A1, A2, A3, A4, A6, A8}, {A1, A2, A3, A4, A5, A9}

In terms of the rules represented by the arguments, these correspond to three distinct sets of conclusions, i.e.

{p1, p2, p3, p5, p6, ¬p4}, {p1, p2, p3, p4, p6, ¬p5}, {p1, p2, p3, p4, p5, ¬p6}

Within each, the collective conclusions are consistent. What happens if we now consider the influence of values and the semantics defined through specific audiences? According to [18], audiences which assert the primacy of V1 (associated with {A4, A5, A6}) all yield extensions which contain the arguments {A1, A2, A3, A4, A5, A6} irrespective of the ordering of the other six values in V. Furthermore, depending on the relative importance attached by audiences to the (distinct) values of A7, A8, and A9, this set {A1, A2, A3, A4, A5, A6} can be extended to:4

a. {A1, A2, A3, A4, A5, A6, A9} if V7 ≻ V6 ≻ V5 or V7 ≻ V5 ≻ V6;
b. {A1, A2, A3, A4, A5, A6, A8} if V6 ≻ V7 ≻ V5 or V6 ≻ V5 ≻ V7;
c. {A1, A2, A3, A4, A5, A6, A7} if V5 ≻ V6 ≻ V7 or V5 ≻ V7 ≻ V6.

Again, case (a) is a correct application of value-based reasoning. At this point in [18] it is claimed that the set of arguments {A1, A2, A3, A4, A5, A6, A9} supports the (inconsistent) set of conclusions {p1, p2, p3, p4, p5, p6, ¬p6} as a consequence of applying the rules governing A1 through A9. This claim is fallacious: it interprets the rules governing, say, A4 as they are originally expressed (describing conclusions that could be drawn from A4 in the classical Dung af model), but overlooks the crucial fact that, since the notion of specific audience now applies, the conclusions that can be drawn are affected. In short, the original rules require reformulation, since they do not reflect the consequences that result from expressing value preferences; the reformulation makes such consequences explicit. For example, focusing on A4: within the abstract framework this describes the defeasible rule A1 ; p4. In the value-based setting, however, A4 now describes:

3 We indicate defeasible rules via the relation ; rather than the potentially misleading ⇒ notation used in [18].
4 We note that [18, p. 62] treats only case (a).


A4: (A1 ; p4) ∨ (V1 ≻ V5)

That is to say: "The argument A4 states that, if it is accepted, then either p4 is a (defeasible) consequence of accepting A1 (A1 ; p4) OR the specific audience accepting A4 regards the value V1 as having greater importance than the value V5 (V1 ≻ V5)." Now suppose we continue this rewriting process for the remaining rules. We then obtain:

A1: → p1
A2: → p2
A3: → p3
A4: (A1 ; p4) ∨ (V1 ≻ V5)
A5: (A2 ; p5) ∨ (V1 ≻ V6)
A6: (A3 ; p6) ∨ (V1 ≻ V7)
A7: (A5, A6 → ¬p4) ∨ (V5 ≻ V6) ∨ (V5 ≻ V7)
A8: (A4, A6 → ¬p5) ∨ (V6 ≻ V5) ∨ (V6 ≻ V7)
A9: (A4, A5 → ¬p6) ∨ (V7 ≻ V6) ∨ (V7 ≻ V5)

For a more rigorous description, we ought to express the conclusions respecting possible value orderings as arguments in themselves. Such an approach is, of course, implicit within Modgil’s work on extended afs from [44] and has been more directly investigated in Modgil and Bench-Capon’s treatment of so-called Metalevel argumentation [46]. In [46] assertions that arise through expressing value preferences (such as those within this particular example), are, themselves, explicitly represented as nodes within the frameworks. Proceeding to consider what conclusions now follow from { A 1 , A 2 , A 3 , A 4 , A 5 , A 6 , A 9 } using the reformulated rules we obtain:





p1, p2, p3, p4 ∨ (V1 ≻ V5), p5 ∨ (V1 ≻ V6), p6 ∨ (V1 ≻ V7), ¬p6 ∨ (V7 ≻ V6) ∨ (V7 ≻ V5)
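The consistency claim that follows can be sanity-checked mechanically. The sketch below (Python, with hypothetical atom names such as "V7>V6" standing for V7 ≻ V6) brute-forces an assignment to p1..p6 under the value facts of audience (a), and contrasts the rewritten clauses with the naive reading that ignores value preferences:

```python
from itertools import product

# Value facts fixed by audience (a): V7 > V6 > V5; V1's ranking relative to
# the other values is left pessimistically false (a hypothetical choice).
VALUE_FACTS = {"V1>V5": False, "V1>V6": False, "V1>V7": False,
               "V7>V6": True, "V7>V5": True}

def satisfiable(clauses):
    """Brute-force check: is there a truth assignment to p1..p6 making
    every clause (a list of literals, '~' marks negation) true?"""
    for bits in product([False, True], repeat=6):
        assign = {f"p{i + 1}": bits[i] for i in range(6)}
        assign.update(VALUE_FACTS)
        if all(any(not assign[lit[1:]] if lit.startswith("~") else assign[lit]
                   for lit in clause)
               for clause in clauses):
            return True
    return False

rewritten = [["p1"], ["p2"], ["p3"],
             ["p4", "V1>V5"], ["p5", "V1>V6"], ["p6", "V1>V7"],
             ["~p6", "V7>V6", "V7>V5"]]
naive = [["p1"], ["p2"], ["p3"], ["p4"], ["p5"], ["p6"], ["~p6"]]

print(satisfiable(rewritten))  # True: the reformulated conclusions are consistent
print(satisfiable(naive))      # False: p6 and ~p6 clash
```

The last clause of the rewritten set is discharged by the value fact V7 ≻ V6, so no contradiction with p6 arises.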

This set of conclusions is consistent. We have expanded on this example in some detail for a number of reasons. Firstly, it should be noted that we are not claiming that vaf-based reasoning and semantics will always produce consistent conclusions; to our knowledge, this issue remains open. It is the case, however, that one oft-claimed demonstration of inconsistency (specifically [18]) is erroneous. In alleging that reasoning via vafs may raise doubts concerning whether “their collective conclusions will be consistent, or satisfy any other reasonable properties”, the example presented in [18] fails to take account not only of the interaction between topological structures and the rules by which they are generated, but also of the important fact that the structure of the rules themselves may require reformulation depending on the semantics germane to the framework. To summarize, if the ordering of values is a factor in the acceptability (or otherwise) of particular arguments, then the rule basis (or other logical foundation) from which the framework was derived must embody the use of values in its specification. Finally, we note that we are not dealing with some superficial dichotomy between “pure abstract argumentation” and the “logical contents of arguments”: we have considered, as noted earlier, an abstracted form of the example put forward in [18]. It is not difficult to see, however, that treating the literal scenario from [18] with respect to the required modifications to its rule base would, again, dispel the illusion of “inconsistent conclusions”.

We now present our methods for determining α-preferred extensions. Recall that a naive approach would enumerate all specific audiences, leading to |V|! running time. We develop a new approach that avoids forming all such audiences, leading to improved expected running time. Our approach is presented by Algorithms 5, 6, 7, and 8. Algorithm 5 builds total orders on V (that is, specific audiences) incrementally.
For this purpose, we define q : V → Z, a mapping from social values to integers.⁵ Every time a social value is mapped to an integer by q (line 10), Algorithm 5 might call (line 14) Algorithm 6 to attempt to label an argument x IN, reflecting the effect of the value order encoded in q. In doing so, Algorithm 6 may then call (line 6) Algorithm 7. Algorithm 7 checks whether an argument y ∈ {x}− labelled BLANK may be labelled OUT under q or not; that is to say, whether y is defeated w.r.t. the audience described by q. Thus, Algorithm 7 might call (line 5) Algorithm 6 to decide whether a BLANK-labelled attacker of y can be labelled IN or not. To avoid infinite recursion, Algorithms 6 and 7 employ W ⊆ A to hold processed arguments. In summary, every time q is extended, Algorithms 6 and 7 together determine IN/OUT labels on BLANK arguments to reflect this change in q. Once no eligible social value is left unmapped by q (line 8 of Algorithm 5), Algorithm 5 calls (line 17) Algorithm 8 to find the α-preferred extensions under the value order (α) encoded by q. Algorithm 8 is almost identical to Algorithm 1, with one exception related to the defeat notion of vafs: this requires us to define a transition rule – IN-TRANS-vafs – for Algorithm 8 in place of the IN-TRANS rule used by Algorithm 1.

5 We discuss the benefit of using q with more examples later in this section.
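To make the loop condition on line 8 of Algorithm 5 concrete, the following Python fragment is a hedged sketch with assumed data structures (q maps unranked values to infinity, mu is the labelling, eta maps arguments to values); it computes which values are still eligible for ranking:

```python
import math

def eligible_values(q, mu, eta):
    """Values not yet ranked by q that are still promoted by some
    BLANK argument (cf. line 8 of Algorithm 5); sketch only."""
    return [v for v, rank in q.items()
            if math.isinf(rank)
            and any(lab == "BLANK" and eta[x] == v for x, lab in mu.items())]

# Toy data: v2 is promoted only by an OUT argument, so it is never ranked.
q = {"v1": 1, "v2": math.inf, "v3": math.inf}
mu = {"a": "IN", "b": "OUT", "c": "BLANK"}
eta = {"a": "v1", "b": "v2", "c": "v3"}
print(eligible_values(q, mu, eta))  # ['v3']
```

Skipping values such as v2 here is exactly the source of the savings discussed later in this section.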


Algorithm 5: Enumerating all preferred extensions of a vaf H = (A, R, V, η).

1  μ : A → {IN, OUT, BLANK, MUST_OUT, UNDEC}; μ ← ∅;
2  foreach y ∈ A do μ ← μ ∪ {(y, BLANK)};
3  q : V → Z; q ← ∅;
4  foreach v ∈ V do q ← q ∪ {(v, ∞)};
5  i ← 1;
6  call find-preferred-extensions-vafs(μ, q, i);
7  procedure find-preferred-extensions-vafs(μ, q, i) begin
8    foreach v ∈ V : (q(v) = ∞) ∧ (∃x: μ(x) = BLANK ∧ η(x) = v) do
9      q′ ← q;
10     q′(v) ← i;
11     μ′ ← μ;
12     foreach z with μ′(z) = BLANK s.t. η(z) = v do
13       W ← ∅;
14       invoke Algorithm 6 on (H, μ′, z, q′, W);
15     call find-preferred-extensions-vafs(μ′, q′, i + 1);
16   if ∄v ∈ V : (q(v) = ∞) ∧ (∃x: μ(x) = BLANK ∧ η(x) = v) then
17     invoke Algorithm 8 on (H, μ, q);
18 end procedure

Algorithm 6: Labelling an argument x IN in a vaf H = (A, R, V, η) under some q : V → Z, given a labelling μ and W ⊆ A holding already processed arguments.

1  W ← W ∪ {x};
2  foreach y ∈ {x}− : q(η(y)) ≤ q(η(x)) do
3    if μ(y) = IN then
4      return false;
5    else
6      if y ∈ W ∧ μ(y) = BLANK then
7        return false;
8      if y ∉ W ∧ μ(y) = BLANK then
9        W′ ← W;
10       invoke Algorithm 7 on (H, μ, q, W′, y);
11       if Algorithm 7 returned false then
12         return false;
13 μ(x) ← IN;
14 foreach z ∈ {x}+ : q(η(x)) ≤ q(η(z)) do
15   μ(z) ← OUT;
16 return true;

Algorithm 7: Labelling an argument y OUT in a vaf H = (A, R, V, η) under some q : V → Z, given a labelling μ and W ⊆ A holding already processed arguments.

1  W ← W ∪ {y};
2  foreach s ∈ {y}− : q(η(s)) ≤ q(η(y)) do
3    if s ∉ W ∧ μ(s) = BLANK then
4      W′ ← W;
5      invoke Algorithm 6 on (H, μ, q, W′, s);
6      if Algorithm 6 returned true then
7        μ(y) ← OUT;
8        return true;
9  return false;

Definition 6 (IN-TRANS-vafs transition rule). Let (A, R, V, η) be a vaf, x ∈ A with μ(x) = BLANK, and q : V → Z. The IN-TRANS-vafs(x, q) transition rule is defined by the following ordered steps:

1. μ′ ← μ;
2. μ′(x) ← IN;
3. forall y ∈ {x}+ : q(η(x)) ≤ q(η(y)) do μ′(y) ← OUT;
4. forall z ∈ {x}− : μ′(z) ≠ OUT and q(η(z)) ≤ q(η(x)) do μ′(z) ← MUST_OUT;
5. return μ′.

Fig. 12 shows how the algorithms work on the framework of Fig. 10. A benefit of the mapping q : V → Z defined in Algorithm 5, we believe, is that building value orders on V incrementally through q improves the efficiency of computing α-preferred extensions. To see why, notice that q does not map the social values that are promoted only by OUT-labelled arguments, because these values are irrelevant in deciding the labels of the remaining BLANK arguments. Since the example of Fig. 12 does not

Algorithm 8: Enumerating α-preferred extensions of a vaf H = (A, R, V, η) under some q : V → Z, given a labelling μ.

1  PEXT ← ∅;
2  call find-α-preferred-extensions(μ);
3  report PEXT is the set of preferred extensions under the audience described by q;
4  procedure find-α-preferred-extensions(μ) begin
5    if ∀x: μ(x) ≠ BLANK then
6      if ∀x: μ(x) ≠ MUST_OUT then
7        S ← {y ∈ A | μ(y) = IN};
8        if ∄T ∈ PEXT : S ⊆ T then
9          PEXT ← PEXT ∪ {S};
10   else
11     select any x ∈ A with μ(x) = BLANK;
12     μ′ ← IN-TRANS-vafs(x, q);
13     call find-α-preferred-extensions(μ′);
14     μ′ ← UNDEC-TRANS(x);
15     call find-α-preferred-extensions(μ′);
16 end procedure
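For illustration, the IN-TRANS-vafs rule of Definition 6, used on line 12 of Algorithm 8, can be sketched in Python as follows. This is a hedged rendering under assumed data structures: attackers/attacked encode {·}− and {·}+, eta maps arguments to values, and a smaller q rank means a more preferred value.

```python
def in_trans_vafs(mu, x, attackers, attacked, eta, q):
    """Sketch of IN-TRANS-vafs (Definition 6): label x IN, label the
    arguments x defeats OUT, and mark x's undefeated defeaters MUST_OUT."""
    mu2 = dict(mu)
    mu2[x] = "IN"
    # step 3: targets whose value is not preferred to x's value become OUT
    for y in attacked.get(x, ()):
        if q[eta[x]] <= q[eta[y]]:
            mu2[y] = "OUT"
    # step 4: attackers that defeat x and are not yet OUT become MUST_OUT
    for z in attackers.get(x, ()):
        if mu2[z] != "OUT" and q[eta[z]] <= q[eta[x]]:
            mu2[z] = "MUST_OUT"
    return mu2

# Toy vaf: b attacks a; when a's value is preferred, b does not defeat a.
mu = {"a": "BLANK", "b": "BLANK"}
attackers = {"a": ["b"]}
attacked = {"b": ["a"]}
eta = {"a": "v1", "b": "v2"}
print(in_trans_vafs(mu, "a", attackers, attacked, eta, {"v1": 1, "v2": 2}))
print(in_trans_vafs(mu, "a", attackers, attacked, eta, {"v1": 2, "v2": 1}))
```

In the first call b stays BLANK (its attack fails against the preferred value v1); in the second it becomes MUST_OUT, so the branch succeeds only if b is later defeated.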

Fig. 12. How Algorithms 5, 6, 7 and 8 work on an example vaf.

show the gain of q, we present Fig. 13, which shows how the algorithms work on another vaf. In this example the algorithms decide the α-preferred extensions in four stages, corresponding to q as:

{(v1, 1), (v2, ∞), (v3, 2)},
{(v1, 2), (v2, 1), (v3, ∞)},
{(v1, 2), (v2, ∞), (v3, 1)},
{(v1, 3), (v2, 2), (v3, 1)}.

In contrast, working on a pre-computed set of all specific audiences enforces 6 total orders, which necessitates 6 stages. More specifically, referring to Fig. 13, the total orders v1 ≻ v2 ≻ v3 and v1 ≻ v3 ≻ v2 would produce the same α-preferred extensions: as v1 is the most preferred value, the argument c is OUT, and hence the relative position of v2 is unimportant. Thus, the value order v1 ≻ v2 ≻ v3 is not critical and can be ignored. This is exactly what our approach does: Algorithm 5 does not build a function q to embody v1 ≻ v2 ≻ v3. Similarly, Algorithm 5 does not develop a


Fig. 13. Showing the benefit of our approach by tracing Algorithms 5, 6, 7 and 8 on an example vaf.
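The kind of saving traced in Fig. 13 can be reproduced on a small, hypothetical vaf (not the framework of the figure): total value orders that induce the same defeat relation are redundant, so only the distinct ones need processing. A short Python check:

```python
from itertools import permutations

# Toy vaf: argument a (value v1) attacks b (value v2) and c (value v3).
eta = {"a": "v1", "b": "v2", "c": "v3"}
attacks = [("a", "b"), ("a", "c")]

def defeat_relation(order):
    """Attacks that succeed under a total order (most preferred first):
    an attack fails only if the target's value is strictly preferred."""
    rank = {v: i for i, v in enumerate(order)}
    return frozenset((x, y) for (x, y) in attacks
                     if rank[eta[y]] >= rank[eta[x]])

orders = list(permutations(["v1", "v2", "v3"]))
distinct = {defeat_relation(o) for o in orders}
print(len(orders), len(distinct))  # 6 total orders, only 4 distinct defeat relations
```

Here the two orders placing v1 first coincide (both attacks succeed), as do the two placing v1 last (both fail); an enumeration driven by q can therefore skip the redundant orders, mirroring the four stages instead of six.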

Table 1
The average number of total value orders processed in executing Algorithms 5, 6, 7 and 8.

|V|              7          8            9             10              11
Naive approach   5040.00    40,320.00    362,880.00    3,628,800.00    39,916,800.00
New approach     4093.10    31,103.40    181,176.50    887,016.90      7,038,751.30
function q to represent the value order v2 ≻ v3 ≻ v1, indicating that the order of v3 is not critical: v3 is associated only with OUT arguments (the argument d in this example).

To confirm the benefit of q experimentally, we generated instances of vafs randomly, setting attacks between arguments with a probability of 0.1, where each instance has 30 arguments. For each |V| ∈ {7, 8, 9, 10, 11} we report in Table 1 the average number of processed total value orders over an execution of 100 instances.

We turn now to check the soundness and completeness of our approach. To accomplish this, let us first define the set of critical total value orders, denoted by U′, which is a subset of the set of all specific audiences U. In fact, U′ is the set of total orders that Algorithm 5 investigates for enumerating preferred extensions, as we will establish later.

Definition 7 (Critical value orders). Let H = (A, R, V, η) be a vaf and U denote the set of all specific audiences over V. Then the set of critical total value orders, U′, is

U′ = U \ { α ∈ U | ∃(vi, vj) ∈ α : vi ≠ vj ∧ ∀x: η(x) = vi,
        ∄S1 ⊆ A: S1 is admissible under α with x ∈ S1 ∧
        ∃S2 ⊆ A: S2 is admissible under α ∧ ∃y ∈ S2 : y defeats x w.r.t. α }.

Next we verify that the set of all preferred extensions under the critical value orders U′ corresponds to the set of all preferred extensions under all specific audiences U.

Proposition 3. Let (A, R, V, η) be a vaf, let PREF be the set of all preferred extensions w.r.t. all specific audiences in U, and let PREF′ be the set of all preferred extensions w.r.t. the set of critical total value orders U′. Then PREF = PREF′.

Proof. Assume there exists T ∈ PREF: T ∉ PREF′. In this case

∃p1 ∈ U: T is a preferred extension under p1, and ∀p2 ∈ U′: T is not a preferred extension under p2.

By the definitions of U and U′,


∃(vi, vj) ∈ p1: vi ≠ vj ∧ ∀x ∈ A: η(x) = vi, ∄S1 ⊆ A: S1 is admissible under p1 with x ∈ S1 ∧ ∃S2 ⊆ A: S2 is admissible under p1 ∧ ∃y ∈ S2: y defeats x w.r.t. p1.

Thus,

there exists p2 ∈ U′ with

p2 = ( p1 \ { (vi, vj) ∈ p1 | vi ≠ vj ∧ ∀x ∈ A: η(x) = vi, ∄S1 ⊆ A: S1 is admissible under p1 with x ∈ S1 ∧ ∃S2 ⊆ A: S2 is admissible under p1 with some y ∈ S2: y defeats x w.r.t. p1 } )
     ∪ { (vk, vl) ∉ p1 | (vl, vk) ∈ p1 ∧ ∀x ∈ A: η(x) = vl, ∄S1 ⊆ A: S1 is admissible under p1 with x ∈ S1 ∧ ∃S2 ⊆ A: S2 is admissible under p1 with some y ∈ S2: y defeats x w.r.t. p1 },

and, for every T ∈ PREF, if T is a preferred extension under p1 then T is a preferred extension under p2. Contradiction.

□

Now we establish the soundness and completeness of Algorithm 5. Specifically, we show that Algorithm 5 develops a mapping q : V → Z for every critical total value order.

Proposition 4. Let (A, R, V, η) be a vaf and let U′ be the set of all critical total value orders. Then:

1. for every q : V → Z constructed by Algorithm 5, ∃p ∈ U′: ∀vi, vj ∈ V: if q(vi) ≤ q(vj) then (vi, vj) ∈ p;
2. ∀p ∈ U′, Algorithm 5 constructs q : V → Z such that ∀vi, vj ∈ V: if (vi, vj) ∈ p then q(vi) ≤ q(vj).

Proof. 1. Assume

∃q constructed by Algorithm 5: ∀p ∈ U′, ∃vi, vj ∈ V: q(vi) ≤ q(vj) ∧ (vi, vj) ∉ p.

By Algorithm 5, lines 12 and 14,

∃x ∈ A: η(x) = vi and μ(x) ∈ {IN, BLANK} w.r.t. q.

By Proposition 5 (which basically states the conditions under which an argument is labelled IN or left BLANK by Algorithm 6), and after simplification,

∃S1 ⊆ A: S1 is admissible under q with x ∈ S1, or ∄S2 ⊆ A: S2 is admissible under q with some y ∈ S2 ∩ {x}− s.t. q(η(y)) ≤ q(η(x)).

This contradicts the definition of U′ (Definition 7): recall that U′ contains total value orders including those which satisfy the previous implication.

2. Assume that

∃p ∈ U′: for every q : V → Z constructed by Algorithm 5, ∃vi, vj ∈ V: (vi, vj) ∈ p ∧ q(vi) > q(vj).

By the definition of U′,

∃x ∈ A: η(x) = vi, ∃S1 ⊆ A: S1 is admissible under p with x ∈ S1 or ∄S2 ⊆ A: S2 is admissible under p with some y ∈ S2: y defeats x w.r.t. p.

By Proposition 5, x will be labelled IN or it stays BLANK. By Algorithm 5 (lines 5, 8 and 10), there exists a constructed q with q(vi) = 1. Contradiction.

□

In the following we examine the soundness and completeness of Algorithms 6 and 7. In particular, we show that if Algorithm 6 labels an argument x IN w.r.t. some q : V → Z, then x is in every preferred extension under the audience corresponding to q. Similarly, we demonstrate that if Algorithms 6 and 7 label an argument x OUT w.r.t. some mapping q : V → Z, then x does not belong to any preferred extension under the audience represented by q.


Proposition 5. Let (A, R, V, η) be a vaf, μ(x) = BLANK and q : V → Z formed by Algorithm 5. Then:

1. μ(x) ← IN under q by Algorithm 6 if and only if
   ∃S1 ⊆ A: S1 is admissible under q with x ∈ S1, and ∄S2 ⊆ A: S2 is admissible under q with some y ∈ S2 ∩ {x}− s.t. q(η(y)) ≤ q(η(x)).

2. μ(x) ← OUT under q by Algorithm 6 (respectively Algorithm 7) if and only if
   ∄S1 ⊆ A: S1 is admissible under q with x ∈ S1, and ∃S2 ⊆ A: S2 is admissible under q with some y ∈ S2 ∩ {x}− s.t. q(η(y)) ≤ q(η(x)).

3. If neither (1) nor (2) holds, μ(x) = BLANK.

Proof. 1. Assume x is labelled IN under q by Algorithm 6 and

∄S1 ⊆ A: S1 is admissible under q with x ∈ S1, or ∃S2 ⊆ A: S2 is admissible under q with some y ∈ S2 ∩ {x}− s.t. q(η(y)) ≤ q(η(x)).

By Algorithm 6, lines 4, 7 and 12,

∀(y, x) ∈ R: q(η(y)) ≤ q(η(x)) ⟹ μ(y) = OUT.

Contradiction.

2. Assume x is labelled OUT under q by Algorithm 6 (respectively Algorithm 7) and

∃S1 ⊆ A: S1 is admissible under q with x ∈ S1, or ∄S2 ⊆ A: S2 is admissible under q with some y ∈ S2 ∩ {x}− s.t. q(η(y)) ≤ q(η(x)).

By Algorithm 6 line 15 (respectively Algorithm 7 line 7),

∃(y, x) ∈ R: q(η(y)) ≤ q(η(x)) ∧ μ(y) = IN.

Contradiction.

3. Immediate from (1) and (2).

□

Lastly, the following proposition asserts that Algorithm 8, which enumerates α-preferred extensions, is sound and complete.

Proposition 6. Let (A, R, V, η) be a vaf, q : V → Z, and let PEXT be the set of subsets returned by Algorithm 8 under q. Then:

1. ∀T ∈ PEXT, ∃S ⊆ A: S is a preferred extension under q ∧ S = T, and
2. ∀S ⊆ A, if S is a preferred extension under q then ∃T ∈ PEXT: S = T.

Proof. Follows from Proposition 1 and the similar structure of Algorithms 8 and 2.
□

8. Discussion

We presented a new, implemented algorithm for enumerating preferred extensions in Dung’s model of argumentation. We have shown that the new algorithm computes extensions faster than the existing algorithms presented in [24,45]. From an applications point of view, it stands to reason that, among other factors, the running-time performance of an agent powered by argument-based reasoning machinery is bounded by the efficiency of the implemented algorithms for the respective reasoning problems. In that sense, our main concern in this paper was to develop concrete, efficient algorithms for the decision problems related to the preferred semantics. Although the main focus of the paper was on preferred semantics, we believe that this work influences the development of algorithms for other argumentation semantics, where the notions of state transition, argument selection tactics and search-space pruning strategies are all relevant. For example, if we drop line 25 (i.e. the maximality check) from Algorithm 2 then we get a procedure for listing all admissible sets. Also, if we modify the condition of line 23 (in Algorithm 2) to

∄y ∈ A with μ(y) ∈ {MUST_OUT, UNDEC}

then we get a procedure for listing all stable extensions. We note, as further evidence that developing an algorithm for one argumentation semantics might lead to algorithms for other semantics, the algorithms introduced by


Caminada in [15,16] for finding semi-stable, respectively stage, extensions. Indeed, these two algorithms can be viewed as a reproduction of the algorithm of Modgil and Caminada [45] for preferred extension enumeration. As we stated earlier, argumentation semantics can be defined by using either the labelling notion or the extension notion; this paper, however, presents new algorithms that use labellings as an algorithmic vehicle rather than introducing new labelling-based semantics. In fact, Doutre and Mengin [24] developed their labelling-based algorithm without elaborating labelling-based semantics, whereas the algorithm of Modgil and Caminada [45] complies with the labelling-based semantics of Caminada and Gabbay [17]. As illustrated throughout the paper, the specification of our algorithms fits neatly into the labelling theory of Caminada and Gabbay. Likewise, we presented algorithms that decide the credulous and skeptical acceptance problems without explicitly enumerating all preferred extensions. An added feature of the developed algorithms is the production of proofs as to why an argument is accepted. We have shown, analytically and empirically, that our algorithms are more efficient than the existing algorithms presented in [19,56,58]. Some authors call algorithms that yield proofs “dialectical proof procedures”, referring to the fact that a proof of an accepted argument might be, informally speaking, defined by the arguments put forward during a dialog between two parties. In fact, argumentation semantics can be defined by using the dialog notion (see e.g. [36,59,30,43,13]). Hence, Cayrol et al. [19] describe dialogs under preferred semantics as a means for presenting their algorithms, Thang et al. [56] make use of so-called “dispute trees” to pave the way for introducing their algorithms, while Verheij [58] defines his algorithm by employing the notion of “labellings” rather than specifying formal dialogs.
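Returning to the earlier remark that dropping the maximality check yields admissible sets and changing the leaf condition yields stable extensions, the idea can be illustrated with a much-simplified, hypothetical Python sketch of a labelling search (not the paper's Algorithm 2): a mode switch varies only the leaf test and the maximality check.

```python
def enumerate_sets(A, attacks, mode="preferred"):
    """Toy labelling search: 'preferred' keeps maximal admissible sets,
    'admissible' drops the maximality check, 'stable' forbids UNDEC leaves."""
    attackers = {a: {s for (s, t) in attacks if t == a} for a in A}
    attacked = {a: {t for (s, t) in attacks if s == a} for a in A}
    results = []

    def search(mu):
        blank = [a for a in A if mu[a] == "BLANK"]
        if not blank:
            bad = ("MUST_OUT", "UNDEC") if mode == "stable" else ("MUST_OUT",)
            if all(mu[a] not in bad for a in A):
                S = frozenset(a for a in A if mu[a] == "IN")
                if mode == "preferred":
                    if not any(S < T for T in results):
                        results[:] = [T for T in results if not T < S]
                        results.append(S)
                elif S not in results:
                    results.append(S)
            return
        x = blank[0]
        # IN transition: x IN, its targets OUT, its other attackers MUST_OUT
        mu_in = dict(mu)
        mu_in[x] = "IN"
        for y in attacked[x]:
            mu_in[y] = "OUT"
        for z in attackers[x]:
            if mu_in[z] != "OUT":
                mu_in[z] = "MUST_OUT"
        search(mu_in)
        # UNDEC transition: x is not taken into the candidate extension
        mu_un = dict(mu)
        mu_un[x] = "UNDEC"
        search(mu_un)

    search({a: "BLANK" for a in A})
    return results

A = ["a", "b", "c"]
attacks = [("a", "b"), ("b", "a"), ("b", "c")]
print(sorted(map(sorted, enumerate_sets(A, attacks, "preferred"))))   # [['a', 'c'], ['b']]
print(sorted(map(sorted, enumerate_sets(A, attacks, "stable"))))      # [['a', 'c'], ['b']]
print(sorted(map(sorted, enumerate_sets(A, attacks, "admissible"))))  # [[], ['a'], ['a', 'c'], ['b']]
```

The point is that the branching structure is shared across semantics; only the leaf acceptance test changes, which is why one enumeration algorithm can seed several others.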
Furthermore, argument-based dialogs have been extensively studied as a backbone for interactions between agents in multi-agent systems; see e.g. [42] for an overview. Broadly, the literature contains several works on computing decision problems in afs. In [60] Vreeswijk discussed algorithmically the efficiency of deciding minimally admissible sets. In [25] Doutre and Mengin specified dialectical proofs for skeptically accepted arguments under preferred semantics. Their dialog process is basically centered around the following property: given an argument x, x is skeptically accepted if and only if

for each admissible set S1 s.t. x ∉ S1, there is an admissible set S2 ⊇ S1 ∪ {x}.

In [25] Doutre and Mengin did not give an algorithm for constructing their dialectical proof. However, adopting a naive algorithm for such a proof would be prohibitive, since the running-time complexity in this case would be of the order of 2^(2|A|), while our algorithm for skeptical acceptance runs in the order of 2^|A|. Reviewing further related works in the context of afs: in [39] approximating argumentation semantics was evaluated against exact computation, whereas the experiments presented in [7] evaluated the effect of splitting an af on the computation of the preferred extensions. The work of [40] shows how to partially re-evaluate the acceptance of arguments if R changes. From a computational-complexity perspective, the decision problems of skeptical and credulous acceptance under preferred semantics are believed to be intractable; see e.g. [23,28,52]. Another line of research concerns encoding decision problems of afs into other formalisms and then solving them with a respective solver; see for example [12,48,33,2,31,20].

Additionally, we have established the usability of our algorithms in the context of vafs. In [9] a dialog framework was developed for vafs; this dialectical framework basically extends the framework of Cayrol et al. [19]. Thus, the distinctions made in Section 6, in which we compare our algorithms with the algorithms of Cayrol et al., apply equally in showing the differences between the dialog processes of [9] and our algorithms for acceptance in vafs. Recall that our algorithms for vafs use Algorithms 3 and 4 for credulous and skeptical acceptance respectively. In addition, the developments in [9] for vafs were made under the assumption that directed cycles of arguments involve at least two distinct values; as we stated earlier, we do not retain this assumption in developing our algorithms for vafs.
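The Doutre–Mengin property quoted above lends itself to a brute-force cross-check on tiny frameworks. The hedged sketch below (exponential; helper names are hypothetical) compares the direct definition of skeptical acceptance with that characterization:

```python
from itertools import chain, combinations

def admissible(S, A, attacks):
    """S is conflict-free and defends each of its members."""
    S = set(S)
    if any((a, b) in attacks for a in S for b in S):
        return False
    attackers = {c for c in A for b in S if (c, b) in attacks}
    return all(any((d, c) in attacks for d in S) for c in attackers)

def all_admissible(A, attacks):
    subsets = chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))
    return [set(S) for S in subsets if admissible(S, A, attacks)]

def skeptical_direct(x, A, attacks):
    # x belongs to every maximal (i.e. preferred) admissible set
    adm = all_admissible(A, attacks)
    preferred = [S for S in adm if not any(S < T for T in adm)]
    return all(x in S for S in preferred)

def skeptical_property(x, A, attacks):
    # Doutre & Mengin [25]: every admissible S1 without x extends to an
    # admissible S2 containing S1 and x
    adm = all_admissible(A, attacks)
    return all(any(S1 | {x} <= S2 for S2 in adm)
               for S1 in adm if x not in S1)

A = ["a", "b", "c", "d"]
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
for x in A:
    assert skeptical_direct(x, A, attacks) == skeptical_property(x, A, attacks)
print([x for x in A if skeptical_direct(x, A, attacks)])  # ['d']
```

Only the unattacked argument d is skeptically accepted here; the two characterizations agree on every argument of the toy framework.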
We consider now an approach to rewriting a vaf into an af [44]. In general, this approach adds an argument to A for every specific audience α ∈ U, and thus A grows by |V|!. In consequence, any algorithm (e.g. Algorithm 1) working on the target af will run in the order of 2^(|A|+|V|!), while our approach (i.e. Algorithms 5, 6, 7 and 8 together, working on the original vaf) runs in the order of |V|!·2^|A|, which is more efficient. More importantly, recall the benefit of q, which might induce fewer than |V|! steps, as we illustrated earlier. The bottom line is that our approach is faster than any mechanism that considers every specific audience α ∈ U, since our algorithms only consider the critical audiences U′ ⊆ U, as stated by Proposition 4; this is a key contribution of our algorithms. Equally, our algorithms establish an efficient method for encoding total orders over V such that the space complexity is upper bounded by the number of values (i.e. |V|) rather than by the number of all total value orders (i.e. |V|!), which would be the case if a naive approach were adopted.

We close this section by listing some further related works in the context of vafs. The computational complexity of subjective and objective acceptance is studied in [28,37,38], while the algorithms of [29] and [49] decide the preferred extensions, respectively subjective/objective acceptance, under the assumption that directed cycles of arguments involve at least two distinct values. The work of [34] proposes an algorithm that translates vafs to neural networks.

9. Conclusion

In this work, we developed a new algorithm⁶ for enumerating preferred extensions of an af. We have shown that the new algorithm is more efficient than existing algorithms. Likewise, we implemented algorithms that decide the credulous

6 A C++ implementation can be found at http://sourceforge.net/projects/argtools/files/?source=navbar.


Algorithm 9: Deciding subjective acceptance of x in a vaf H = (A, R, V, η).

1  μ : A → {IN, OUT, BLANK, MUST_OUT, UNDEC}; μ ← ∅;
2  foreach y ∈ A do μ ← μ ∪ {(y, BLANK)};
3  q : V → Z; q ← ∅;
4  foreach v ∈ V do q ← q ∪ {(v, ∞)};
5  i ← 1;
6  if is-subjectively-accepted(μ, q, i) then x is subjectively accepted;
7  else x is not subjectively accepted;
8  procedure is-subjectively-accepted(μ, q, i) begin
9    foreach v ∈ V : (q(v) = ∞) ∧ (∃y: μ(y) = BLANK ∧ η(y) = v) do
10     q′ ← q;
11     q′(v) ← i;
12     μ′ ← μ;
13     foreach z: μ′(z) = BLANK ∧ η(z) = v do
14       W ← ∅;
15       invoke Algorithm 6 on (H, μ′, z, q′, W);
16     if μ′(x) = IN then
17       return true;
18     if is-subjectively-accepted(μ′, q′, i + 1) then
19       return true;
20   if μ(x) = BLANK and ∄v ∈ V : (q(v) = ∞) ∧ (∃y: μ(y) = BLANK ∧ η(y) = v) then
21     invoke Algorithm 10 on (H, x, q);
22     if Algorithm 10 reports x is credulously accepted then
23       return true;
24   return false;
25 end procedure

and skeptical acceptance problems without explicitly enumerating all preferred extensions. Supported by experiments, we showed that these algorithms are more efficient than existing algorithms. Moreover, we engineered labelling-based algorithms for vafs that enumerate preferred extensions and decide subjective/objective acceptance. Opening perspectives for further developments, we see the potential of our algorithm to be used, after appropriate changes, for enumerating extensions under other argumentation semantics such as the ideal, stage and semi-stable semantics. Also, we plan to extend our work to investigate other generalizations of Dung's model, such as varied-strength attack systems [41]; for some other generalizations, such as [6], it is just a matter of modifying our algorithm to make it functional. In this work we adopted a basic rule for selecting the next argument that induces a transition from a parent node to a child node within the search tree. We plan to explore more heuristics-based selection options, especially those with somewhat expensive overheads, to see how far we can rely on such involved rules to gain improved efficiency. Similarly, further strategies for pruning the search space are to be studied: our current methods are based on a preliminary look-ahead strategy, and it remains open whether looking further ahead in the search space, in advance of expanding it, would benefit overall performance. Finally, in the context of multi-agent systems, we note that the presented notions of labelling transition, argument selection and pruning strategies are rich concepts that potentially have a relevant role in designing efficient dialogs for different scenarios; this, however, remains to be seen through further investigation.

Acknowledgements

We thank the anonymous reviewers for their helpful comments and suggestions.

Appendix A.
Algorithms for the subjective and objective acceptance problems

Here we provide algorithms that decide subjective/objective acceptance without explicitly enumerating extensions of a vaf. Algorithms 9 and 10 (together with Algorithms 6 and 7) decide subjective acceptance, while Algorithms 11 and 12 (together with Algorithms 6 and 7) decide objective acceptance. These algorithms are largely self-explanatory, since they are alterations of Algorithms 3, 4 and 5, as we specify in what follows. Firstly, Algorithms 9 and 11 are modified versions of Algorithm 5 for deciding subjective, respectively objective, acceptance. Secondly, we reform Algorithm 3 to obtain Algorithm 10, which works in conjunction with Algorithm 9 to decide the subjective acceptance problem. Thirdly, we change Algorithm 4 to obtain Algorithm 12, which works jointly with Algorithm 11 to decide the objective acceptance problem.

Appendix B. Further examples illustrating the new algorithms

This section is intended to help the reader envisage various aspects of the algorithms running on natural arguments. In particular, we present two different frameworks to give the reader a flavor of the argumentation enabled in specific domain applications: legal reasoning and moral debate.


Algorithm 10: Finding a credulous proof of x in H = (A, R, V, η) w.r.t. some q : V → Z.

1  μ : A → {PRO, OPP, BLANK, MUST_OUT, UNDEC}; μ ← ∅;
2  foreach y ∈ A do μ ← μ ∪ {(y, BLANK)}; μ(x) ← PRO;
3  foreach y ∈ {x}+ : q(η(x)) ≤ q(η(y)) do μ(y) ← OUT;
4  foreach z ∈ {x}− : q(η(z)) ≤ q(η(x)) do
5    if μ(z) ∈ {UNDEC, BLANK} then
6      μ(z) ← MUST_OUT;
7      if ∄w ∈ {z}− : μ(w) = BLANK and q(η(w)) ≤ q(η(z)) then
8        x is not credulously accepted; exit;
9    else if μ(z) = OUT then
10     μ(z) ← OPP;
11 if is-accepted(μ) then
12   x is proved by {y ∈ A | μ(y) = PRO};
13 else
14   x is not credulously acceptable;
15 procedure is-accepted(μ) begin
16 foreach y ∈ A: μ(y) = MUST_OUT do
17   while ∃z ∈ {y}− : μ(z) = BLANK ∧ q(η(z)) ≤ q(η(y)) do
18     select z ∈ {y}− with μ(z) = BLANK and q(η(z)) ≤ q(η(y)) s.t. ∀w ∈ {z}− with q(η(w)) ≤ q(η(z)): μ(w) ∈ {OPP, OUT, MUST_OUT},
19     otherwise select z ∈ {y}− with μ(z) = BLANK and q(η(z)) ≤ q(η(y)) s.t. ∀w ∈ {y}− with μ(w) = BLANK and q(η(w)) ≤ q(η(y)): |{s ∈ {z}+ : q(η(z)) ≤ q(η(s))}| ≥ |{t ∈ {w}+ : q(η(w)) ≤ q(η(t))}|;
20     μ′ ← μ; μ′(z) ← PRO;
21     foreach u ∈ {z}+ : q(η(z)) ≤ q(η(u)) do
22       if μ′(u) = MUST_OUT then
23         μ′(u) ← OPP;
24       else if μ′(u) ≠ OPP then
25         μ′(u) ← OUT;
26     foreach u ∈ {z}− : q(η(u)) ≤ q(η(z)) do
27       if μ′(u) ∈ {UNDEC, BLANK} then
28         μ′(u) ← MUST_OUT;
29         if ∄w ∈ {u}− : μ′(w) = BLANK and q(η(w)) ≤ q(η(u)) then
30           μ(z) ← UNDEC; goto line 17;
31       else if μ′(u) = OUT then
32         μ′(u) ← OPP;
33     if is-accepted(μ′) then
34       μ ← μ′; return true;
35     else
36       μ(z) ← UNDEC;
37   return false;
38 return true;
39 end procedure

Example 1 (Legal reasoning). This example is taken from [36], which is itself an adapted version of the original presentation in [35]. Consider the following hypothetical exchange of allegations. The plaintiff (p) and the defendant (o) have both loaned money to Miller for the purchase of an oil tanker, which is the collateral for both loans. Miller has defaulted on both loans, and the practical question is which of the two lenders will first be paid from the proceeds of the sale of the ship. One subsidiary issue is whether the plaintiff perfected his security interest in the ship or not.

(argument b) p: My security interest in Miller's ship was perfected. A security interest in goods may be perfected by taking possession of the collateral, Uniform Commercial Code (UCC) Article 9. I have possession of Miller's ship.
(argument c) o: Ships are not goods for the purposes of Article 9.
(argument d) p: Ships are movable, and movable things are goods according to UCC Article 9.
(argument e) o: According to the Ship Mortgage Act, a security interest in a ship may only be perfected by filing a financing statement.
(argument f) p: The Ship Mortgage Act does not apply, since the UCC is newer and therefore has precedence.
(argument g) o: The Ship Mortgage Act is federal law, which has precedence over state law such as the UCC.

This discussion can be associated with the af in Fig. 14 [36]. The question now is whether the plaintiff perfected his security interest in the ship or not; in other words, is argument b of the plaintiff acceptable? (i.e. deciding the credulous acceptance of b). On the other hand, we might need to check whether the argument b can be unacceptable or not (i.e. deciding the skeptical acceptance of b). To conclude, Figs. 15 and 16 show how to decide credulous (resp. skeptical) acceptance by using Algorithm 3 (resp. Algorithm 4): argument b is credulously accepted while it is not skeptically accepted. Furthermore, one can obtain explanations as to why the argument b is acceptable (resp. unacceptable) from the arguments labeled PRO in Fig. 15 (resp. Fig. 16).


Algorithm 11: Deciding objective acceptance of x in a vaf H = (A, R, V, η).

1  μ : A → {IN, OUT, BLANK, MUST_OUT, UNDEC}; μ ← ∅;
2  foreach y ∈ A do μ ← μ ∪ {(y, BLANK)};
3  q : V → Z; q ← ∅;
4  foreach v ∈ V do q ← q ∪ {(v, ∞)};
5  i ← 1;
6  if is-objectively-accepted(μ, q, i) then x is objectively accepted;
7  else x is not objectively accepted;
12 procedure is-objectively-accepted(μ, q, i) begin
13 foreach v ∈ V : (q(v) = ∞) ∧ (∃y: μ(y) = BLANK ∧ η(y) = v) do
14   q′ ← q;
15   q′(v) ← i;
16   μ′ ← μ;
17   foreach z: μ′(z) = BLANK ∧ η(z) = v do
18     W ← ∅;
19     invoke Algorithm 6 on (μ′, H, z, q′, W);
20   if μ′(x) = OUT then
21     return false;
22   if ¬is-objectively-accepted(μ′, q′, i + 1) then
23     return false;
24 if μ(x) = BLANK and ∄v ∈ V : (q(v) = ∞) ∧ (∃y: μ(y) = BLANK ∧ η(y) = v) then
25   invoke Algorithm 12 on (μ, H, x, q);
26   if Algorithm 12 decided that x is not skeptically accepted then
27     return false;
28 return true;
29 end procedure

Algorithm 12: Deciding skeptical acceptance of x in a vaf H = ( A , R , V , η) w.r.t. some q : V → Z, given a labelling μ.

1 PEXT ← ∅;
2 if ∄ y ∈ {x}− : q(η( y )) ≤ q(η(x)) then
3     x is skeptically accepted; exit;
4 foreach y ∈ {x}− : q(η( y )) ≤ q(η(x)) do
5     invoke Algorithm 10 on ( H , y , q);
6     if Algorithm 10 decided that y is credulously accepted then
7         x is not skeptically accepted; exit;
8 call decide-skeptical-acceptance(μ);
9 if PEXT ≠ ∅ then
10     x is skeptically accepted; exit;
11 procedure decide-skeptical-acceptance(μ) begin
12 while ∃ y: μ( y ) = BLANK do
13     select y with μ( y ) = BLANK s.t.
14         ∀ z ∈ { y }− with q(η( z)) ≤ q(η( y )): μ( z) ∈ {OUT, MUST_OUT},
15         otherwise select y with μ( y ) = BLANK s.t. ∀ w with μ( w ) = BLANK: |{s ∈ { y }+ : q(η( y )) ≤ q(η(s))}| ≥ |{t ∈ { w }+ : q(η( w )) ≤ q(η(t ))}|;
16     μ′ ← μ; μ′( y ) ← IN;
17     foreach z ∈ { y }+ : q(η( y )) ≤ q(η( z)) do
18         μ′( z) ← OUT;
19     foreach z ∈ { y }− : q(η( z)) ≤ q(η( y )) do
20         if μ′( z) ∈ {UNDEC, BLANK } then
21             μ′( z) ← MUST_OUT;
22             if ∄ w ∈ { z}− : μ′( w ) = BLANK ∧ q(η( w )) ≤ q(η( z)) then
23                 μ( y ) ← UNDEC; goto line 12;
24     call decide-skeptical-acceptance(μ′);
25     if ∃ z ∈ { y }− : μ( z) ∈ {UNDEC, BLANK } then
26         μ( y ) ← UNDEC;
27     else
28         μ ← μ′;
29 if ∄ y: μ( y ) = MUST_OUT then
30     S ← { y | μ( y ) = IN };
31     if ∄ T ∈ PEXT : S ⊆ T then
32         PEXT ← PEXT ∪ { S };
33     if μ(x) ≠ IN then
34         PEXT ← ∅;
35         x is not skeptically accepted; terminate;
36 end procedure
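For intuition about what Algorithm 12 computes, it can help to compare against a brute-force baseline over a plain af (values ignored): enumerate all admissible sets, keep the ⊆-maximal ones (the preferred extensions), and check membership of x in each. The sketch below is ours, purely illustrative and exponential — the point of the algorithms in this paper is precisely to avoid such exhaustive enumeration — and the af used is a small hypothetical example, not one from the figures:

```python
from itertools import combinations

def preferred(args, attacks):
    """All subset-maximal admissible sets of a plain af (brute force)."""
    def conflict_free(s):
        return not any((a, b) in attacks for a in s for b in s)
    def defends(s, a):
        # every attacker of a is itself attacked by some member of s
        return all(any((d, b) in attacks for d in s)
                   for (b, t) in attacks if t == a)
    admissible = [set(s) for r in range(len(args) + 1)
                  for s in combinations(sorted(args), r)
                  if conflict_free(s) and all(defends(set(s), a) for a in s)]
    # keep only the subset-maximal admissible sets
    return [s for s in admissible if not any(s < t for t in admissible)]

def skeptically_accepted(x, args, attacks):
    return all(x in e for e in preferred(args, attacks))

# Tiny hypothetical af: a and b attack each other, and b attacks c.
A = {"a", "b", "c"}
R = {("a", "b"), ("b", "a"), ("b", "c")}
exts = preferred(A, R)   # two preferred extensions: {a, c} and {b}
```

Here no argument is skeptically accepted, since the two preferred extensions are disjoint; Algorithm 12 reaches such verdicts while pruning branches (via MUST_OUT and the UNDEC backtracking at line 23) instead of materializing every extension.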

S. Nofal et al. / Artificial Intelligence 207 (2014) 23–51

Fig. 14. The af associated with Example 1.

Fig. 15. Deciding the credulous acceptance of b in the af of Fig. 14 by using Algorithm 3.

Fig. 16. Deciding the skeptical acceptance of b in the af of Fig. 14 by using Algorithm 4: b is not skeptically accepted.


Fig. 17. Deciding that b is subjectively accepted by using Algorithm 10.

Example 2 (Moral debate). Consider the vaf in Fig. 17, which is obtained from the moral debate presented in [8] concerning the competing values of respect for life and respect for property. Before applying the algorithms, we recall the debate as introduced in [8]. Hal, a diabetic, loses his insulin in an accident through no fault of his own. Before collapsing into a coma he rushes to the house of Carla, another diabetic. She is not at home, but Hal enters her house and uses some of her insulin. Was Hal justified, and does Carla have a right to compensation? The arguments are:

• b: Hal is justified, since a person has a privilege to use the property of others to save their life.
• c: It is wrong to infringe the property rights of another.
• d: If, however, Hal compensates Carla, then Carla's rights have not been infringed.
• e: If Hal were too poor to compensate Carla, he should none the less be allowed to take the insulin, as no one should die because they are poor. Moreover, since Hal would not pay compensation if too poor, neither should he be obliged to do so, even if he can.
• f: Hal is endangering Carla's life.
• g: Hal checks that Carla has abundant insulin before using it.
• h: Carla does not have ample insulin.
• i: Poverty is no defence for theft, as we prosecute the starving when they steal food.

To summarize, the social values promoted by the arguments are as follows:





η = {(b, life), (c, property), (d, property), (e, life), (f, life), (g, fact), (h, fact), (i, property)}
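For a fixed audience the defeat check behind these verdicts is a simple rank comparison. The following Python fragment is our illustrative sketch, not the paper's code: it encodes the audience fact ≻ life ≻ property as ranks (smaller = more preferred), so an attack ( y , x) defeats iff η(x) is not strictly preferred to η( y ). The attack relation itself is in Fig. 17 and is not reproduced here; the checks below only compare values.

```python
# Value ranks for the audience fact > life > property (smaller = more preferred).
rank = {"fact": 0, "life": 1, "property": 2}

# The value mapping eta from Example 2.
eta = {"b": "life", "c": "property", "d": "property", "e": "life",
       "f": "life", "g": "fact", "h": "fact", "i": "property"}

def defeats(y, x):
    """Attack (y, x) succeeds unless x's value is strictly preferred to y's."""
    return not rank[eta[x]] < rank[eta[y]]
```

For this audience, c (valued property) does not defeat b (valued life), since life is strictly preferred to property; a fact-valued argument such as g, by contrast, defeats any life- or property-valued argument it attacks.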

Note that fact is not literally a promoted social value; rather, it was included in η in [8] to emphasize that the respective argument states a fact and must therefore always be given the highest preference (by all parties) over other arguments, regardless of their associated values. Referring to Fig. 17, for the audience fact ≻ life ≻ property, b is accepted, as decided by Algorithm 10: Hal was justified in taking insulin from Carla's house because he checked that Carla had abundant insulin before using it (see argument g). Recall that c does not defeat b w.r.t. the audience fact ≻ life ≻ property.

References

[1] L. Amgoud, C. Cayrol, A reasoning model based on the production of acceptable arguments, Ann. Math. Artif. Intell. 34 (2002) 197–215.
[2] L. Amgoud, C. Devred, Argumentation frameworks as constraint satisfaction problems, in: SUM, 2011, pp. 110–122.
[3] L. Amgoud, H. Prade, Using arguments for making and explaining decisions, Artif. Intell. 173 (2009) 413–436.
[4] S. Arnborg, J. Lagergren, D. Seese, Easy problems for tree-decomposable graphs, J. Algorithms 12 (1991) 308–340.
[5] P. Baroni, M. Caminada, M. Giacomin, An introduction to argumentation semantics, Knowl. Eng. Rev. 26 (4) (2011) 365–410.
[6] P. Baroni, F. Cerutti, M. Giacomin, G. Guida, Argumentation framework with recursive attacks, Int. J. Approx. Reason. 52 (1) (2011) 19–37.
[7] R. Baumann, G. Brewka, R. Wong, Splitting argumentation frameworks: An empirical evaluation, in: TAFA, 2011, pp. 17–31.
[8] T.J.M. Bench-Capon, Persuasion in practical argument using value-based argumentation frameworks, J. Log. Comput. 13 (3) (2003) 429–448.
[9] T.J.M. Bench-Capon, S. Doutre, P.E. Dunne, Audiences in argumentation frameworks, Artif. Intell. 171 (1) (2007) 42–71.


[10] T.J.M. Bench-Capon, P.E. Dunne, Argumentation in artificial intelligence, Artif. Intell. 171 (2007) 619–641.
[11] P. Besnard, A. Hunter, Elements of Argumentation, MIT Press, 2008.
[12] P. Besnard, S. Doutre, Checking the acceptability of a set of arguments, in: NMR, 2004, pp. 59–64.
[13] G. Boella, D. Gabbay, A. Perotti, L. van der Torre, S. Villata, Conditional labelling for abstract argumentation, in: TAFA, 2011, pp. 232–248.
[14] M. Caminada, Semi-stable semantics, in: COMMA, 2006, pp. 121–128.
[15] M. Caminada, An algorithm for computing semi-stable semantics, in: ECSQARU, 2007, pp. 222–234.
[16] M. Caminada, An algorithm for stage semantics, in: COMMA, 2010, pp. 147–158.
[17] M. Caminada, D.M. Gabbay, A logical account of formal argumentation, Stud. Log. 93 (2–3) (2009) 109–145.
[18] M. Caminada, Y. Wu, On the limitations of abstract argumentation, in: Proc. 23rd Benelux Conf. on AI (BNAIC), 2011, pp. 59–66.
[19] C. Cayrol, S. Doutre, J. Mengin, On decision problems related to the preferred semantics for argumentation frameworks, J. Log. Comput. 13 (3) (2003) 377–403.
[20] F. Cerutti, P.E. Dunne, M. Giacomin, M. Vallati, A SAT-based approach for computing extensions in abstract argumentation, in: TAFA, 2013.
[21] B. Courcelle, The monadic second-order logic of graphs. I. Recognizable sets of finite graphs, Inf. Comput. 85 (1) (1990) 12–75.
[22] B. Courcelle, M. Mosbah, Monadic second-order evaluations on tree-decomposable graphs, Theor. Comput. Sci. 109 (1–2) (1993) 49–82.
[23] Y. Dimopoulos, B. Nebel, F. Toni, Finding admissible and preferred arguments can be very hard, in: KR, 2000, pp. 53–61.
[24] S. Doutre, J. Mengin, Preferred extensions of argumentation frameworks: Query, answering, and computation, in: IJCAR, 2001, pp. 272–288.
[25] S. Doutre, J. Mengin, On sceptical versus credulous acceptance for abstract argument systems, in: JELIA, 2004, pp. 462–473.
[26] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artif. Intell. 77 (2) (1995) 321–357.
[27] P.M. Dung, P. Mancarella, F. Toni, Computing ideal skeptical argumentation, Artif. Intell. 171 (10–15) (2007) 642–674.
[28] P.E. Dunne, Computational properties of argument systems satisfying graph-theoretic constraints, Artif. Intell. 171 (2007) 701–729.
[29] P.E. Dunne, Tractability in value-based argumentation, in: Proc. of Comput. Models of Arguments, 2010, pp. 195–206.
[30] P.E. Dunne, T.J.M. Bench-Capon, Two party immediate response disputes: Properties and efficiency, Artif. Intell. 149 (2) (2003) 221–250.
[31] W. Dvorák, M. Jarvisalo, J.P. Wallner, S. Woltran, Complexity-sensitive decision procedures for abstract argumentation, in: KR, 2012.
[32] W. Dvorák, R. Pichler, S. Woltran, Towards fixed-parameter tractable algorithms for abstract argumentation, Artif. Intell. 186 (2012) 1–37.
[33] U. Egly, S. Gaggl, S. Woltran, Aspartix: Implementing argumentation frameworks using answer-set programming, in: ICLP, 2008, pp. 734–738.
[34] A. Garcez, D. Gabbay, L. Lamb, Value-based argumentation frameworks as neural-symbolic learning systems, J. Log. Comput. 15 (6) (2005) 1041–1058.
[35] T.F. Gordon, The pleadings game: Formalizing procedural justice, in: ICAIL, 1993, pp. 10–19.
[36] H. Jakobovits, D. Vermeir, Dialectic semantics for argumentation frameworks, in: ICAIL, 1999, pp. 53–62.
[37] E.J. Kim, S. Ordyniak, S. Szeider, Algorithms and complexity results for persuasive argumentation, Artif. Intell. 175 (2011) 1722–1736.
[38] E.J. Kim, S. Ordyniak, Valued-based argumentation for tree-like value graphs, in: COMMA, 2012, pp. 378–389.
[39] H. Li, N. Oren, T.J. Norman, Probabilistic argumentation frameworks, in: TAFA, 2011, pp. 1–16.
[40] B.S. Liao, L. Jin, R.C. Koons, Dynamics of argumentation systems: A division-based method, Artif. Intell. 175 (11) (2011) 1790–1814.
[41] D. Martinez, A. Garcia, G. Simari, An abstract argumentation framework with varied-strength attacks, in: KR, 2008, pp. 135–143.
[42] P. McBurney, S. Parsons, Dialogue games for agent argumentation, in: Guillermo Simari, Iyad Rahwan (Eds.), Argumentation in Artificial Intelligence, Springer, 2009, pp. 261–280.
[43] S. Modgil, Labellings and games for extended argumentation frameworks, in: IJCAI, 2009, pp. 873–878.
[44] S. Modgil, Reasoning about preferences in argumentation frameworks, Artif. Intell. 173 (2009) 901–934.
[45] S. Modgil, M. Caminada, Proof theories and algorithms for abstract argumentation frameworks, in: I. Rahwan, G.R. Simari (Eds.), Argumentation in AI, Springer, 2009, pp. 105–129.
[46] S. Modgil, T.J.M. Bench-Capon, Metalevel argumentation, J. Log. Comput. 21 (6) (2011) 959–1003.
[47] M. Mozina, J. Zabkar, I. Bratko, Argument based machine learning, Artif. Intell. 171 (2007) 922–937.
[48] J.C. Nieves, U. Cortes, M. Osorio, Preferred extensions as stable models, Theory Pract. Log. Program. 8 (4) (2008) 527–543.
[49] S. Nofal, P.E. Dunne, K. Atkinson, Towards average case algorithms for abstract argumentation, in: ICAART, 2012, pp. 225–230.
[50] S. Nofal, P.E. Dunne, K. Atkinson, Algorithms for acceptance in argument systems, in: ICAART, 2013.
[51] S. Nofal, P.E. Dunne, K. Atkinson, On preferred extension enumeration in abstract argumentation, in: COMMA, 2012, pp. 205–216.
[52] S. Ordyniak, S. Szeider, Augmenting tractable fragments of abstract argumentation, in: IJCAI, 2011, pp. 1033–1038.
[53] H. Prakken, An abstract framework for argumentation with structured arguments, Argument Comput. 1 (2) (2010) 93–124.
[54] H. Prakken, Some reflections on two current trends in formal argumentation, in: Logic Programs, Norms and Action (Essays in Honor of Marek J. Sergot on the Occasion of His 60th Birthday), in: LNAI, vol. 7360, Springer, 2012, pp. 249–272.
[55] I. Rahwan, G.R. Simari, Argumentation in Artificial Intelligence, Springer, 2009.
[56] P.M. Thang, P.M. Dung, N.D. Hung, Towards a common framework for dialectical proof procedures in abstract argumentation, J. Log. Comput. 19 (6) (2009) 1071–1109.
[57] B. Verheij, Two approaches to dialectical argumentation: admissible sets and argumentation stages, in: The Eighth Dutch Conference on AI, 1996, pp. 357–368.
[58] B. Verheij, A labeling approach to the computation of credulous acceptance in argumentation, in: IJCAI, 2007, pp. 623–628.
[59] G.A.W. Vreeswijk, H. Prakken, Credulous and sceptical argument games for preferred semantics, in: JELIA, 2000, pp. 239–253.
[60] G.A.W. Vreeswijk, An algorithm to compute minimally grounded and admissible defence sets in argument systems, in: COMMA, 2006, pp. 109–120.
Verheij, Two approaches to dialectical argumentation: admissible sets and argumentation stages, in: The Eighth Dutch Conference on AI, 1996, pp. 357–368. B. Verheij, A labeling approach to the computation of credulous acceptance in argumentation, in: IJCAI, 2007, pp. 623–628. G.A.W. Vreeswijk, H. Prakken, Credulous and sceptical argument games for preferred semantics, in: JELIA, 2000, pp. 239–253. G.A.W. Vreeswijk, An algorithm to compute minimally grounded and admissible defence sets in argument systems, in: COMMA, 2006, pp. 109–120.