Valued preference-based instantiation of argumentation frameworks with varied strength defeats


International Journal of Approximate Reasoning

Contents lists available at ScienceDirect: www.elsevier.com/locate/ijar

Souhila Kaci (a), Christophe Labreuche (b,∗)

(a) LIRMM – UMR 5506, University of Montpellier 2, 161 rue ADA, F-34392 Montpellier Cedex 5, France
(b) Thales Research & Technology, 1 avenue Augustin Fresnel, F-91767 Palaiseau Cedex, France

Keywords: Argumentation; Varied-strength defeat relation; Valued preference relation

Abstract. A Dung-style argumentation framework aims at representing conflicts among elements called arguments. The basic ingredients of this framework are a set of arguments and an abstract (i.e., of unknown origin) Boolean binary defeat relation. Preference-based argumentation frameworks are instantiations of Dung's framework in which the defeat relation is derived from an attack relation and a preference relation over the arguments. Recently, Dung's framework has been extended in order to consider the strength of the defeat relation, i.e., to quantify the degree to which an argument defeats another argument. In this paper, we instantiate this extended framework by a preference-based argumentation framework with a valued preference relation. As particular cases, the latter can be derived from a weight function over the arguments or from a Boolean preference relation. We show, under some reasonable conditions, that there are "fewer situations" in which a defense between arguments holds with a valued preference relation than with a Boolean preference relation. Finally, we provide some conditions that the valued preference relation shall satisfy when it is derived from a weight function. © 2013 Elsevier Inc. All rights reserved.

1. Introduction

Argumentation is a framework for reasoning about inconsistent knowledge. It consists in first constructing arguments, then identifying the acceptable ones, and finally drawing conclusions. Many fields of Artificial Intelligence, such as autonomous agents, decision making and non-monotonic reasoning, need to deal with inconsistent information. Among the frameworks dedicated to inconsistency handling, argumentation theory is based on the notion of arguments in favor of or against statements. In his pioneering work, Dung proposed an abstract argumentation framework composed of a set of arguments and a binary relation interpreted as a defeat between the arguments [17]. Acceptable arguments are then computed, from which conclusions are drawn. Two basic properties are necessary to define the acceptable arguments: conflict-freeness and the defense of an argument by a set of arguments. These two concepts define the output of an argumentation framework, which is a set of sets of arguments, called extensions, that can be accepted together.

Preferences play an important role in solving conflicts between arguments. Preference-based argumentation frameworks are instantiations of Dung's framework in which the defeat relation is derived from an attack relation between arguments and a preference relation over the arguments [32,1,2,8,25,21]. An attack succeeds (and is then called a defeat) if the attacked argument is not strictly preferred to the attacking one. An argument is preferred to another argument because it is stronger, more certain, more reliable, etc. For example, with the success of internet technologies, individuals mostly construct their opinion on various subjects from the (conflicting) information, opinions and arguments that are exchanged on the internet, and especially on social networks such as Twitter. During the analysis of the arguments, it is important to take into account the reliability of Twitter accounts and the information they convey [10]. The next example models a debate on whether the exploitation of shale gas using multi-stage hydraulic fracturing should be allowed in France.

✩ This paper is an extended version of three conference papers [22], [23] and [24].
∗ Corresponding author. E-mail addresses: [email protected] (S. Kaci), [email protected] (C. Labreuche).

0888-613X/$ – see front matter © 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.ijar.2013.12.001

Fig. 1. Attack among arguments.

Example 1. The following arguments are typically used by both parties, in favor of or against this exploitation.

• a1: The exploitation of shale gas through multi-stage hydraulic fracturing would be beneficial to France.
• a2: Multi-stage hydraulic fracturing is expected to pollute the water table.
• a3: Multi-stage hydraulic fracturing is already used in the USA, and there are no proven cases of pollution of the water table.
• a4: The only studies on the impact of gas extraction on the water table correspond to older extraction technologies and are not relevant for multi-stage hydraulic fracturing.
• a5: The potential for exploitation of shale gas covers just 10% of France's needs in gas. The economic benefits of shale gas are low compared to the potential negative impact on tourism and agriculture.
• a6: We do not know the exact potential of shale gas in France. There should be a survey of the ground and an operating test to estimate the reserves; this is planned in the 2011 law and has never been done.
• a7: Starting the exploitation of shale gas is a first step toward a more intensive use.
• a8: In the USA, the exploitation of shale gas mainly occurs in Dakota, where the potential environmental impact on humans is low (due to the few people living there). This is not the case in France, where promising shale gas acreages are, for instance, in the vicinity of Paris.
• a9: Dakota is an exception in the USA. Most wells are located around inhabited areas, such as in Pennsylvania.

The relation between the arguments is represented by attacks, as depicted in Fig. 1. These arguments come from different Twitter accounts of varying reliability. For instance, the proponent of argument a4 is a recognized researcher in geology and thus has high reliability, whereas the proponent of argument a3 has a reputation for not providing well-founded arguments. Being more reliable, a4 is preferred to a3. The proponents of arguments a7 and a6 are a politician and a lawyer, respectively. And so on.

In this paper, we address the question of how preferences, such as valuations on arguments (see Example 1), should be taken into account when analyzing the arguments. Preferences can be modeled in different ways [26]. Let us cite the three main representations that have been used in argumentation: (1) a binary relation, as previously mentioned [32,1], which is the lightest representation in terms of elicitation; (2) a valuation on individual arguments, which may be interpreted as the confidence or trust in the proponent of the argument [9]; (3) a degree of preference over each pair of arguments [22–24], which corresponds, for instance, to the number of individuals preferring one argument over another. Note that the last representation is the most general one and encompasses the first two. Depending on the problem, preferences are given in one of these three models. Ideally, therefore, we aim in this paper to propose a framework that can handle all three representations. The key ingredient is the representation of the strength of attacks.

Note, however, that the presence of preferences in argumentation is not without drawbacks. In particular, preferences may violate the conflict-freeness of extensions [3], as will be illustrated in Example 9 in Section 3.2. Moreover, there are situations in which the defense obtained from a preference relation is not discriminative enough, so that some debatable extensions are obtained [16]. This point will be illustrated in Example 6 (see Section 3.1).

Dung's argumentation framework and its various instantiations rely on a Boolean defeat relation over arguments. Recently, it has been argued that not all defeats necessarily have the same strength [7,27,28,18,12]. Consequently, Dung's argumentation framework has been extended to consider a defeat relation with varied strengths. Standard defeat (resp. preference) relations are particular cases of relations with varied strength that take only two values, and will thus be called Boolean defeat (resp. preference) relations. A defeat relation with varied strengths can be computed in different ways. In the spirit of the move from Dung's argumentation framework to preference-based argumentation frameworks, we investigate the approach in which the defeat relation with varied strengths is computed from an attack relation and a valued preference relation referring to the certainty/validity/intensity of preference between arguments. More precisely, the larger the preference of an argument a over an argument b, the larger the defeat of a on b, if a attacks b.

In this paper, we extend the framework proposed in [22–24] with a detailed analysis. When the preferences between the arguments are described by a Boolean relation, we propose some examples and principles showing that preference-based argumentation frameworks need to be refined by distinguishing between strict


preference, inverse strict preference and indifference/incomparability. An important result of this paper states that the defense obtained from a valued preference relation is always more discriminative than that obtained from the corresponding Boolean preference relation. Moreover, when the valued preference relation is computed from weights, the discrimination gain is strict whenever the valued preference relation is not Boolean, under some mild conditions on the valued preference relation. In particular, the problem (raised in [16]) that there are situations in which the defense obtained from a preference relation is not discriminative enough is solved by our framework. Finally, we give conditions on the construction of the valued preference relation from weights of the arguments. They correspond to a strengthening of the previous conditions.

The paper is structured as follows. Section 2 recalls Dung's argumentation framework, preference-based argumentation frameworks and argumentation frameworks with varied-strength defeats. In Section 3, the need to extend PAF is motivated by several examples, and several properties on acceptable arguments are proposed; we instantiate the argumentation framework with varied-strength defeats by a preference-based argumentation framework in which the preference relation is pervaded with intensity degrees. The particular case where the preferences between the arguments are described by a Boolean relation is considered in Section 4. Section 5 focuses on a property saying that there shall be fewer acceptable arguments with valued preference relations than with Boolean ones. In Section 6, we study different ways to compute the intensity of a preference relation from weights associated with arguments. Section 7 surveys related work. Lastly, we conclude.

2. Argumentation theory

Argumentation theory was first formalized to perform non-monotonic reasoning [31,32,34]. The idea is to construct and evaluate arguments, which are composed of reasons for the validity of a given claim. However, the validity of a claim can be disputed by other arguments. Therefore, a claim is accepted only if there exists an argument which supports it and which is robust against its counterarguments. Most works that can be modeled in argumentation theory are based on Dung's argumentation framework [17].

2.1. Dung's argumentation framework

Dung proposed an abstract framework to formalize argumentation theory [17]. In his framework, arguments are supposed to be given, and conflicts between arguments are represented by a binary defeat relation; the origin of the arguments and of the conflicts is thus not known.

Definition 1 (Argumentation framework). (See [17].) An argumentation framework (AF) is a tuple ⟨A, Def⟩ where A is a finite set of arguments and Def ⊆ A × A is a binary defeat relation.

The outcome of an argumentation framework is a set of sets of arguments, called extensions, that are robust against defeats. The extensions rely on two conditions, namely conflict-freeness and defense.

Definition 2 (Conflict-freeness & Defense). (See [17].) Let ⟨A, Def⟩ be an AF.

• A′ ⊆ A is conflict-free if there are no a, b ∈ A′ such that a Def b.
• A′ ⊆ A defends c if ∀b ∈ A such that b Def c, ∃a ∈ A′ such that a Def b.

Without any harm, we say that A′ defends c (with A′ ⊆ A and c ∈ A′) w.r.t. ⟨A, Def⟩ if A′ defends c in the sense of Definition 2 or c is not defeated.

There are several definitions of extension, each corresponding to an acceptability semantics.

Definition 3 (Acceptability semantics). (See [17].) Let ⟨A, Def⟩ be an AF.

• A subset A′ ⊆ A of arguments is an admissible extension iff it is conflict-free and defends all elements of A′. The set of admissible extensions is denoted by Accadm(Def).
• A subset A′ ⊆ A is a preferred extension iff it is a maximal (in the sense of ⊆) admissible extension. The set of preferred extensions is denoted by Accpre(Def).
• A subset A′ ⊆ A is a stable extension iff it is a preferred extension that defeats every argument in A \ A′. The set of stable extensions is denoted by Accsta(Def).

2.2. Preference-based argumentation framework

A preference-based argumentation framework is an instantiation of Dung's framework which is based on a binary attack relation between arguments and a preference relation over the set of arguments.

A preference relation ⪰ over a set of objects X = {x, y, z, . . .} is a reflexive binary relation ⪰ ⊆ X × X such that x ⪰ y stands for "x is at least as preferred as y". x ≈ y represents an indifference between x and y, i.e., x and y are equally


preferred. It holds when both x ⪰ y and y ⪰ x hold. x ∼ y means that neither x ⪰ y nor y ⪰ x holds, i.e., x and y are incomparable. The notation x ≻ y means that x is strictly preferred to y: we have x ≻ y if x ⪰ y holds but y ⪰ x does not. The strict preference relation ≻ is asymmetric. Moreover, ⪰ is a preorder over X if and only if ⪰ is transitive, i.e., if x ⪰ y and y ⪰ z then x ⪰ z. ⪰ is complete if and only if ∀x, y ∈ X we have x ⪰ y or y ⪰ x; otherwise ⪰ is partial.

Definition 4 (Preference-based argumentation framework). (See [32,1].) A preference-based argumentation framework (PAF) is a 3-tuple ⟨A, ;, ⪰⟩ where A is a set of arguments, ; ⊆ A × A is a binary attack relation and ⪰ is a complete or partial preorder over A.

⪰ is called a Boolean preference relation.

Definition 5 (From PAF to AF). (See [32,1].) A preference-based argumentation framework ⟨A, ;, ⪰⟩ represents ⟨A, Def⟩ iff

∀a, b ∈ A:  a Def b  iff  (a ; b and not(b ≻ a)).  (1)
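Definitions 2–3 together with Eq. (1) admit a direct computational reading. The following is a minimal illustrative sketch (not the paper's own code): arguments are encoded as strings, relations as sets of ordered pairs, and extensions are enumerated by brute force, which is only feasible for tiny examples:

```python
from itertools import combinations

def defeats_from_paf(attacks, strict_pref):
    """Eq. (1): a defeats b iff a attacks b and b is NOT strictly preferred to a."""
    return {(a, b) for (a, b) in attacks if (b, a) not in strict_pref}

def conflict_free(S, defeats):
    """Definition 2: no defeat holds between members of S."""
    return not any((a, b) in defeats for a in S for b in S)

def defends(S, c, args, defeats):
    """Definition 2: every defeater b of c is itself defeated by some a in S."""
    return all(any((a, b) in defeats for a in S)
               for b in args if (b, c) in defeats)

def preferred(args, defeats):
    """Definition 3: maximal (w.r.t. inclusion) admissible sets."""
    adm = [set(S) for r in range(len(args) + 1)
           for S in combinations(sorted(args), r)
           if conflict_free(S, defeats)
           and all(defends(S, c, args, defeats) for c in S)]
    return [S for S in adm if not any(S < T for T in adm)]

# a attacks b, b attacks a and c; a is strictly preferred to b, so b's
# counter-attack on a fails and {a, c} is the unique preferred extension.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
defeats = defeats_from_paf(attacks, strict_pref={("a", "b")})
print(sorted(map(sorted, preferred(args, defeats))))  # → [['a', 'c']]
```

The exponential enumeration is deliberate: it mirrors the definitions literally rather than any of the optimized algorithms from the literature.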

In the rest of the paper, ⟨A, Def⟩ will denote an AF when the latter is represented by a PAF ⟨A, ;, ⪰⟩. The extensions of a preference-based argumentation framework are simply the extensions of the argumentation framework it represents. A preference-based argumentation framework represents a unique argumentation framework, while an argumentation framework can be represented by several preference-based argumentation frameworks [25].

A possible way to construct a preference relation over the set of arguments is to start with a set K of weighted propositional logic formulas [32]. Let K = {(φi, αi) | i = 1, . . . , n}, where φi is a propositional logic formula and αi ∈ (0, 1] is the certainty/priority degree associated with φi. We put K∗ = {φi | (φi, αi) ∈ K}. An argument is a pair ⟨H, h⟩ where

(c1) h is a formula of the language,
(c2) H is a consistent subset of K∗,
(c3) H entails h, and
(c4) H is minimal (i.e., no strict subset of H satisfies (c1), (c2) and (c3)).

H is called the support of the argument and h its conclusion. One can then construct a function w : A → [0, 1], where w(⟨H, h⟩) depends on the weights of the formulas involved in H [2].

Example 2. Let p, q and r be three propositional variables. Let K = {(p, 0.8), (¬q, 0.8), (¬p ∨ q, 0.6), (r, 0.5)}. We can construct the following arguments:

a1 = ⟨{p}, p⟩,  a2 = ⟨{¬p ∨ q}, ¬p ∨ q⟩,  a3 = ⟨{¬q}, ¬q⟩,  a4 = ⟨{r}, r⟩,  a5 = ⟨{p, ¬p ∨ q}, q⟩.

We define two weight functions w1, w2 : A → [0, 1] such that, given a = ⟨H, h⟩ ∈ A, w1(a) = min{αi | (φi, αi) ∈ K, φi ∈ H} and w2(a) = max{αi | (φi, αi) ∈ K, φi ∈ H}. Therefore we have

w1(a1) = 0.8,  w1(a2) = 0.6,  w1(a3) = 0.8,  w1(a4) = 0.5,  w1(a5) = 0.6,

and

w2(a1) = 0.8,  w2(a2) = 0.6,  w2(a3) = 0.8,  w2(a4) = 0.5,  w2(a5) = 0.8.
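The weight functions of Example 2 can be reproduced directly. A small sketch in which formulas are encoded as opaque strings (an illustrative encoding, not the paper's notation):

```python
# Knowledge base K from Example 2: formula -> certainty degree.
K = {"p": 0.8, "~q": 0.8, "~p|q": 0.6, "r": 0.5}

# Supports of the five arguments of Example 2.
supports = {
    "a1": ["p"], "a2": ["~p|q"], "a3": ["~q"], "a4": ["r"], "a5": ["p", "~p|q"],
}

def w1(a):  # weight of the least certain formula in the support
    return min(K[f] for f in supports[a])

def w2(a):  # weight of the most certain formula in the support
    return max(K[f] for f in supports[a])

print([w1(a) for a in ("a1", "a2", "a3", "a4", "a5")])  # → [0.8, 0.6, 0.8, 0.5, 0.6]
print([w2(a) for a in ("a1", "a2", "a3", "a4", "a5")])  # → [0.8, 0.6, 0.8, 0.5, 0.8]
```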

Definition 6 (Weighted preference-based argumentation framework). A weighted preference-based argumentation framework (WPAF) is a 3-tuple ⟨A, ;, w⟩ where A is a set of arguments, ; ⊆ A × A is a binary attack relation and w : A → [0, 1] is a weight function over the arguments.

Definition 7 (From WPAF to PAF). A weighted preference-based argumentation framework ⟨A, ;, w⟩ represents a preference-based argumentation framework ⟨A, ;, ⪰w⟩ if and only if

∀a, b ∈ A:  a ⪰w b  iff  w(a) ≥ w(b).  (2)

One can easily prove that a weighted preference-based argumentation framework represents a unique preference-based argumentation framework while a preference-based argumentation framework can be represented by several weighted preference-based argumentation frameworks.
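Eq. (2) is straightforward to implement. A sketch, again using a set-of-pairs encoding of the preference relation (an illustrative assumption); note that ⪰w defined this way is automatically reflexive, complete and transitive:

```python
def boolean_pref_from_weights(w):
    """Eq. (2): a is at least as preferred as b iff w(a) >= w(b).
    `w` maps argument names to weights; returns the relation as ordered pairs."""
    return {(a, b) for a in w for b in w if w[a] >= w[b]}

w = {"a1": 0.8, "a2": 0.6}
pref = boolean_pref_from_weights(w)
print(("a1", "a2") in pref, ("a2", "a1") in pref)  # → True False
```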


2.3. Argumentation framework with varied-strength defeats

The strength of defeat relations has been incorporated into argumentation frameworks in a qualitative, relative way by means of a partial preorder [27,28], and in a quantitative way by means of a numerical function [18]. We follow the second modeling as it is more suitable to the purpose of our work.

Definition 8 (Argumentation framework with varied-strength defeats). (See [18].) An argumentation framework with varied-strength defeats (AFV) is a 3-tuple ⟨A, Def, VDef⟩ where ⟨A, Def⟩ is a Dung argumentation framework and VDef is a function from A × A to [0, 1].

For simplicity, we consider the interval [0, 1], but any bipolar linearly ordered scale with top, bottom and neutral elements can be used as well. VDef(a, b) is the degree of credibility of the statement "a defeats b". The values 0, 1/2 and 1 for VDef(a, b) respectively mean that the validity of the previous statement is certainly false, unknown, and certainly true. We can intuitively argue that the following equivalence holds:

VDef(a, b) = 0  iff  not(a Def b).  (3)

Henceforth (3) will be assumed to hold.

Definition 9 (Defeat). Given ⟨A, Def, VDef⟩, we say that a defeats b w.r.t. ⟨A, Def, VDef⟩ iff a Def b.

Extensions in an AFV also rely on defense and conflict-freeness, while taking the strength of defeats into account. Regarding the defense, when a Def b and b Def c, a is considered a "serious" defender of c if the defeat of a on b is at least as strong as the defeat of b on c. Formally:

Definition 10 (Defense in AFV). (See [27].) The set A′ ⊆ A defends c ∈ A w.r.t. ⟨A, Def, VDef⟩ iff for all b ∈ A such that b Def c, there exists a ∈ A′ with

a Def b  and  VDef(a, b) ≥ VDef(b, c).
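Definition 10 can be sketched as follows, with VDef stored in a dictionary keyed by argument pairs (an illustrative encoding); the second call shows a defender whose counter-defeat is too weak:

```python
def defends_afv(S, c, args, defeats, vdef):
    """Definition 10: S defends c iff every defeater b of c is counter-defeated
    by some a in S whose defeat is at least as strong: VDef(a,b) >= VDef(b,c)."""
    return all(
        any((a, b) in defeats and vdef[(a, b)] >= vdef[(b, c)] for a in S)
        for b in args if (b, c) in defeats
    )

# a defeats b (strength 0.6), b defeats c (strength 0.4): a is a "serious" defender.
args = {"a", "b", "c"}
defeats = {("a", "b"), ("b", "c")}
print(defends_afv({"a"}, "c", args, defeats,
                  {("a", "b"): 0.6, ("b", "c"): 0.4}))  # → True
# With VDef(a,b) = 0.2 < 0.4 = VDef(b,c), the defense is too weak.
print(defends_afv({"a"}, "c", args, defeats,
                  {("a", "b"): 0.2, ("b", "c"): 0.4}))  # → False
```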

Again, without any harm, we say that A′ defends c (with A′ ⊆ A and c ∈ A′) w.r.t. ⟨A, Def, VDef⟩ if A′ defends c in the sense of Definition 10 or c is not defeated w.r.t. Def. Other definitions of defense can be found in [13]; we focus on Definition 10 as it takes the strengths of defeats into account in a very natural way.

Conflict-freeness has been defined in two different ways in AFV, depending on whether the strength of defeats is considered or not.

Definition 11 (Conflict-freeness in AFV). Let ⟨A, Def, VDef⟩ be an AFV.

• A′ ⊆ A is conflict-free w.r.t. ⟨A, Def, VDef⟩ if there do not exist a, b ∈ A′ such that a Def b [27];
• A′ ⊆ A is α-conflict-free (with α ∈ R+) w.r.t. ⟨A, Def, VDef⟩ if [18]

Σ_{a,b ∈ A′, a Def b} VDef(a, b) ≤ α.

We note that, given (3), the concepts of 0-conflict-freeness and conflict-freeness coincide for AFV.

Lemma 1. Let ⟨A, Def, VDef⟩ be an AFV. Then for every α ∈ R+,

{A′ ⊆ A | A′ is conflict-free} ⊆ {A′ ⊆ A | A′ is α-conflict-free}.

Proof. Let A′ be conflict-free w.r.t. Def. Then the sum in the α-conflict-freeness condition is empty, hence equal to 0 ≤ α, so A′ is α-conflict-free for any α ∈ R+. □
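The α-conflict-freeness test of Definition 11 is a single thresholded sum; as Lemma 1 notes, a conflict-free set trivially passes it, since the sum is then empty. An illustrative sketch:

```python
def alpha_conflict_free(S, defeats, vdef, alpha):
    """Definition 11 (second item): the total strength of the defeats
    holding inside S must not exceed the tolerance alpha."""
    return sum(vdef[(a, b)] for a in S for b in S if (a, b) in defeats) <= alpha

defeats = {("a", "b")}
vdef = {("a", "b"): 0.2}
print(alpha_conflict_free({"a", "b"}, defeats, vdef, 0.3))  # → True
print(alpha_conflict_free({"a", "b"}, defeats, vdef, 0.1))  # → False
print(alpha_conflict_free({"a"}, defeats, vdef, 0.0))       # → True (empty sum)
```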

Given that we have two definitions of conflict-freeness, we also have two definitions of acceptability semantics in AFV.

Definition 12 (Acceptability semantics in AFV). Given a semantics sem:

• Accsem(Def, VDef) is the set of extensions where conflict-freeness is computed w.r.t. Def (see the first item of Definition 11), and the defense is given by Definition 10;


• Accsem(α, Def, VDef) is the set of extensions where conflict-freeness is α-conflict-freeness (see the second item of Definition 11), and the defense is given by Definition 10.

In the next lemma we compare the two variants of acceptability definitions in AFV.

Lemma 2. Let ⟨A, Def, VDef⟩ be an AFV. We have:

• Accadm(Def, VDef) ⊆ Accadm(α, Def, VDef) for any α ∈ R+, and
• Accsem(Def, VDef) = Accsem(0, Def, VDef) for any semantics sem.

Proof. The concepts of defense are the same in Accadm(Def, VDef) and Accadm(α, Def, VDef). The inclusion of the first item then follows from Lemma 1. As we already noted, if A′ is 0-conflict-free, then it is conflict-free w.r.t. Def. This, combined with the first item of the lemma, proves that Accadm(Def, VDef) = Accadm(0, Def, VDef). Hence this equality also holds for the two other semantics. □

Example 3. Let A = {a1, a2, a3, a4} with a1 Def a4, a4 Def a2, a4 Def a3, and VDef(a1, a4) = 0.2, VDef(a4, a2) = 0.5, VDef(a4, a3) = 0.1, and VDef(·, ·) = 0 for all other pairs of arguments. Then

Accadm(Def) = {{a1}, {a1, a2}, {a1, a3}, {a1, a2, a3}},
Accpre(Def) = {{a1, a2, a3}},  Accsta(Def) = {{a1, a2, a3}},

Accadm(Def, VDef) = {{a1}, {a1, a3}},
Accpre(Def, VDef) = {{a1, a3}},  Accsta(Def, VDef) = ∅,

Accadm(0, Def, VDef) = {{a1}, {a1, a3}},
Accpre(0, Def, VDef) = {{a1, a3}},  Accsta(0, Def, VDef) = ∅,

Accadm(0.3, Def, VDef) = {{a1}, {a1, a3}, {a1, a4}, {a3, a4}, {a1, a3, a4}},

Accpre(0.3, Def, VDef) = {{a1, a3, a4}},  Accsta(0.3, Def, VDef) = {{a1, a3, a4}}.

We can check that all the results of Lemma 2 hold in this example. Moreover, the first inclusion in Lemma 2 does not hold for the other semantics: e.g., Accpre(Def, VDef) ∩ Accpre(0.3, Def, VDef) = ∅.

3. Valued preference-based argumentation framework

In this section we extend the standard preference-based argumentation framework, in which the preference relation is Boolean, to the case where the preference relation is valued. The new framework is called "valued preference-based argumentation framework".1 Before going into the details of the framework, we present two motivating examples in the next section.

3.1. Motivating examples

Example 4 (Example 1 continued). We consider here a subpart of Example 1, focusing on the following three arguments: a2 (multi-stage hydraulic fracturing is expected to pollute the water table), a3 (multi-stage hydraulic fracturing is already used in the USA, and there are no proven cases of pollution of the water table) and a4 (the only studies on the impact of gas extraction correspond to older extraction technologies and are not relevant for multi-stage hydraulic fracturing). In this section, the three arguments a2, a3 and a4 are relabeled c, b and a, respectively (see Fig. 2).

1 A valued preference-based argumentation framework must not be confused with a value-based argumentation framework [8]. In the latter, arguments promote values, which may be points of view, decisions, opinions, etc.; a preference relation over the set of arguments is then derived from a preference relation over the values. In our framework, the preference relation over the set of arguments is valued, i.e., it expresses preferences of varied strength over arguments, as we will see later.


Fig. 2. Attacks among arguments.

Fig. 3. Attack relation ;.

Our concern is the acceptance of argument c. In a standard PAF, c is accepted in the following cases: (i) c ≻ b, (ii) not(b ≻ a). In the first case, c does not need to be defended as it is stronger than its attacker. In the second case, c is defended by a. However, we may have the following situation: b ≻ c and (a ∼ b or a ≈ b). We believe that the defense provided by a is not strong, as a is only equally preferred to b or incomparable with b while b is strictly preferred to c. More precisely, when b and a are either indifferent or incomparable, argument a is sufficiently strong to imply that b can be unacceptable, but the strict preference (which can be viewed as stronger than indifference/incomparability) of b over c implies that one cannot really believe that c is acceptable. This intuition is illustrated in the following example.

Example 5 (Example 4 continued). The proponent of argument c is typically the man in the street and thus has weak reliability; on the other hand, arguments a and b are provided by researchers and thus have high and similar reliability. Hence we may have the following situation: b ≻ c and a ∼ b. In this situation, a is acceptable and defeats b. Even though b is defeated by a, the defeat of b on c is so strong that there are some real doubts about accepting c. In other words, the defense provided by a is not strong enough. Unfortunately, a standard PAF cannot prevent this situation. Consequently, our aim is to have an argumentation framework which satisfies the following desired property:

Property 1. Let a, b and c be three arguments such that b Def c and a Def b. If b ≻ c then a defends c against b only if a ≻ b.

That is, when an attacker is strictly preferred to the attacked argument, a defense against this attacker should be provided with a strict preference. To push this idea further, let us consider the following example, which is adapted from Example 4.1.1 in [9].

Example 6. The following arguments discuss whether the UK government's proposal to raise tuition fees for university students is fair2:

• a1: Universities cannot continue using taxpayers' money; so the government's proposals are good.
• a2: Charging tuition fees will discourage students from poorer families, so the proposals are not good.
• a3: There will be a regulator to check that universities take enough poorer students.
• a4: Universities will apply positive discrimination to poorer students, which is not good.

The attack relation is: a1 ; a2, a2 ; a1, a3 ; a2, a4 ; a3, a4 ; a1 and a1 ; a4 (see Fig. 3). In particular, a4 attacks a3, as the use of a regulator (argument a3) implies that some positive discrimination is necessarily conducted (argument a4).

If there is no preference between these arguments, then we have two extensions, A = {a1, a3} and B = {a2, a4}. This makes complete sense intuitively: the two sets of arguments A and B are symmetric (in terms of the defeat relations between their arguments, as there are three defeat arcs from A to B and three defeat arcs from B to A) and deserve the same treatment.

The preferences over the previous arguments depend on the audience. Assume that the audience is sensitive to the statement, denoted by p, that "positive discrimination is not good". The attack of a4 on a3 is based on statement p, so the audience clearly prefers a4 to a3. On the other hand, the attack of a4 on a1 is not based on statement p, but only on the fact that a1 supports that the government's proposals are good while a4 supports the opposite statement. Hence the audience judges that a1 and a4 are incomparable. All other pairs of arguments are incomparable too. To sum up, the only preference over the arguments is: a4 ≻ a3.

2 This proposal appeared in the UK in early 2003.


The situation is no longer symmetric between the two sets A and B. Intuitively, one feels that the preference a4 ≻ a3 gives a clear advantage to B. More precisely, the defense provided by B is stronger than that provided by A.

Under PAF, A and B are both extensions. Indeed, ⟨A, ;, ⪰⟩ represents Dung's AF ⟨A, Def⟩ with

a1 Def a2,  a2 Def a1,  a1 Def a4,  a4 Def a1,  a4 Def a3,  a3 Def a2.

Then we have the following picture:

• Defense provided by A = {a1, a3}:
– a1 defends a3 against a4 because a1 attacks a4 and a1 is incomparable with a4. Note that a4 is strictly preferred to a3.
– a3 defends a1 against a2 because a3 attacks a2 and a3 is incomparable with a2. Note that a1 and a2 are also incomparable.
• Defense provided by B = {a2, a4}:
– a2 defends a4 against a1 because a2 attacks a1 and a2 is incomparable with a1. Note that a1 and a4 are also incomparable.
– a4 defends a2 against a3 because a4 attacks a3 and a4 is strictly preferred to a3.

Yet we can see that A provides a defense with only an "incomparability" against its attackers, while B provides a defense with a "strict preference" against one of its attackers. Strict preference is definitely stronger than incomparability. We therefore believe that B should be retained as a stable extension but not A, because the former provides a stronger defense. Unfortunately, the standard preference-based argumentation framework cannot capture this idea because the defense in this framework is not discriminative.

As we have previously seen, the reasoning drawn with PAF fails to recover some intuitions. This is because it is not discriminative enough, due to the fact that the defense is flat, i.e., all defenses have the same strength. The previous examples suggest that a defense provided with a strict preference in PAF should be stronger than (and thus override) a defense provided with an incomparability or an equal preference. This point was also noted in [4].

Property 2. When the strength of the defense is varied, the defense holds at most as often as in the flat case.

That is, we encounter fewer situations of defense (or at most as many) when the strength of defense is varied. In order to deal with the concerns previously described, we develop in the next section an argumentation framework in which the preference relation is valued. This makes it possible to compute varied-strength defeat relations. Consequently, we have fewer situations of defense compared to the Boolean case, as we will see in Section 5.

3.2. The framework

The idea is to introduce varied levels of preference between arguments, and in particular to differentiate between strict preference and incomparability. As mentioned in the introduction, we consider a very general preference representation, called the valued preference relation (called a fuzzy preference relation in preference modeling) [19]. It encompasses the Boolean preference relation ⪰. A valued preference relation over A is a function P from A × A into [0, 1]. From (1), we note that only the asymmetric part of ⪰ is used in PAF. Accordingly, for valued preferences, we need to consider the strict preference relation only. Therefore P is interpreted as a strict preference. More precisely, P(a, b) is the degree of credibility of the statement "a is strictly preferred to b". In particular, P(a, b) = 1 means that the previous statement is certainly true, P(a, b) = 0 means that it is certainly not true, and P(a, b) = 1/2 means that it is unknown whether the statement is true or not. The following examples provide two ways to construct the function P.

Example 7. Imagine that internet users are asked to vote on the choice between each pair of arguments. Let na,b be the number of individuals thinking that argument a is strictly preferred to b. Then

P(a, b) = (n_{a,b} − n_{b,a}) / (2M) + 1/2 ∈ [0, 1],

where M is the number of voters (we have n_{a,b} + n_{b,a} ≤ M).

Example 8. P(a, b) may represent the intensity of preference of argument a over b according to an audience. The MACBETH approach is a methodology from measurement theory that allows one to construct an (interval) scale on a set, given information on the intensity of preferences between the elements of the set [6]. Seven labels are used to represent the intensities of preferences P(a, b): null, very weak, weak, moderate, strong, very strong and extreme. These labels can be mapped onto the [0, 1] scale.
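The vote-based construction of Example 7 can be sketched as follows (an illustration in our own notation, not part of the paper's formal framework; the poll numbers are hypothetical):

```python
# Sketch of Example 7's vote-based valued preference relation.
# n[(a, b)] is the number of voters who think a is strictly preferred
# to b, and M is the total number of voters.
def vote_based_P(n, M):
    """Return P with P(a, b) = (n_ab - n_ba) / (2M) + 1/2 in [0, 1]."""
    def P(a, b):
        n_ab = n.get((a, b), 0)
        n_ba = n.get((b, a), 0)
        assert n_ab + n_ba <= M, "each voter votes at most once per pair"
        return (n_ab - n_ba) / (2 * M) + 0.5
    return P

# Hypothetical poll over two arguments with M = 10 voters:
# 7 voters prefer a to b, 1 voter prefers b to a, 2 abstain.
P = vote_based_P({("a", "b"): 7, ("b", "a"): 1}, M=10)
print(P("a", "b"))  # 0.8
print(P("b", "a"))  # 0.2
```

Note that abstentions pull P(a, b) towards the "unknown" value 1/2, as intended by the interpretation of P given above.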


Fig. 4. Attack among arguments.

The following relation is a weak generalization of the fact that ≻ is asymmetric (i.e., one cannot have both a ≻ b and b ≻ a):

∀a, b ∈ A,  min(P(a, b), P(b, a)) < 1.  (4)

Definition 13 (Valued preference-based argumentation framework). A valued preference-based argumentation framework (VPAF) is a 3-tuple ⟨A, ;, P⟩ where A is the set of arguments, ; is a binary attack relation ⊆ A × A and P is a function defined from A × A to [0, 1].

The valued preference relation over arguments will serve to evaluate how strong a defeat relation is. This relation together with the attack relation are used to compute a varied-strength defeat relation. Intuitively, the more an argument a is preferred to an argument b, the less the strength of the defeat of b on a is. We instantiate an argumentation framework with varied-strength defeats with a preference-based argumentation framework where preferences have varied intensity.

Definition 14 (From VPAF to AFV). A valued preference-based argumentation framework ⟨A, ;, P⟩ represents an argumentation framework with varied-strength defeats ⟨A, →^(P), VDef_P⟩ iff

a →^(P) b  iff  a ; b and P(b, a) < 1,  (5)

VDef_P(a, b) = 1 − P(b, a)  if a ; b and P(b, a) < 1,  and  VDef_P(a, b) = 0  else.  (6)

This definition is consistent with (3) as VDef_P(a, b) > 0 iff a →^(P) b. Note that the Boolean condition not(b ≻ a) in Definition 1 is extended into the credibility value 1 − P(b, a). Hence relation (5) extends (1).
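A minimal sketch of Definition 14 in code (the names `attacks` and `vdef` are ours; `attacks` is the relation ; as a set of ordered pairs and `P` a valued preference relation):

```python
# Sketch of Definition 14 (from VPAF to AFV).
def vdef(attacks, P):
    """Return VDef_P with VDef_P(a, b) = 1 - P(b, a) if a attacks b
    and P(b, a) < 1, and 0 otherwise (Eqs. (5)-(6))."""
    def VDef(a, b):
        if (a, b) in attacks and P(b, a) < 1:
            return 1 - P(b, a)
        return 0.0
    return VDef

# Hypothetical two-argument example: a attacks b, and the credibility
# that b is strictly preferred to a is 0.25.
attacks = {("a", "b")}
P = lambda x, y: 0.25 if (x, y) == ("b", "a") else 0.0
VDef = vdef(attacks, P)
print(VDef("a", "b"))  # 0.75: a defeats b with strength 1 - 0.25
print(VDef("b", "a"))  # 0.0: b does not attack a, hence no defeat
```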

The notations →^(P) and VDef_P will be used throughout the paper when an argumentation framework with varied-strength defeats is represented by a valued preference-based argumentation framework.

Given a VPAF ⟨A, ;, P⟩, it is of course possible to use the semantics defined in Definition 12 on the associated AFV, namely Acc_sem(→^(P), VDef_P) and Acc_sem(α, →^(P), VDef_P). However one may argue for another definition regarding the first set.

Example 9. Consider the following arguments:

• a1: John will go to jail since he stole a diamond necklace on Monday in Paris.
• a2: John could not have stolen the necklace since Bob saw him on Monday in Bruxelles.
• a3: John could not have been in Bruxelles as he hates Bruxelles.

The attack among arguments is depicted in Fig. 4. Assume that the testimony of Bob is more reliable than that of Paul, and argument a3 is very weak in court. Thus a2 is preferred to a3: a2 ≻ a3.

In PAF, a2 and a3 do not defeat each other (we have neither a2 →^(≻) a3 nor a3 →^(≻) a2) as a2 ≻ a3. Hence {a2, a3} is a stable extension w.r.t. ⟨A, →^(≻)⟩. We distinguish between two visions about this extension:

1. on the one hand, one may strictly follow Dung's framework and accept {a2, a3} as an extension since this set fully satisfies the basic requirements of an extension, namely conflict-freeness and defense;
2. on the other hand, one cannot ignore the fact that a2 and a3 are conflicting due to ;. In some sense, the attack on a1 is weakened by the fact that a3 attacks a2. John's attorney will not use both testimonies of Bob and Paul, as they would introduce too many doubts in the mind of the jury. John's attorney will surely focus on argument a2. Therefore a more appropriate instantiation of Dung's framework is to consider conflict-freeness w.r.t. ;. This property is a postulate proposed in [4].

Following this reasoning, a natural extension of Dung's framework is to define extensions from conflict-freeness w.r.t. ; and defense w.r.t. →^(≻).


Lemma 3. Let ⟨A, ;, ≻⟩ be a PAF and ⟨A, →^(≻)⟩ the AF it represents. Then

{A ⊆ A | A is conflict-free w.r.t. ;} ⊆ {A ⊆ A | A is conflict-free w.r.t. →^(≻)}.

Proof. Follows from Eq. (1). □

The next lemma states that for a PAF, the concepts of conflict-freeness w.r.t. ; and w.r.t. →^(≻) are equivalent when the relation ; is symmetric (i.e., it is a conflict relation).

Lemma 4. Let ⟨A, ;, ≻⟩ be a PAF and ⟨A, →^(≻)⟩ the AF it represents. Assume that ; is symmetric. Then A ⊆ A is conflict-free w.r.t. ; iff it is conflict-free w.r.t. →^(≻).

Proof. Let A ⊆ A be conflict-free w.r.t. →^(≻). Assume by contradiction that A is not conflict-free w.r.t. ;. Hence there exist a, b ∈ A with a ; b and b ; a. This implies that both a ≻ b and b ≻ a, as A is conflict-free w.r.t. →^(≻). This is a contradiction as ≻ is asymmetric. The opposite inclusion is already shown in Lemma 3. □

Translating the problem described above to VPAF means that conflict-freeness should be derived from ; rather than from →^(P). In this paper, we consider both visions and propose an acceptability semantics for each.

Definition 15 (Acceptability semantics in VPAF). Given a semantics sem:

• Acc_sem(;, VDef_P) is the set of extensions where conflict-freeness is computed w.r.t. ;, and the defense is given by Definition 10 for ⟨A, →^(P), VDef_P⟩;
• Acc_sem(α, →^(P), VDef_P) is the set of extensions where conflict-freeness is the α-conflict-freeness, and the defense is given by Definition 10 for ⟨A, →^(P), VDef_P⟩.

The following translates Lemma 4 to the case of a valued preference relation and defeats with varied strength.

Lemma 5. Let ⟨A, ;, P⟩ be a VPAF and ⟨A, →^(P), VDef_P⟩ be the AFV it represents. Assume that ; is symmetric. Then A ⊆ A is conflict-free w.r.t. ; iff it is α-conflict-free, with α = 0.

Proof. Let A ⊆ A be α-conflict-free, with α = 0. Hence VDef_P(a, b) = 0 for all a, b ∈ A. Assume by contradiction that A is not conflict-free w.r.t. ;. This means that there exist a, b ∈ A with a ; b and b ; a. As A is 0-conflict-free, this implies that both P(b, a) = 1 and P(a, b) = 1. This is a contradiction given (4).

Suppose now that A ⊆ A is conflict-free w.r.t. ;. Following (5), we have that a →^(P) b is not true, ∀a, b ∈ A. Therefore VDef_P(a, b) = 0 for all a, b ∈ A with a ≠ b, which means that A is α-conflict-free with α = 0. □

4. From a Boolean preference relation to a valued preference relation

We study in this section how to construct the valued preference relation P if the preferences over arguments are represented by a Boolean relation. We have provided several examples and properties related to a Boolean preference relation in Section 3.1. Let us now analyze our VPAF in the case where the valued preference relation is derived from a Boolean preference relation ≻. One may imagine different ways to make such a derivation. The simplest one is to define P with two levels.

Definition 16. Given ≻, we define a valued preference relation P_bool from ≻ as:

∀a, b ∈ A,  P_bool(a, b) = 1 if a ≻ b, and 0 otherwise.

The next definition gives a more refined computation of P from ≻.


Fig. 5. Attack among arguments.



Definition 17. Given ≻, we define a valued preference relation P_tern from ≻ as:

∀a, b ∈ A,  P_tern(a, b) = 1 if a ≻ b;  0 if b ≻ a;  1/2 otherwise.

Note that the value 1/2 for VDef_{P_tern}(a, b), with a ; b, corresponds either to incomparability (∼) or indifference (≈) between a and b w.r.t. ≻.
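Definitions 16 and 17 can be sketched as follows (our own helper names; the Boolean strict preference ≻ is given as a set `pref` of ordered pairs (a, b) meaning "a ≻ b"):

```python
# Sketch of Definitions 16 and 17: the two-level relation P_bool and
# the three-level relation P_tern derived from a Boolean preference.
def p_bool(pref):
    return lambda a, b: 1.0 if (a, b) in pref else 0.0

def p_tern(pref):
    def P(a, b):
        if (a, b) in pref:
            return 1.0   # a strictly preferred to b
        if (b, a) in pref:
            return 0.0   # b strictly preferred to a
        return 0.5       # incomparability or indifference
    return P

pref = {("a4", "a3")}     # a4 > a3, all other pairs incomparable
P = p_tern(pref)
print(P("a4", "a3"))  # 1.0
print(P("a3", "a4"))  # 0.0
print(P("a1", "a2"))  # 0.5
```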

Example 10 (Example 6 continued). We have

VDef_{P_tern}(a4, a3) = 1,    VDef_{P_tern}(a2, a1) = 1/2,
VDef_{P_tern}(a3, a2) = 1/2,  VDef_{P_tern}(a4, a1) = 1/2,
VDef_{P_tern}(a1, a2) = 1/2,  VDef_{P_tern}(a1, a4) = 1/2,

and VDef_{P_tern} = 0 for the other pairs of arguments as there is no defeat (see Fig. 5). Let α = 0. Now, A = {a1, a3} is no longer an admissible extension since the defeat of a4 ∈ B = {a2, a4} on a3 ∈ A is stronger than the defense that A can give. In fact we have VDef_{P_tern}(a4, a3) = 1 while VDef_{P_tern}(a1, a4) = 0.5. Therefore A is not a stable extension w.r.t. ⟨A, →^(P_tern), VDef_{P_tern}⟩. Consequently, there remains only one stable extension, namely B. The problem raised by this example on the stable extension is thus solved by the introduction of the strength of defeat relations.
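Example 10 can be checked numerically as follows (a sketch: the attack relation is read off the example's nonzero defeat values, and we assume a4 ≻ a3 with all other pairs incomparable, as in Example 6):

```python
# Numerical check of Example 10.
attacks = {("a4", "a3"), ("a3", "a2"), ("a1", "a2"),
           ("a2", "a1"), ("a1", "a4"), ("a4", "a1")}

def p_tern(a, b):
    if (a, b) == ("a4", "a3"):
        return 1.0
    if (a, b) == ("a3", "a4"):
        return 0.0
    return 0.5

def vdef(a, b):
    # Definition 14: VDef(a, b) = 1 - P(b, a) when a attacks b.
    return 1 - p_tern(b, a) if (a, b) in attacks and p_tern(b, a) < 1 else 0.0

# a1 is the only member of A = {a1, a3} attacking a4, but its defeat
# is weaker than the defeat of a4 on a3: the defense fails.
print(vdef("a4", "a3"))  # 1.0
print(vdef("a1", "a4"))  # 0.5
print(vdef("a1", "a4") >= vdef("a4", "a3"))  # False
```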

The next proposition states that VPAF corresponds to PAF when P_bool is considered. This means that our framework is more general in the sense that it can recover PAF.

Proposition 1. Let ⟨A, ;, ≻⟩ be a PAF. Let ⟨A, ;, P_bool⟩ be a VPAF defined from ⟨A, ;, ≻⟩. For every semantics sem and every α ∈ [0, 1),

Acc_sem(α, →^(P_bool), VDef_{P_bool}) = Acc_sem(→^(≻)).

Proof. Clearly, A ⊆ A defends c w.r.t. ⟨A, →^(P_bool), VDef_{P_bool}⟩ iff A defends c w.r.t. ⟨A, →^(≻)⟩. As VDef_{P_bool} can take only values 0 or 1, A is α-conflict-free (with α ∈ [0, 1)) iff for all a, b ∈ A, a →^(P_bool) b is not true, iff for all a, b ∈ A, b ≻ a whenever a ; b (by (5)), iff there are no a, b ∈ A s.t. a →^(≻) b, iff A is conflict-free w.r.t. →^(≻). Then A ⊆ A is α-conflict-free w.r.t. ⟨A, →^(P_bool), VDef_{P_bool}⟩ iff A is conflict-free w.r.t. ⟨A, →^(≻)⟩. As VDef_{P_bool} can take only values 0 or 1, A defends itself for ⟨A, →^(P_bool), VDef_{P_bool}⟩ iff it defends itself for ⟨A, →^(≻)⟩. Hence Acc_adm(α, →^(P_bool), VDef_{P_bool}) = Acc_adm(→^(≻)). Therefore this equality is also true for the two other semantics. □

Note that Lemma 2 already proves the equality between Acc_sem(→^(P_bool), VDef_{P_bool}) and Acc_sem(0, →^(P_bool), VDef_{P_bool}).

The next lemma states that when the preference relation does not intersect the attack relation, the preference relation has no effect on the extensions.




Lemma 6. Let ⟨A, ;, ≻⟩ be a PAF. Let ⟨A, ;, P_tern⟩ be a VPAF defined from ⟨A, ;, ≻⟩. Under Definition 17, if ; ∩ ≻ = ∅, then for every α ∈ ℝ⁺,

Acc_adm(;) = Acc_adm(→^(P_tern), VDef_{P_tern}) = Acc_adm(α, →^(P_tern), VDef_{P_tern}).

The proof of Lemma 6 is clear and is thus omitted. The next lemma shows that Property 1 is fulfilled, which was an important part of our motivation in Section 3.1.

Lemma 7. Let ⟨A, ;, ≻⟩ be a PAF and ⟨A, →^(P_tern), VDef_{P_tern}⟩ be an AFV, such that P_tern is derived from ≻ following Definition 17. Then, if a ; b and b ; c, then a defends c w.r.t. ⟨A, →^(P_tern), VDef_{P_tern}⟩ if one of the following cases holds:

• c ≻ b,
• [c ∼ b or c ≈ b] and not(b ≻ a),
• b ≻ c and a ≻ b.

Indeed, the third item fulfills Property 1.

Proof. One must have VDef(a, b) ≥ VDef(b, c). We distinguish between three cases:

• VDef(b, c) = 0 (i.e. c ≻ b);
• VDef(b, c) = 1/2 (i.e. c ∼ b or c ≈ b) and VDef(a, b) ≥ 1/2 (i.e. not(b ≻ a));
• VDef(b, c) = 1 (i.e. b ≻ c) and VDef(a, b) = 1 (i.e. a ≻ b). □

5. Link between the defense for Boolean and valued preference relations – Fulfillment of Property 2

We show in this section that Property 2 is fulfilled in our framework. In Example 6, there are two stable extensions obtained with PAF. One of them provides a weaker defense and should not be considered as an extension. This shortcoming of preference-based argumentation frameworks comes from the fact that the concept of a defense with a Boolean preference relation is not discriminative enough. In Example 10, we also saw that the defense occurs less often with a valued preference-based argumentation framework compared to a preference-based argumentation framework. In fact, a2, a4, a3 and a1 respectively defend a4, a2, a1 and a3 against a1, a3, a2 and a4 respectively w.r.t. ⟨A, →^(≻)⟩. On the other hand, only the first three defenses hold given ⟨A, →^(P_tern)⟩. This property, i.e. that of the defense occurring less often with VPAF compared to PAF, is depicted in Property 2. We first show in Section 5.1 that this property holds, under a very mild condition on the Boolean and valued preference relations. Then we prove in Section 5.2 that the defense occurs in the same situations for a valued preference-based argumentation framework and a preference-based argumentation framework if and only if the valued preference relation P is somehow Boolean. In other words, whenever there is some real graduality in P, there are strictly fewer situations of defense in a valued preference-based argumentation framework compared to a preference-based argumentation framework.

5.1. General inclusion result between the defense for Boolean and valued preference relations

Consider a PAF ⟨A, ;, ≻⟩ and a VPAF ⟨A, ;, P⟩. We introduce a very weak assumption on the relationship between P and ≻ to express that P is a refinement of ≻. The assumption simply says that P(a, b) is larger when a is strictly preferred to b than when it is not the case. Formally,

∀a, b, c, d ∈ A:  if a ≻ b and not(c ≻ d) then P(a, b) > P(c, d).  (7)

Relation (7) can be seen as some consistency condition between ≻ and P in the sense that the more a is preferred to b with respect to ≻, the larger P(a, b). Henceforth it is assumed to hold. We also make the following assumption on P. Recall that P(a, b) is the degree of credibility of the statement "a is strictly preferred to b". Hence, if P(a, b) is equal to 1 then a should be strictly preferred to b w.r.t. ≻:

∀a, b ∈ A:  if P(a, b) = 1 then a ≻ b.  (8)

Lastly the degree of preference P(a, b) should be strictly positive when a ≻ b:

∀a, b ∈ A:  if a ≻ b then P(a, b) > 0.  (9)

Definition 18 formalizes Property 2.


Definition 18. Let ⟨A, ;, P⟩ and ⟨A, ;, ≻⟩ respectively represent ⟨A, →^(P), VDef_P⟩ and ⟨A, →^(≻)⟩. The defense w.r.t. ⟨A, ;, P⟩ is said to be more discriminative than that w.r.t. ⟨A, ;, ≻⟩ if

{(A, c) | A ⊆ A, c ∈ A and A defends c w.r.t. ⟨A, →^(P), VDef_P⟩}
  ⊆ {(A, c) | A ⊆ A, c ∈ A and A defends c w.r.t. ⟨A, →^(≻)⟩}.  (10)

Proposition 2. Let ⟨A, ;, P⟩ and ⟨A, ;, ≻⟩ respectively represent ⟨A, →^(P), VDef_P⟩ and ⟨A, →^(≻)⟩. Under (7) and (8), property (10) holds.

Proof. Let A ⊆ A and c ∈ A. Then A defends c w.r.t. ⟨A, →^(≻)⟩ iff for every b ∈ A such that b ; c,

c ≻ b or [∃a ∈ A: a ; b and not(b ≻ a)].  (11)

Moreover A defends c w.r.t. ⟨A, →^(P), VDef_P⟩ iff not(b →^(P) c) or [∃a ∈ A: a ; b and VDef_P(a, b) ≥ VDef_P(b, c)], that is iff

P(c, b) = 1 or [∃a ∈ A: a ; b and P(b, a) ≤ P(c, b)].  (12)

The result of the proposition is clear if c is not attacked w.r.t. ;. Assume now that A defends c w.r.t. ⟨A, →^(P), VDef_P⟩. Let b ∈ A such that b ; c. Then (12) holds. By (7) and (8), we obtain that either c ≻ b or there exists a ∈ A such that a ; b and [not(b ≻ a) or c ≻ b]. Hence (11) holds and the proposition is proved. □

The next result shows that Property 2 is fulfilled.

Corollary 1. Let ⟨A, ;, P⟩ and ⟨A, ;, ≻⟩ respectively represent ⟨A, →^(P), VDef_P⟩ and ⟨A, →^(≻)⟩. Under (7) and (8), Acc_adm(0, →^(P), VDef_P) ⊆ Acc_adm(→^(≻)).

Note that according to Lemma 2, Acc_adm(;, VDef_P) ⊆ Acc_adm(→^(≻)).

Proof. Let A ∈ Acc_adm(0, →^(P), VDef_P). By Proposition 2, A defends itself against all attacks w.r.t. →^(≻) (defined by (1)). Moreover, by (3), as A is α-conflict-free, with α = 0, we have not(a →^(P) b) for all a, b ∈ A. Hence, either not(a ; b) or P(b, a) = 1. By (8), we obtain either not(a ; b) or b ≻ a. Thus A is also conflict-free w.r.t. →^(≻). Hence A ∈ Acc_adm(→^(≻)). □

5.2. Case when P derives from a valuation of the arguments

We have shown in Corollary 1 that the extensions obtained with VPAF from a valued preference relation P are necessarily included in the extensions of a PAF from a Boolean relation ≻, provided that P is a natural extension of ≻ (where natural is defined by (7) and (8)). We investigate in this section the necessary and sufficient conditions under which the inclusion is just an equality. This will provide the situations where the inclusion is strict. This is an important issue as the idea of Property 2 is that there are strictly fewer extensions with VPAF compared to PAF.

We consider thus the case where ⟨A, ;, ≻⟩ and ⟨A, ;, P⟩ yield the same situations of defense. We show that property (10), where the inclusion is replaced with an equality, holds only when the valued preference relation is Boolean. We restrict ourselves in this section to the case when P derives from a valuation w on the arguments.

Let ⟨A, ;, w⟩ be a weighted preference-based argumentation framework and ⟨A, ;, ≻_w⟩ be its associated preference-based argumentation framework according to Eq. (2). Let ⟨A, ;, P^w⟩ be a valued preference-based argumentation framework where P^w is computed from w. The simplest expression of P^w is the one that is similar to ≻_w: P^w_bool = P_bool applied to ≻_w (see Definition 16), i.e.

∀a, b ∈ A,  P^w_bool(a, b) = 1 if w(a) > w(b), and 0 if w(a) ≤ w(b).  (13)

We will give other examples of functions P^w in Section 6. Given w, we compare the defense in both frameworks ⟨A, ;, ≻_w⟩ and ⟨A, ;, P^w⟩. We assume that the strict preference relation P^w can be written from w and a function p : [0, 1] × [0, 1] → [0, 1] [19]. It is denoted by P^w_p. Formally, we have

∀a, b ∈ A,  P^w_p(a, b) = p(w(a), w(b)).  (14)


The larger the valuation of argument a, the more the statement "a is strictly preferred to b" is true; the smaller the valuation of argument b, the more the previous statement is true. Hence we assume some monotonicity conditions on p:

∀v1 ≤ v1′, ∀v2 ≥ v2′,  p(v1, v2) ≤ p(v1′, v2′).  (15)

Relation (7) applied on w and P^w is equivalent to:

p(α, β) > p(δ, γ) whenever α > β and δ ≤ γ.  (16)

In the situation where w(a) = 1 and w(b) = 0, we have a clear strict preference of a over b (see (8) and (9)). Hence the statement "a is strictly preferred to b" is certainly true and P^w_p(a, b) = 1. Likewise, when w(a) = 0 and w(b) = 1, the statement "a is strictly preferred to b" is certainly not true and P^w_p(a, b) = 0. Hence, we have the boundary conditions:

p(0, 1) = 0 and p(1, 0) = 1.  (17)

Function p shall be basically continuous in its two coordinates. Indeed, in P^w_p(a, b), if the valuation of argument a or b is changed just a little bit, then one expects that the overall preference between a and b also changes only slightly. Yet there are examples for which function p is discontinuous. This is for instance the case of P^w_bool(a, b), where p_bool(v1, v2) = 1 if v1 > v2 and p_bool(v1, v2) = 0 otherwise. In the next Section 6, we will consider continuous functions p only, which corresponds to a non-degenerate valued preference relation. However, in this Section 5.2, we want to show that we have an equality in Property 2 if and only if P = P^w_bool. Hence we need to allow that p can be equal to p_bool, and thus to relax the continuity property. Yet, function p_bool is discontinuous on the diagonal, i.e. for p_bool(t, v) at t = v. We only allow p to be discontinuous on the diagonal as it does not make sense to have a discontinuity elsewhere. This leads to the following assumption:

Function (t, v) → p(t, v) is continuous except at t = v.  (18)

We have seen in Example 5 that arguments b and c have the same valuation (noted t). We write VDef_{P^w_p}(b, c) = 1 − p(w(b), w(c)) when b ; c. Note that w(b) = w(c) = t. In this example, the degree of preference of b over c shall not depend on the common valuation t of these two arguments. Hence, for symmetry reasons, we assume that

∀t, v ∈ [0, 1],  p(t, t) = p(v, v).  (19)

Due to the possible discontinuity on the diagonal, the previous assumption can be generalized in the following way:

∀t, v ∈ (0, 1],  p(t, t⁻) = p(v, v⁻),  (20)

with the notation p(t, v⁻) = lim_{ε→0, ε>0} p(t, v − ε). Among the previous properties, we assume in this section that (14), (15), (17), (18) and (20) hold. The other properties (16) and (19) will be used in Section 6. Lastly, we also assume that the function p is fixed and does not depend on w.

In the light of our assumed properties, we wish to show that (10) holds iff p = p_bool. We consider a situation where an argument a defends an argument c and both a and c can be in the same extension. Hence not(a ; c), and there exists b such that a ; b, b ; c. We can state the following result:

Proposition 3. Let A be a set of arguments and ; an attack relation on A such that there exist a, b, c ∈ A with a ; b, b ; c and not(a ; c). Let ⟨A, ;, w⟩ be a weighted preference-based argumentation framework and ⟨A, ;, ≻_w⟩ be its associated preference-based argumentation framework according to Eq. (2). Let ⟨A, ;, P^w_p⟩ be a valued preference-based argumentation framework where P^w_p is computed from w. Let ⟨A, →^(≻_w)⟩ be the AF represented by ⟨A, ;, ≻_w⟩ and ⟨A, →^(P^w_p), VDef_{P^w_p}⟩ be the AFV represented by ⟨A, ;, P^w_p⟩. Assume that p satisfies (15), (17), (18) and (20). Then

{(A, c) | A ⊆ A, c ∈ A and A defends c w.r.t. ⟨A, ;, ≻_w⟩}
  = {(A, c) | A ⊆ A, c ∈ A and A defends c w.r.t. ⟨A, ;, P^w_p⟩}  (21)

is fulfilled for every w : A → [0, 1] iff P^w_p = P^w_bool (see (13)).

From the above result, P^w_bool is the valued preference relation representing ≻_w.

Proof. First of all, if P^w_p = P^w_bool then it is easy to see that (21) holds. Assume now that (21) holds. Let a, b, c ∈ A such that a ; b, b ; c and not(a ; c). In the rest of the proof, we will consider weight functions w such that

∀d ∈ A \ {a, b, c},  w(d) = 0 and w(c) > 0.  (22)


In the framework ⟨A, ;, ≻_w⟩, c can be defeated only by b. Then a defends c w.r.t. ⟨A, →^(≻_w)⟩ iff

w(c) > w(b) or w(a) ≥ w(b).  (23)

In the framework ⟨A, ;, P^w_p⟩, a defends c w.r.t. ⟨A, →^(P^w_p), VDef_{P^w_p}⟩ iff p(w(c), w(b)) = 1 or p(w(b), w(a)) ≤ p(w(c), w(b)), that is iff

p(w(b), w(a)) ≤ p(w(c), w(b)).  (24)

We have assumed that (21) is true. Now, from Proposition 2, we only need to consider the case when the left hand side of (21) is included in the right hand side of (21). Considering this inclusion, when (21) holds we obtain that "if (23) is true then (24) is also true".

Let us consider the first condition in (23). We consider a sequence {w_k}_{k∈ℕ} of weights on A and a sequence {ε_k}_{k∈ℕ} of numbers in (0, 1] that tend to 0 when k increases, such that

w_k(c) = 1,  w_k(b) = 1 − ε_k  and  w_k(a) = 0.

This sequence satisfies (22) and (23). Then relation (24) applied to (a, b, c) gives p(1 − ε_k, 0) ≤ p(1, 1 − ε_k). Letting k tend to infinity, we obtain by (17) and (18), 1 ≥ p(1, 1⁻) ≥ p(1⁻, 0) = p(1, 0) = 1. Hence by (20), p(t, t⁻) = 1 for all t ∈ (0, 1]. For t > v, 1 ≥ p(t, v) ≥ p(t, t⁻) = 1 by (15). Then

∀t, v ∈ [0, 1] with t > v,  p(t, v) = 1.  (25)

Let us consider the second condition in (23). We consider a sequence {w_k}_{k∈ℕ} of weights on A and a sequence {ε_k}_{k∈ℕ} of numbers in (0, 1] that tend to 0 when k increases, such that

w_k(c) = ε_k  and  w_k(b) = w_k(a) = 1.

This sequence satisfies (22) and (23). Then, relation (24) gives 0 ≤ p(1, 1) ≤ p(ε_k, 1). At the limit, we obtain by (17), 0 ≤ p(1, 1) ≤ p(0, 1) = 0. Hence p(1, 1) = 0. This implies that for t ≤ v, we have 0 ≤ p(t, v) ≤ p(v, v) = 0. Then

∀t, v ∈ [0, 1] with t ≤ v,  p(t, v) = 0.  (26)

Combining (25) and (26), we obtain that P^w_p = P^w_bool. □

The equivalence between having the same defense as a Boolean preference relation and having a Boolean relation can be extended to the admissible extensions. The following result shows that we have an equality in Property 2 if and only if P = P^w_bool.

Corollary 2. Let A be a set of arguments and ; an attack relation on A such that there exist a, b, c ∈ A with a ; b, b ; c and not(a ; c). Let ⟨A, ;, w⟩ be a weighted preference-based argumentation framework and ⟨A, ;, ≻_w⟩ be its associated preference-based argumentation framework according to Eq. (2). We have Acc_adm(→^(≻_w)) = Acc_adm(α, →^(P^w_p), VDef_{P^w_p}) for all w : A → [0, 1] iff P^w_p = P^w_bool.

Proof. Clearly, if P^w_p = P^w_bool then we have the same defense for ⟨A, ;, ≻_w⟩ and ⟨A, ;, P^w_p⟩ (see Proposition 3), and conflict-freeness w.r.t. ⟨A, ;, ≻_w⟩ is the same as α-conflict-freeness w.r.t. ⟨A, ;, P^w_p⟩ for α ∈ [0, 1). Hence we have the same extensions in both frameworks.

Conversely, assume that P^w_p ≠ P^w_bool and Acc_adm(→^(≻_w)) = Acc_adm(α, →^(P^w_p), VDef_{P^w_p}) for all w : A → [0, 1]. Let a, b, c ∈ A with a ; b, b ; c, not(a ; c). Assume first by contradiction that c ; a. Let us consider w such that w(b) > w(c) > w(a) > 0 and w(d) = 0 for all d ∈ A \ {a, b, c}. For the Boolean framework, the admissible extensions are {b}, {a, b}. This set corresponds also to the admissible extensions in the valued framework if p(w(b), w(a)) = 1 (as {a, b} is 0-conflict-free) and p(w(c), w(b)) ≤ p(w(a), w(c)). This holds for every w defined as before. The first relation implies that p(u, v) = 1 whenever u > v. Proceeding as in the proof of Proposition 3, taking a family of weight functions w_k satisfying the previous constraints and such that w_k(a) tends to 0, w_k(b) = 1 and w_k(c) tends to 1, we obtain at the limit 0 ≤ p(1, 1) ≤ p(0, 1) = 0 (see (17)). Hence if u ≤ v, 0 ≤ p(u, v) ≤ p(v, v) = p(1, 1) = 0 (see (15) and (19)). This contradicts the fact that P^w_p ≠ P^w_bool.

Hence it remains to consider the case when not(c ; a). From the proof of Proposition 3, we have shown that there exists w such that a defends c w.r.t. ⟨A, ;, ≻_w⟩ and not w.r.t. ⟨A, ;, P^w_p⟩. As a and c are not in conflict, {a, c} is an admissible extension of ⟨A, ;, ≻_w⟩ but not of ⟨A, ;, P^w_p⟩. We also obtain a contradiction. □


If P does not derive from a valuation of the arguments, then the satisfaction of (21) does not necessarily imply that P is Boolean. The following lemma states that in this case, P has a special form. Indeed the intensity of preference is not linear and is somehow exponential in the relative order between arguments: the intensity of preference between the preferred arguments w.r.t. ≻ is much larger than that for the less preferred arguments.

Lemma 8. Assume that ≻ is acyclic. Let C_0, C_1, ..., C_q be a topological sorting of A w.r.t. ≻, i.e. C_i is the set of dominated elements of A \ (C_0 ∪ ··· ∪ C_{i−1}) w.r.t. ≻, where C_{−1} = ∅. Define P by

∀a ∈ C_i, ∀b ∈ C_j,  P(a, b) = 1/2^(q−i) − 1/2^(q−j) if i > j, and 0 otherwise.

Then

{(A, c) | A ⊆ A, c ∈ A and A defends c w.r.t. ⟨A, ;, ≻⟩}
  = {(A, c) | A ⊆ A, c ∈ A and A defends c w.r.t. ⟨A, ;, P⟩}.  (27)

Proof. The topological sorting is well-defined as ≻ is acyclic. From Proposition 2, we need only to consider the case when the left hand side of (27) is included in the right hand side of (27). Under the assumption that "if (11) is true then (12) is also true", the previous inclusion is satisfied. This assumption holds if for every a, b, c ∈ A s.t. a ; b and b ; c,

if c ≻ b or not(b ≻ a) then P(c, b) ≥ P(b, a).  (28)

Let us now consider P as defined in the statement of the lemma, and let us show that (28) is fulfilled. We will consider two cases. The first situation arises when not(b ≻ a). Then from the definition of P, we have P(b, a) = 0 and thus (28) is necessarily fulfilled whenever not(b ≻ a). The second situation arises when b ≻ a and c ≻ b. Let c ∈ C_i, b ∈ C_j and a ∈ C_k with i > j > k. Then

P(c, b) − P(b, a) = 1/2^(q−i) − 2/2^(q−j) + 1/2^(q−k) > 0.

Hence (28) is fulfilled whenever b ≻ a and c ≻ b. We conclude that (28) is satisfied. □
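The layered preference relation of Lemma 8 can be sketched as follows (our own helper names; `layer` maps each argument to its index i in the topological sorting C_0, ..., C_q):

```python
# Sketch of Lemma 8's preference relation: P(a, b) = 1/2^(q-i) - 1/2^(q-j)
# when the layer i of a is strictly above the layer j of b, 0 otherwise.
def layered_P(layer, q):
    def P(a, b):
        i, j = layer[a], layer[b]
        return 1 / 2 ** (q - i) - 1 / 2 ** (q - j) if i > j else 0.0
    return P

# Hypothetical 3-layer chain a2 > a1 > a0 (so layer(a_k) = k, q = 2).
P = layered_P({"a0": 0, "a1": 1, "a2": 2}, q=2)
print(P("a1", "a0"))  # 0.25 = 1/2 - 1/4
print(P("a2", "a1"))  # 0.5  = 1 - 1/2
# The intensity of preference grows with the layer, as claimed: the gap
# between the top layers exceeds the gap between the bottom ones.
print(P("a2", "a1") > P("a1", "a0"))  # True
```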

6. Instantiation of the valued preference relation: From a valuation of arguments to a valued preference relation

We study in this section how to construct the valued preference relation P if the preferences over arguments are represented by a valuation w over arguments. The idea is to derive such a construction from the extensions that are obtained. More precisely, for some constructions of P from w, non-relevant extensions are obtained. This yields some properties that such a construction should satisfy. As mentioned in the introduction, there are many possible interpretations of a valuation on the arguments.

Example 11. Twitter has become one of the major micro-blogging social networks. A major issue is to assess the reliability of Twitter accounts and the information they convey [10]. A method for identifying the reliable Twitter users in a particular domain is proposed in [10]. Several indicators on the reliability of Twitter sources are proposed in [14], in the particular context of events that have just happened. In [11], a classification method is used to assess the credibility of a tweet. An in-depth analysis of the factors influencing the perception that users have of the credibility of a tweet is performed in [29]. The most relevant features to assess the reliability of a tweet are: commitment of the user to his Twitter account (presence of a picture, level of detail of the description of the account, volume and frequency of publications, . . .), legitimacy of the account in a specific thematic (proximity of the thematic to the subject of interest described in the account), and richness of expression (quality of spelling and grammar). A multi-criteria approach based on these factors has been proposed in [30] to assess the credibility of a tweet.

Let ⟨A, ;, w⟩ be a WPAF and ⟨A, ;, ≻_w⟩ be its associated PAF. Let ⟨A, ;, P^w⟩ be a VPAF where P^w is computed from w. Let us give two examples fulfilling (14), (15), (16), (17) and (19).



∀a, b ∈ A,  P_1^w(a, b) = 1 if w(a) > w(b), and w(a) − w(b) + 1 if w(a) ≤ w(b).

∀a, b ∈ A,  P_2^w(a, b) = 0 if w(a) < w(b), and w(a) − w(b) if w(a) ≥ w(b).

Given w, we compare the defense in both frameworks ⟨A, ;, ≻_w⟩ and ⟨A, ;, P^w⟩ in order to derive the properties that P^w should satisfy.
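The two relations above can be sketched directly (our own helper names, following the definitions of P_1^w and P_2^w): P_1 saturates at 1 as soon as w(a) > w(b), while P_2 is 0 as soon as w(a) < w(b).

```python
# Sketch of P_1^w and P_2^w as functions p(w(a), w(b)).
def p1(wa, wb):
    return 1.0 if wa > wb else wa - wb + 1

def p2(wa, wb):
    return 0.0 if wa < wb else wa - wb

# Both satisfy the boundary conditions (17): p(1, 0) = 1 and p(0, 1) = 0.
print(p1(1, 0), p1(0, 1))  # 1.0 0
print(p2(1, 0), p2(0, 1))  # 1 0.0

# Resulting defeat strength under P_1 (Definition 14):
# VDef(a, b) = 1 - p1(w(b), w(a)) = w(a) - w(b) when w(a) > w(b).
wa, wb = 0.9, 0.6
print(round(1 - p1(wb, wa), 10))  # 0.3
```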


Table 1
The Yes/No are the answers to the question "Does a defend c?". The admissible extensions are also given, where P(A) denotes all non-empty subsets of A. In situations δ1 and δ2, "≪" marks the larger of the two gaps: δ1 means w(b) − w(c) < w(a) − w(b), and δ2 means w(b) − w(c) > w(a) − w(b).

Sit. | Condition on w      | a defends c w.r.t. ≻_w | a defends c w.r.t. P_1^w | Acc_adm(→^(≻_w)) | Acc_adm(0, →^(P_1^w), VDef_{P_1^w}) | Acc_adm(;, VDef_{P_1^w})
α    | w(a) < w(b) < w(c)  | Yes | Yes | P({a, b, c}) | P({a, b, c}) | {a}, {b}, {c}, {a, c}
β    | {w(c), w(a)} < w(b) | No  | No  | P({a, b})    | P({a, b})    | {a}, {b}
γ    | w(b) < {w(c), w(a)} | Yes | Yes | P({a, c})    | P({a, c})    | P({a, c})
δ1   | w(c) < w(b) ≪ w(a)  | Yes | Yes | P({a, c})    | P({a, c})    | P({a, c})
δ2   | w(c) ≪ w(b) < w(a)  | Yes | No  | P({a, c})    | {a}          | {a}
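The Yes/No columns of Table 1 can be checked with a small script (a sketch in our own notation, for the chain a ; b ; c: the Boolean defense condition is w(c) > w(b) or w(a) ≥ w(b), and the valued one under P_1 follows the defense condition (12)):

```python
# Check of the Yes/No columns of Table 1 (a attacks b, b attacks c).
def p1(wa, wb):
    return 1.0 if wa > wb else wa - wb + 1

def defends_bool(wa, wb, wc):
    # Boolean defense: c preferred to b, or a's defeat on b holds.
    return wc > wb or wa >= wb

def defends_p1(wa, wb, wc):
    # Valued defense: P1(c, b) = 1 or P1(b, a) <= P1(c, b).
    return p1(wc, wb) == 1.0 or p1(wb, wa) <= p1(wc, wb)

# One representative weight vector (w(a), w(b), w(c)) per situation.
situations = {
    "alpha":  (0.1, 0.5, 0.9),   # w(a) < w(b) < w(c)
    "beta":   (0.1, 0.9, 0.5),   # {w(c), w(a)} < w(b)
    "gamma":  (0.9, 0.1, 0.5),   # w(b) < {w(c), w(a)}
    "delta1": (0.9, 0.5, 0.4),   # w(b)-w(c) < w(a)-w(b)
    "delta2": (0.6, 0.5, 0.1),   # w(b)-w(c) > w(a)-w(b)
}
for name, (wa, wb, wc) in situations.items():
    print(name, defends_bool(wa, wb, wc), defends_p1(wa, wb, wc))
# alpha/gamma/delta1: True True; beta: False False; delta2: True False
```

Only situation δ2 separates the two frameworks, in accordance with the table.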

6.1. Study of P_1^w

Following P_1^w, the strict preference between two arguments a and b is certain as soon as w(a) > w(b). Hence we have VDef_{P_1^w}(a, b) = w(a) − w(b) if w(a) > w(b), and VDef_{P_1^w}(a, b) = 0 otherwise (as not(a ≻^(P_1^w) b)).

Let a, b and c be three arguments such that a ; b and b ; c. We are interested in the defense provided by a in favor of c against b, in the two frameworks ⟨A, ;, ≽_w⟩ and ⟨A, ;, P_1^w⟩. We distinguish between the five situations summarized in Table 1.

• Situation α: w(a) < w(b) < w(c). Then not(a Def^(P_1^w) b) and not(b Def^(P_1^w) c). We also have neither b Def^(≽_w) c nor a Def^(≽_w) b, since c ≻_w b and b ≻_w a. The following figure depicts the defeats with the Boolean Def^(≽_w) and VDef_{P_1^w} relations.

    Bool:            a    b    c    (no defeat)
    VDef_{P_1^w}:    a    b    c    (no defeat)

The Boolean and valued cases yield the same conclusion: there is no defeat of b on c.

• Situation β: w(c) < w(b) and w(a) < w(b) (written in a compact way as {w(c), w(a)} < w(b)). Then not(a Def^(P_1^w) b) and b Def^(P_1^w) c with VDef_{P_1^w}(b, c) > 0. We also have b Def^(≽_w) c but not(a Def^(≽_w) b). The intensity of the defeat is written above the arrow in the valued case.

    Bool:            a    b ----------------> c
    VDef_{P_1^w}:    a    b --[w(b) − w(c)]--> c

The defense provided by a fails both in the Boolean and the valued cases.

• Situation γ: w(b) < {w(c), w(a)} with the compact notation. We have VDef_{P_1^w}(a, b) > 0 and not(b Def^(P_1^w) c). We also have not(b Def^(≽_w) c) but a Def^(≽_w) b.

    Bool:            a ----------------> b    c
    VDef_{P_1^w}:    a --[w(a) − w(b)]--> b --[0]--> c

Hence a defeats b and b does not defeat c, both in the Boolean and the valued cases.

• Situation δ: w(c) < w(b) < w(a).

    Bool:            a ----------------> b ----------------> c
    VDef_{P_1^w}:    a --[w(a) − w(b)]--> b --[w(b) − w(c)]--> c

We have two sub-situations:
– Situation δ1: w(c) < w(b) ≪ w(a), which means that w(b) − w(c) < w(a) − w(b). Hence VDef_{P_1^w}(a, b) = w(a) − w(b) > VDef_{P_1^w}(b, c) = w(b) − w(c). We also have b Def^(≽_w) c and a Def^(≽_w) b. Indeed a defends c both in the Boolean and the valued cases.
– Situation δ2: w(c) ≪ w(b) < w(a), which means that w(b) − w(c) > w(a) − w(b). Hence VDef_{P_1^w}(a, b) = w(a) − w(b) < VDef_{P_1^w}(b, c) = w(b) − w(c). On the other hand, we have b Def^(≽_w) c and a Def^(≽_w) b. Indeed the Boolean and the valued cases yield different conclusions: a defends c in the Boolean case but not in the valued case.

In sum, we get the same conclusion w.r.t. ⟨A, ;, ≽_w⟩ and ⟨A, ;, P_1^w⟩, except in situation δ2, in which the defeat of b on c is large whereas the defeat of a on b is weak. However, the intuition of the Boolean case (≽_w) is valid: a is stronger


than both b and c, and, because of that, it deserves to defend c against the attack of b (even if a is just slightly stronger than b). This situation is confirmed in Example 5 and Property 1. Consequently, we conclude that the expression P_1^w is not suitable.

Regarding the extensions, we note that Acc_adm(0, Def^(P_1^w), VDef_{P_1^w}) and Acc_adm(;, VDef_{P_1^w}) differ in two situations (α and β). There is only one situation in which the defense is not the same between the Boolean and the valued cases. Yet, in terms of acceptability, there are many more situations in which we do not find the same extensions (situations α, β and δ2 in Table 1). In situation β, {a, b} does not form an extension for ⟨A, Def^(P_1^w), VDef_{P_1^w}⟩ as a ; b.
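The five rows of Table 1 can be replayed numerically. The sketch below is only an illustration: the weights are arbitrary instances of each condition, and the defense test encodes the requirement, used throughout this section, that the defeat of the defender a on b be at least as strong as the defeat of b on c.

```python
def vdef_p1w(wa, wb):
    # VDef_{P_1^w}(a, b) = w(a) - w(b) if w(a) > w(b), and 0 otherwise.
    return max(wa - wb, 0.0)

def defends(wa, wb, wc):
    # a (attacking b) defends c (attacked by b) when the defeat of a on b
    # is at least as strong as the defeat of b on c.
    return vdef_p1w(wa, wb) >= vdef_p1w(wb, wc)

# Arbitrary weights (w(a), w(b), w(c)) instantiating each row of Table 1.
situations = {
    "alpha":  (0.2, 0.5, 0.8),   # w(a) < w(b) < w(c)
    "beta":   (0.2, 0.8, 0.4),   # {w(c), w(a)} < w(b)
    "gamma":  (0.8, 0.2, 0.6),   # w(b) < {w(c), w(a)}
    "delta1": (0.9, 0.4, 0.3),   # w(c) < w(b) << w(a)
    "delta2": (0.6, 0.5, 0.1),   # w(c) << w(b) < w(a)
}

answers = {name: defends(*w) for name, w in situations.items()}
# Only delta2 differs from the Boolean column of Table 1.
```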

Lemma 9. For VDef_{P_1^w}, let A ∈ Acc_adm(0, Def^(P_1^w), VDef_{P_1^w}) \ Acc_adm(;, VDef_{P_1^w}). Then there exist a, b ∈ A such that either [a ; b, not(b ; a) and w(a) ≤ w(b)] or [a ; b, b ; a and w(a) = w(b)].

The situation in which the two definitions provide different extensions thus arises quite often.

Proof. For such an A, there exist a, b ∈ A such that a ; b and VDef_{P_1^w}(a, b) = VDef_{P_1^w}(b, a) = 0. If not(b ; a), this implies that w(a) ≤ w(b). Now if b ; a, we obtain w(a) = w(b). □

6.2. General properties of P^w

As in Section 5.2, we assume that the valued preference depends on a function p and is given by (14). It is denoted by P_p^w. We start from the assumptions (16), (17) and (19) introduced in Section 5.2, and the monotonicity conditions on p. Moreover, condition (18) is strengthened. By virtue of Proposition 3, we consider a non-Boolean preference function P_p^w. Hence there shall be no discontinuity of p on the diagonal; function p shall thus be continuous. We assume that p is fixed and does not depend on A and w. It is also supposed to satisfy all previous requirements.

Following Example 5 and Property 1, the situation δ2 raised in the study of P_1^w can be formalized in the following way.

Unrestricted positive defense (UPD): Let A be a set of arguments, and w be a function from A to [0, 1]. Let ⟨A, ;, P_p^w⟩ be a VPAF, where P_p^w is given by (14), representing an AFV ⟨A, Def^(P_p^w), VDef_{P_p^w}⟩. Let a, b, c ∈ A. If a ; b, b ; c and w(a) ≥ w(b) ≥ w(c), then a defends c against b w.r.t. ⟨A, Def^(P_p^w), VDef_{P_p^w}⟩.

Consequently, we have the following result.

Proposition 4. Under UPD valid for every weight function w: A → [0, 1], we have P_p^w(a, b) = 0 whenever w(a) ≤ w(b).

Proof. Let us consider A ⊆ A, b, c ∈ A and a ∈ A satisfying a ; b and b ; c. Consider furthermore a weight function w such that

  w(c) = 0,   w(b) = 1   and   w(a) = 1.

By UPD, one shall have VDef_{P_p^w}(a, b) = 1 − p(w(b), w(a)) ≥ VDef_{P_p^w}(b, c) = 1 − p(w(c), w(b)), that is, p(w(c), w(b)) ≥ p(w(b), w(a)). For w(c) = 0, w(b) = 1 and w(a) = 1, we obtain 0 = p(0, 1) ≥ p(1, 1) ≥ 0. Hence p(1, 1) = 0. From (19), we obtain p(t, t) = 0 for all t ∈ [0, 1]. By non-decreasingness of p w.r.t. the first argument, 0 ≤ p(t, v) ≤ p(v, v) = 0 for all t ≤ v with t, v ∈ [0, 1]. □

From Proposition 4, there is no way the statement "a is strictly preferred to b" can be true when w(a) ≤ w(b). One is sure about the credibility of this assertion only when w(a) is significantly larger than w(b). Therefore P_1^w is ruled out and P_2^w is retained. We denote by VDef_{P_2^w} the value of VDef for the preference P_2^w. Assume that a ; b and b ; c. Considering P_2^w, VDef_{P_2^w}(a, b) = min(1 + w(a) − w(b), 1). We distinguish between the five situations depicted in Table 2.

• Situation α: w(a) < w(b) < w(c). We have VDef_{P_2^w}(a, b) = 1 − (w(b) − w(a)) and VDef_{P_2^w}(b, c) = 1 − (w(c) − w(b)). Moreover, we have neither a Def^(≽_w) b nor b Def^(≽_w) c, since b ≻_w a and c ≻_w b.

    Bool:            a    b    c    (no defeat)
    VDef_{P_2^w}:    a --[1 − (w(b) − w(a))]--> b --[1 − (w(c) − w(b))]--> c


Table 2. The Yes/No are the answers to the question "Does a defend c?". The admissible extensions are also given, where P(A) denotes all non-empty subsets of A.

| Sit. | Condition on w       | a defends c w.r.t. ≽_w | a defends c w.r.t. P_2^w | Acc_adm(Def^(≽_w)) | Acc_adm(0, Def^(P_2^w), VDef_{P_2^w}) | Acc_adm(;, VDef_{P_2^w}) |
|------|----------------------|------------------------|--------------------------|---------------------|----------------------------------------|---------------------------|
| α1   | w(a) ≪ w(b) < w(c)   | Yes                    | No                       | P({a, b, c})        | {a}                                    | {a}                       |
| α2   | w(a) < w(b) ≪ w(c)   | Yes                    | Yes                      | P({a, b, c})        | P({a, c})                              | P({a, c})                 |
| β    | {w(c), w(a)} < w(b)  | No                     | No                       | P({a, b})           | {a}                                    | {a}                       |
| γ    | w(b) < {w(c), w(a)}  | Yes                    | Yes                      | P({a, c})           | P({a, c})                              | P({a, c})                 |
| δ    | w(c) < w(b) < w(a)   | Yes                    | Yes                      | P({a, c})           | P({a, c})                              | P({a, c})                 |

Fig. 6. Attacks among arguments.

We distinguish between two sub-situations:
– Situation α1: w(a) ≪ w(b) < w(c) (i.e. w(b) − w(a) > w(c) − w(b)). Then we have VDef_{P_2^w}(a, b) = 1 + w(a) − w(b) < VDef_{P_2^w}(b, c) = 1 + w(b) − w(c). We also have neither b Def^(≽_w) c nor a Def^(≽_w) b. Hence the Boolean and the valued cases yield different conclusions: a defends c in the Boolean case but not in the valued case.
– Situation α2: w(a) < w(b) ≪ w(c) (i.e. w(b) − w(a) < w(c) − w(b)). Hence VDef_{P_2^w}(a, b) = 1 + w(a) − w(b) > VDef_{P_2^w}(b, c) = 1 + w(b) − w(c). We also have neither b Def^(≽_w) c nor a Def^(≽_w) b. Hence a defends c both in the Boolean and the valued cases.

• Situation β: {w(c), w(a)} < w(b). We have VDef_{P_2^w}(a, b) < 1 and VDef_{P_2^w}(b, c) = 1. We also have b Def^(≽_w) c but not(a Def^(≽_w) b).

    Bool:            a    b ----------------> c
    VDef_{P_2^w}:    a --[1 − (w(b) − w(a))]--> b --[1]--> c

The defense of a fails both in the Boolean and the valued cases.

• Situation γ: w(b) < {w(c), w(a)}. We have VDef_{P_2^w}(a, b) = 1 and VDef_{P_2^w}(b, c) = 1 − (w(c) − w(b)) < 1. We also have not(b Def^(≽_w) c) but a Def^(≽_w) b.

    Bool:            a ----------------> b    c
    VDef_{P_2^w}:    a --[1]--> b --[1 − (w(c) − w(b))]--> c

Hence a defeats b with full strength, and the defeat of b on c is weaker; a defends c both in the Boolean and the valued cases.

• Situation δ: w(c) < w(b) < w(a). Hence VDef_{P_2^w}(a, b) = 1 and VDef_{P_2^w}(b, c) = 1. We also have b Def^(≽_w) c and a Def^(≽_w) b.

    Bool:            a ----------------> b ----------------> c
    VDef_{P_2^w}:    a --[1]--> b --[1]--> c

Indeed a defends c both in the Boolean and the valued cases.

Situation α in Table 1 (resp. δ in Table 2) is decomposed into two situations α1 and α2 in Table 2 (resp. δ1 and δ2 in Table 1). Having these correspondences in mind, we obtain different results in Tables 1 and 2. Situation δ actually follows from UPD. Regarding α1 and α2, consider the following example.

Example 12. In medicine, there is an important question about whether human genes should be patented. Consider the following three arguments (attacks are given in Fig. 6):

• c: Prices of breast cancer analyses are high because of the companies' patents on human genes. So human genes should not be patented.

• b: Almost all patients receive insurance coverage, often without co-payments. Hence the cost for most patients is quite reasonable.

• a: Breast cancer analysis should be free of charge for patients.

Argument c is the preferred one as it relates to the affordability of medical care. Argument a is the least preferred one, as one cannot require that medical care be completely free of charge. Hence

  w(c) > w(b) > w(a).


Argument b is put forward by private biotechnology companies. The valuation of b (compared to that of a and c) depends in particular on the trust the audience has in biotechnology companies. In situations α1 and α2, b is weaker than c, and a is weaker than b. Hence the defeats of b on c and of a on b are weakened. In situation α1, w(a) ≪ w(b) < w(c) means that a, which is supposed to defend c, is much weaker than b and c. In Example 12, it means that the audience rates argument a much lower than b and c. The attack of c by b is present (even though it is a little weakened by the fact that c is slightly preferred to b). As the defense provided by a is too weak (due to the low valuation of a), it is thus reasonable that the defense of c by a fails in this case. In situation α2, the condition w(a) < w(b) ≪ w(c) means that the weight of a is not too far from that of b, compared to c. Returning to Example 12, this means that the audience finds both arguments a and b much less preferred than c, so that it makes sense that argument c is acceptable. One may then admit that a is sufficiently strong to defend c against b. Hence the results of Table 2 are natural.

There is only one situation in which the defense is not the same between the Boolean and the valued cases. Yet, in terms of acceptability, there are many more situations in which we do not find the same extensions (see Table 1). As attacks always hold between a and b, and between b and c, with various strengths, we eliminate the extensions where both a and b are present.
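The rows of Table 2 can be replayed in the same way (again an illustrative sketch with arbitrary weights satisfying each condition; the defense test is the same as before):

```python
def vdef_p2w(wa, wb):
    # VDef_{P_2^w}(a, b) = min(1 + w(a) - w(b), 1).
    return min(1.0 + wa - wb, 1.0)

def defends(wa, wb, wc):
    # a defends c against b when its defeat on b is at least as strong.
    return vdef_p2w(wa, wb) >= vdef_p2w(wb, wc)

# Arbitrary weights (w(a), w(b), w(c)) instantiating each row of Table 2.
situations = {
    "alpha1": (0.1, 0.6, 0.8),   # w(a) << w(b) < w(c)
    "alpha2": (0.4, 0.5, 0.9),   # w(a) < w(b) << w(c)
    "beta":   (0.2, 0.8, 0.4),   # {w(c), w(a)} < w(b)
    "gamma":  (0.8, 0.2, 0.6),   # w(b) < {w(c), w(a)}
    "delta":  (0.9, 0.5, 0.2),   # w(c) < w(b) < w(a)
}

answers = {name: defends(*w) for name, w in situations.items()}
# Only alpha1 differs from the Boolean column of Table 2.
```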

Lemma 10. For VDef_{P_2^w}, let A ∈ Acc_adm(0, Def^(P_2^w), VDef_{P_2^w}) \ Acc_adm(;, VDef_{P_2^w}). Then there exist a, b ∈ A such that a ; b, not(b ; a), w(a) = 0 and w(b) = 1.

The situations in which the two definitions provide different extensions thus arise only marginally. This shows that changing the definition of conflict-freeness with VDef_{P_2^w} does not basically refine the extensions.

Proof. For such an A, there exist a, b ∈ A such that a ; b, not(a Def^(P_2^w) b) and not(b Def^(P_2^w) a). If not(b ; a), this implies that 1 + w(a) − w(b) = 0, and thus w(a) = 0 and w(b) = 1. Now, if b ; a, then we have both 1 + w(a) − w(b) = 0 and 1 + w(b) − w(a) = 0, which is impossible. □

Regarding the extensions, notice that in Table 2, Acc_adm(0, Def^(P_2^w), VDef_{P_2^w}) and Acc_adm(;, VDef_{P_2^w}) are always the same. Hence the refinement of conflict-freeness does not make a difference in our framework.

6.3. Link with residual implications

This section is motivated by the remark that the expression of VDef_{P_2^w} uses a well-known residual implication. We consider more general classes of residual implications. The main interest of this section is twofold. First, it presents an interesting link between valued defeat relations and the concept of residual implication, which is well known in fuzzy set theory. More importantly, this section provides a class of valued defeat functions that generalize VDef_{P_2^w}. We believe this is important for practitioners.

Given a t-norm T,³ the residual implication I_T of T is given by [19]

  I_T(x, y) = sup{u ∈ [0, 1] : T(u, x) ≤ y}.

When a ; b, we can write VDef_{P_2^w}(a, b) = I(w(b), w(a)), where I(x, y) = min(1 − x + y, 1). Function I is the residual implication corresponding to the Łukasiewicz t-norm [19].⁴ VDef_{P_2^w} is thus the inverse of a residual implication. One can obtain more general valued defeat relations by considering the residual implications derived from the Łukasiewicz t-norm: I_φ(x, y) := φ⁻¹(min(1 − φ(x) + φ(y), 1)), where φ: [0, 1] → [0, 1] is strictly increasing. To this end, let us first recall the characterization of the Łukasiewicz implication (up to an increasing bijection).

³ A t-norm is a function T: [0, 1] × [0, 1] → [0, 1] which satisfies:
  (neutral element) T(1, x) = x for all x ∈ [0, 1],
  (commutativity) T(x, y) = T(y, x) for all x, y ∈ [0, 1],
  (monotonicity) T(x, y) ≤ T(u, v) for all 0 ≤ x ≤ u ≤ 1 and 0 ≤ y ≤ v ≤ 1,
  (associativity) T(x, T(y, z)) = T(T(x, y), z) for all x, y, z ∈ [0, 1].
⁴ The Łukasiewicz t-norm is defined by T_L(x, y) = max(x + y − 1, 0).
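These definitions can be sketched as follows (illustrative helper names of our own; the residuation supremum is approximated on a finite grid, while i_lk is the closed form appearing in VDef_{P_2^w}):

```python
def t_lukasiewicz(x, y):
    # Lukasiewicz t-norm T_L(x, y) = max(x + y - 1, 0) (footnote 4).
    return max(x + y - 1.0, 0.0)

def i_t(t_norm, x, y, steps=10000):
    # Residual implication I_T(x, y) = sup{u in [0,1] : T(u, x) <= y},
    # approximated by searching a grid on [0, 1].
    return max(u / steps for u in range(steps + 1)
               if t_norm(u / steps, x) <= y)

def i_lk(x, y):
    # Closed form of the residual implication of T_L.
    return min(1.0 - x + y, 1.0)

def i_phi(x, y, phi, phi_inv):
    # Implication conjugate to the Lukasiewicz one via a bijection phi.
    return phi_inv(min(1.0 - phi(x) + phi(y), 1.0))
```

For instance, i_t(t_lukasiewicz, 0.8, 0.5) agrees (up to the grid resolution) with i_lk(0.8, 0.5) = 0.7, and i_phi with the identity bijection reduces to i_lk.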


Proposition 5. (See [5, Theorem 2.4.20 on page 66].) For a function I: [0, 1]² → [0, 1] the following are equivalent:

1. I is continuous and satisfies both the exchange principle

  I(x, I(y, z)) = I(y, I(x, z))

and the ordering property

  I(x, y) = 1  ⟺  x ≤ y,   x, y ∈ [0, 1].

2. I is conjugate with the Łukasiewicz implication I_LK, i.e., there exists a unique increasing bijection φ: [0, 1] → [0, 1] such that I has the form

  I(x, y) = φ⁻¹(min(1 − φ(x) + φ(y), 1)),   x, y ∈ [0, 1].
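Both properties of item 1 can be checked numerically on the conjugate form of item 2. The sketch below is an illustration using the arbitrary increasing bijection φ(t) = t²; the helper name i_conj is ours.

```python
import math

def i_conj(x, y, phi=lambda t: t ** 2, phi_inv=math.sqrt):
    # Implication conjugate with the Lukasiewicz one (item 2 of
    # Proposition 5), with the increasing bijection phi(t) = t^2.
    return phi_inv(min(1.0 - phi(x) + phi(y), 1.0))

grid = [i / 10 for i in range(11)]
for x in grid:
    for y in grid:
        # Ordering property: I(x, y) = 1 iff x <= y.
        assert (abs(i_conj(x, y) - 1.0) < 1e-12) == (x <= y)
        for z in grid:
            # Exchange principle: I(x, I(y, z)) = I(y, I(x, z)).
            lhs = i_conj(x, i_conj(y, z))
            rhs = i_conj(y, i_conj(x, z))
            assert abs(lhs - rhs) < 1e-9
```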

Originally, with more axioms, this proposition was presented in [33]; some axioms were later removed in [19]. Put I(x, y) = 1 − p(x, y), and let us see how the exchange principle reads for p. We have I(x, I(y, z)) = I(y, I(x, z)) iff

  p(x, 1 − p(y, z)) = p(y, 1 − p(x, z)).   (29)

The number p(t, s) can be interpreted as the difference of attractiveness between t and s. Hence the previous relation is somehow related to the quaternary relation ≿*: (x, y) ≿* (z, t) means that the intensity of preference of x over y is larger than the intensity of preference of z over t. The interest of ≿* is that it is used in measurement theory to construct interval scales [26]. It will allow us to give a more interpretable formula than (29). We note that 1 is the neutral element of an implication:

  I(1, x) = x   for all x ∈ [0, 1].

Hence p(1, x) = 1 − x, and (29) is equivalent to

  p(1 − p(1, x), 1 − p(y, z)) = p(1 − p(1, y), 1 − p(x, z))   for all x, y, z ∈ [0, 1].   (30)

This expression is more symmetric than (29). We note that 1 − p(s, t) is the degree to which s is not strictly preferred to t. When s ≥ t, it is equal to the degree to which t is close to s. If 1 ≥ x and y ≥ z, then p(1 − p(1, x), 1 − p(y, z)) is the degree to which x is closer to 1 than z is to y, and can be interpreted as the intensity of the quaternary preference (y, z) ≿* (1, x). Hence (30) can be seen as a quantitative version of the following equivalence:

  (y, z) ≿* (1, x)  ⟺  (x, z) ≿* (1, y),

which is an exchange property in measurement theory [26]. Proposition 6. Under UPD and (29), there exists ϕ [0, 1] → [0, 1] is strictly increasing such that







p (x, y ) = 1 − ϕ −1 min 1 − ϕ (x) + ϕ ( y ), 1 . Proof. Let us define I (x, y ) = 1 − p (x, y ). We now derive properties on I from that given previously on p. First of all, p is continuous and so is I . Moreover, from Proposition 4, p (x, y ) = 0 whenever x  y, and p (x, y ) > 0 whenever x > y (by (9)). Thus I (x, y ) = 1 iff x  y, and I also satisfies the ordering property. Finally, (29) is equivalent to the exchange principle. Hence the result follows from Propositions 5. 2 Proposition 6 proposes a class of valued defeat functions parametrized by (strictly increasing) function ϕ . Function VDef P w is recovered when ϕ is the identity function. Various ways of gathering the weights on the arguments can be 2 obtained by using different functions ϕ . Many types of functions can be used: convex vs. concave functions, symmetric vs. asymmetric functions, functions with an inflection point. In the domain of decision under uncertainty, a similar function is used to distort probabilities, in models such as rank-dependent expected utility [35,36]. Many forms on function ϕ have been defined in decision under uncertainty and their impact of the decision rules has also been analyzed [35,36,15]. 7. Related work Our valued preference-based argumentation framework can be compared to existing works on the basis of two main grounds: the output in terms of extensions and “conceptual definition”.


Fig. 7. Attack relation ;.

7.1. Output-based comparison

To the best of our knowledge, the only notable work which can be compared to our approach is [4]. In the latter, the authors define a preference relation over 2^A given a preference relation ≽ over A. More precisely, they define two preference relations ⊒1 and ⊒2 such that, ∀A, B ⊆ A, A ⊒1 B if and only if ∀a ∈ A, ∃b ∈ B such that a ≽ b, and A ⊒2 B if and only if ∀b ∈ B, ∃a ∈ A such that a ≽ b. Then the output of a preference-based argumentation framework should return the best extensions w.r.t. ⊒1 (resp. ⊒2). The difference between this approach and ours is made explicitly apparent in the next example.

Example 13. Let ⟨A, ;, ≽⟩ be a preference-based argumentation framework where A = {a1, a2, a3, a4, a5, a6},

  a2 ; a1,   a4 ; a3,   a6 ; a5,   a1 ; a6,   a3 ; a2,   a5 ; a4

(see Fig. 7), with a2 ≻ a5, a4 ≻ a1 and a6 ≻ a3. The preference relation is useless for Def^(≽) (as ; and ≻ are never true for the same pairs of arguments), so that Def^(≽) is equal to ;. There are two stable extensions, A = {a1, a3, a5} and B = {a2, a4, a6}, with both the standard PAF and our approach, while the approach of [4] rules out A since B ⊒1 A (resp. B ⊒2 A).

However, the difference between the two approaches illustrated by the previous example is debatable. The selection of both extensions A and B is coherent with our basic idea. More precisely, neither A nor B is stronger than the other in terms of defense, so both are kept. Following [4], only B is kept because it is preferred to A. However, both extensions satisfy the basic requirements of Dung's framework, namely defense and conflict-freeness. Intuitively, it is not compulsory at all to exclude A just because it is less preferred than B. It is better to provide the user with both A and B, together with the additional information that B is preferred to A; it is then up to her/him to choose the preferred extension or not. This is more suitable when preferences are used, since in preference representation the aim is not to focus on the preferred outcomes only, but to compute a preference relation over the set of outcomes. We refer the reader to [21], where different preference relations are given to compare extensions on the basis of the preference relation over the set of arguments. Example 10 illustrates a case where an extension is removed in our approach because it does not provide a strong defense, while it is kept following [4] just because it is not less preferred w.r.t. ⊒1 (resp. ⊒2). In our opinion, strength of defense should take precedence over the preference relation.

7.2. Comparison with existing argumentation frameworks with varied-strength defeats

The extension of Dung's argumentation framework with varied-strength defeat relations requires defining the basic notions, namely admissibility (thus defense) and conflict-freeness, in the extended framework. We distinguish between two main approaches to considering varied strengths of defeat relations.
The authors of [18] model the strength by a numerical function where each defeat relation is associated with a non-zero positive real number representing its strength. The idea is to use the strength of defeat relations in order to define an inconsistency tolerance degree of a set of arguments. More precisely, a set of arguments is α-conflict-free if the strengths of the defeat relations between arguments of the set sum up to no more than α. The authors then focus on the complexity of computing the grounded extension in the extended framework. They do not explicitly study the incorporation of strengths in admissible extensions. Our framework resembles that framework in the way strengths are modeled and in the way the conflict-freeness notion is extended. Moreover, we give a way to derive the strength of defeat relations from a valued preference relation over the set of arguments.

The second approach to defeat relations with varied strengths has been proposed in [27,28]. Regarding the extension of the defense, our approach is conceptually similar to the one proposed in [27] but technically different, leading to a different definition of admissibility. Our framework differs on the following four points:

1. in [27] the strength of defeat relations is modeled in a relative way by means of a partial/complete preorder on defeat relations. Our framework is more informative since strengths are modeled by a numerical function representing how strong a defeat relation is. The relative order between defeat relations is straightforwardly derived from the function. Note that incomparability between defeat relations can be modeled by a preorder but not by a numerical function; however, this has no incidence on the framework, as incomparability does not play any role in the definition of defense and admissibility,


2. in [27] four types of defenders have been defined. An argument a is a strong (resp. weak, normal, unqualified) defender of c against b if the defeat of a on b is stronger than (resp. weaker than, equal to, incomparable to) the defeat of b on c. Then a set of arguments A defends c if each defender of c in A falls in one of the above types. Lastly, a set A is admissible if it is conflict-free and defends all its elements. Admissible extensions are then compared w.r.t. the common arguments they contain. In our framework, the defense is more restrictive, as we require each defender to be stronger than or equal to the defeater,

3. in [28], the strength of defeat relations is derived from a Boolean preference relation ≽ over arguments. An argument a is called a proper defeater of b if a ≻ b; otherwise it is called a blocking defeater of b. The concept of defense in this framework coincides with our definition of defense when VDef is Boolean. In order to compute the strength of a defense, the authors of [28] compute a 3-valued preference relation over arguments, denoted pref, such that pref(a, b) = 2 if a ≻ b, pref(a, b) = 1 if a and b have equal preference, and pref(a, b) = 0 if a and b are incomparable. This function is then used to compare defenders. Let a be defeated by b, which is in turn defeated by c and d. Then c and d are defenders of a of equal force if pref(c, b) = pref(d, b), and c is a stronger defender than d if pref(c, b) > pref(d, b). Here again, sets of conflict-free arguments (à la Dung) are compared w.r.t. the strength of defense they provide to common arguments. In contrast, our framework is more general, since we allow for different levels of strict preference between arguments. Moreover, the strength of α-conflict-free sets of arguments is evaluated independently of other sets of arguments. Consequently, the two frameworks do not lead to the same results.
In Example 6, while the above framework returns both A and B as stable extensions, our framework returns B only,

4. lastly, in contrast to [27,28], we use the α-conflict-free notion instead of the classical conflict-freeness notion in the sense of Dung.

In [20], probabilities are assigned to the models for the support of each of the arguments. Assuming independence of the models, one can define a probability distribution over subgraphs of the initial probabilistic argumentation graph, where standard extensions are computed over each subgraph. Hence a probability distribution over extensions is obtained at the end. This approach cannot be applied to ours, as our valuations on the arguments are interpreted as preferences and not probabilities.

8. Conclusion

Dung's argumentation framework has been extended to incorporate the strength of defeat relations [27,28,18]. While the notion of strength remains abstract in these works, i.e. its origin is not known, we can imagine different ways to derive it. On the other hand, Dung's argumentation framework has been instantiated with preferences such that the defeat relation is computed from an attack relation and a Boolean preference relation over the set of arguments. Valued preference relations are a generalization of Boolean preference relations: they quantify the preference between pairs of objects. Therefore a natural instantiation of an argumentation framework with varied-strength defeat relations is to compute the latter from an attack relation and a valued preference relation: the larger the preference between two arguments, the larger the defeat. Extensions of a VPAF ⟨A, ;, P⟩ are based on conflict-freeness (either α-conflict-freeness, or conflict-freeness w.r.t. ;) and on a defense where the intensity of the defeat for the defender must be larger than that for the original attacker.

First of all, based on some examples, we have proposed some properties that argumentation frameworks with a valued preference relation shall satisfy.
In particular, we provide an interesting particular case of valued preference relation derived from a Boolean one. We have shown that preference-based argumentation frameworks need to be refined. In particular, it is important in the construction of the defeat relation to distinguish between preference, inverse preference and indifference/incomparability. This yields a ternary preference relation. It already provides significant improvements over computing extensions directly with a PAF. Then we show that the use of a valued preference relation reduces the number of situations where the defense is valid, compared to Boolean preferences. The admissible extensions of ⟨A, ;, P⟩ are also admissible extensions of ⟨A, ;, ≽⟩. Moreover, when the preference is derived from a valuation on the arguments, the situations of defense are exactly the same in both the Boolean and valued cases if and only if the valued preference relation is Boolean. The same holds for admissible extensions.

We consider another instantiation of VPAF where the valued preference relation is constructed from a valuation w on the arguments. The different possible constructions of P from w are obtained from a scalar preference function p. From the analysis of an example, we have derived a condition on p which implies that the degree of preference P(a, b) of a over b is zero when w(a) ≤ w(b). The simplest expression of P satisfying this condition is based on the Łukasiewicz implication. A more general class is the set of preferences P that correspond to the Łukasiewicz implication up to an increasing bijection. We provide a characterization of this class of valued preference relations.

References

[1] L. Amgoud, C. Cayrol, Inferring from inconsistency in preference-based argumentation frameworks, Int. J. Approx. Reason. 29 (2) (2002) 125–169.
[2] L. Amgoud, C. Cayrol, D. Le Berre, Comparing arguments using preference orderings for argument-based reasoning, in: 8th International Conference on Tools with Artificial Intelligence, ICTAI'96, Toulouse, 1996, pp. 400–403.
[3] L. Amgoud, S. Vesic, Repairing preference-based argumentation frameworks, in: 21st International Joint Conference on Artificial Intelligence, IJCAI'09, Pasadena, 2009, pp. 665–670.


[4] L. Amgoud, S. Vesic, Two roles of preferences in argumentation frameworks, in: 11th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, ECSQARU'11, Belfast, 2011, pp. 86–97.
[5] M. Baczyński, B. Jayaram, Fuzzy Implications, Studies in Fuzziness and Soft Computing, vol. 231, Springer, Berlin, 2008.
[6] C.A. Bana e Costa, J.M. De Corte, J.C. Vansnick, MACBETH, Int. J. Inf. Technol. Decis. Mak. 11 (2) (2012) 359–387.
[7] H. Barringer, D.M. Gabbay, J. Woods, Temporal dynamics of support and attack networks: From argumentation to zoology, in: Mechanizing Mathematical Reasoning, in: LNCS, vol. 2605, Springer, 2005, pp. 59–98.
[8] T.J.M. Bench-Capon, Persuasion in practical argument using value-based argumentation frameworks, J. Log. Comput. 13 (3) (2003) 429–448.
[9] P. Besnard, A. Hunter, Elements of Argumentation, The MIT Press, 2008.
[10] K.R. Canini, B. Suh, P.L. Pirolli, Finding relevant sources in Twitter based on content and social structure, in: Workshop on Machine Learning for Social Computing, Neural Information Processing Systems (NIPS) Conference, 2010.
[11] C. Castillo, M. Mendoza, B. Poblete, Information credibility on Twitter, in: Proceedings of the 20th International Conference on World Wide Web, WWW'11, ACM, New York, NY, USA, 2011, pp. 675–684.
[12] C. Cayrol, C. Devred, M.C. Lagasquie-Schiex, Acceptability semantics accounting for strength of attacks in argumentation, in: 19th European Conference on Artificial Intelligence, ECAI'10, Lisbon, 2010, pp. 995–996.
[13] S. Coste-Marquis, S. Konieczny, P. Marquis, M.A. Ouali, Weighted attacks in argumentation frameworks, in: 13th International Conference on Principles of Knowledge Representation and Reasoning, KR'12, Rome, 2012.
[14] N. Diakopoulos, M. de Choudhury, M. Naaman, Finding and assessing social media information sources in the context of journalism, in: Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems, CHI'12, ACM, New York, NY, USA, 2012, pp. 2451–2460.
[15] E. Diecidue, P. Wakker, On the intuition of rank-dependent utility, J. Risk Uncertain. 23 (2001) 281–298.
[16] Y. Dimopoulos, P. Moraitis, L. Amgoud, Extending argumentation to make good decisions, in: First International Conference on Algorithmic Decision Theory, ADT'09, Venice, 2009, pp. 225–236.
[17] P.M. Dung, On the acceptability of arguments and its fundamental role in non-monotonic reasoning, logic programming and n-person games, Artif. Intell. 77 (1995) 321–357.
[18] P.E. Dunne, A. Hunter, P. McBurney, S. Parsons, M. Wooldridge, Weighted argument systems: Basic definitions, algorithms, and complexity results, Artif. Intell. 175 (2) (2011) 457–486.
[19] J. Fodor, M. Roubens, Fuzzy Preference Modelling and Multi-Criteria Decision Aid, Kluwer Academic Publishers, 1994.
[20] A. Hunter, A probabilistic approach to modelling uncertain logical arguments, Int. J. Approx. Reason. 54 (1) (2013) 47–81.
[21] S. Kaci, Refined preference-based argumentation frameworks, in: 3rd International Conference on Computational Models of Argument, COMMA'10, Desenzano del Garda, 2010, pp. 299–310.
[22] S. Kaci, C. Labreuche, Argumentation framework with fuzzy preference relations, in: 13th International Conference on Information Processing and Management of Uncertainty, IPMU'10, Dortmund, 2010, pp. 554–563.
[23] S. Kaci, C. Labreuche, Preference-based argumentation framework with varied-preference intensity, in: 19th European Conference on Artificial Intelligence, ECAI'10, Lisbon, 2010, pp. 1003–1004.
[24] S. Kaci, C. Labreuche, Arguing with valued preference relations, in: 11th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, ECSQARU'11, Belfast, 2011, pp. 62–73.
[25] S. Kaci, L. van der Torre, Preference-based argumentation: Arguments supporting multiple values, Int. J. Approx. Reason. 48 (2008) 730–751.
[26] D.H. Krantz, R.D. Luce, P. Suppes, A. Tversky, Foundations of Measurement, vol. 1: Additive and Polynomial Representations, Academic Press, 1971.
[27] D.C. Martínez, A.J. García, G.R. Simari, An abstract argumentation framework with varied-strength attacks, in: 11th International Conference on Principles of Knowledge Representation and Reasoning, KR'08, Sydney, 2008, pp. 135–144.
[28] D.C. Martínez, A.J. García, G.R. Simari, Strong and weak forms of abstract argument defense, in: 2nd International Conference on Computational Models of Argument, COMMA'08, Toulouse, 2008, pp. 216–227.
[29] M.R. Morris, S. Counts, A. Roseway, A. Hoff, J. Schwarz, Tweeting is believing? Understanding microblog credibility perceptions, in: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, CSCW'12, ACM, New York, NY, USA, 2012, pp. 441–450.
[30] F. Pichon, Ch. Labreuche, B. Duqueroie, Th. Delavallade, Approche multidimensionnelle pour l'évaluation de la fiabilité des sources d'information, in: Ph. Capet, Th. Delavallade (Eds.), L'évaluation de l'information: confiance et défiance, Lavoisier, Paris, 2013, pp. 147–172.
[31] J.L. Pollock, How to reason defeasibly, Artif. Intell. 57 (1) (1992) 1–42.
[32] G.R. Simari, R.P. Loui, A mathematical treatment of defeasible reasoning and its implementation, Artif. Intell. 53 (1992) 125–157.
[33] P. Smets, P. Magrez, Implication in fuzzy logic, Int. J. Approx. Reason. 1 (1987) 327–347.
[34] G. Vreeswijk, Defeasible dialectics: a controversy-oriented approach towards defeasible argumentation, J. Log. Comput. 3 (3) (1993) 317–334.
[35] P. Wakker, Additive Representations of Preferences, Kluwer Academic Publishers, 1989.
[36] P. Wakker, A behavioral foundation for fuzzy measures, Fuzzy Sets Syst. 37 (1990) 327–350.