Argument-based learning communities

Knowledge-Based Systems 22 (2009) 316–323
Argument-based learning communities G.S. Mahalakshmi *, T.V. Geetha Department of Computer Science and Engineering, College of Engg., Anna University, Guindy, Chennai – 600 025, Tamilnadu, India

Article history: Available online 5 April 2009.
Keywords: Knowledge-sharing; Ontology; Learning; Argumentation; Fallacy; Logic

Abstract

This paper presents the idea of automated learning in virtual communities through 'tarka'-based argumentative discussions. Prior to an argumentative discussion, both discussing agents hold various degrees of belief about the subject of discussion in their respective knowledge bases. To attain valid knowledge about a particular subject, both participants voluntarily undergo a tarka process in which arguments and counter-arguments are interchanged, pertaining to several knowledge-sharing criteria. At the end of the discussion, the gathered valid beliefs about the subject of discussion are generated as definitions from the knowledge base, to demonstrate the value of knowledge shared (and learned) via argumentation. © 2009 Elsevier B.V. All rights reserved.

1. Introduction

Artificial society is an agent-based computational model designed for a specific purpose. The actions performed by the agents are controlled or determined by the purpose for which they were created. Intelligent agents can be autonomous and can perform tasks with built-in learning and reasoning capabilities; they communicate with one another to exchange information. These agents (or artificial life entities) are often used to simulate the real-time social behavior of humans and are said to possess intelligence that emerges from their social behavior. In this paper, we focus on learning in artificial knowledge communities through systematic reasoning performed over arguments exchanged across the community. The argumentation phenomenon is a kind of persuasion [32] with the objective of knowledge-sharing. Persuasion aims at convincing others of one's own belief about the subject of discussion; in other words, it can be referred to as 'deriving inferences for the sake of convincing others'. This form of inferencing follows the five-membered syllogism of Tarka sastra [30]. In this paper, we attempt to simulate the social behavior of learning across a group of knowledge-sharing agents who are capable of argumentative discussions. During argumentation, valid knowledge is not readily available as an argument or counter-argument; rather, the valid beliefs about the subject of discussion evolve as definitions throughout the argumentation process. For this to happen, the knowledge-sharing volunteers taking part in argumentative discussion should

[* Corresponding author. Tel.: +91 44 22203340. E-mail addresses: [email protected], [email protected] (G.S. Mahalakshmi), [email protected] (T.V. Geetha). doi:10.1016/j.knosys.2008.10.013]

adhere to certain rules for knowledge-sharing [28]. These policies mainly regulate the argumentation community towards obtaining a definite conclusion. The community is governed and controlled by the defined interaction protocol whenever the autonomous systems initiate or respond to discussions. The novelty of the approach is that the agents within the system can learn and adapt to domain knowledge to evolve new information over time. The systems reason, derive inferences from the discussion, and update them to their respective knowledge bases. For agents to share knowledge, they need to communicate using a common language, which embeds syntax, semantics and pragmatics. To enable all this, a systematic structure of discussion elements and an enhanced knowledge representation system is required to interact with the dynamically categorized knowledge base [4]. In this paper, we adopt the ontological categorization of world knowledge as per Nyaya sastra, the famous Indian philosophy [30]. Besides classification and categorization, to suit effective knowledge acquisition from the arguments exchanged, rational arguments have to be interpreted in a systematic fashion, which gives rise to Indian logic-based argument representations called Indianised Logics [12], re-christened in this paper as Nyaya Logics. Nyaya Logics [12] is a new, flexible argument representation formalism which embeds world knowledge much like (but, we argue, more efficiently than) description logics [1,2]. The descriptive and inference-specific representation of arguments and counter-arguments in Nyaya Logics [12] captures the concept and relation elements of the arguments and constructs the component knowledge base.

At the start of an argumentative discussion, agents are grouped according to their domain interest [18]. The agents update the knowledge base with the obtained information only if they trust the opponent from which the information was obtained. Every entity of the artificial knowledge society combines various reasoning capabilities to take part in discussion and updates its internal knowledge base after every argument exchange. As these systems work concurrently, cooperation between them is essential to the effectiveness of each node and of the system as a whole. The main contributions of this work are (1) the NORM model for ontology representation, (2) formal definitions of concept and relation in NORM, and (3) a protocol for Indian logic-based knowledge-sharing by argumentative reasoning.

2. Related work

Argumentation has had a great impact on the field of artificial intelligence. Argumentation schemes represent patterns of deductive and inductive reasoning in some instances, but sometimes capture stereotypical patterns of human reasoning, especially defeasible ones under conditions of uncertainty and lack of knowledge [33]. This influence not only models rational behavior between human agents but also concentrates on applying it to the virtual context, i.e. artificial agents governed by rules regulating the procedure of argumentation [34]. Virtual communities can use argumentation techniques for distributed knowledge-sharing. Research in virtual communities shows an increasing need to share knowledge across distributed environments in the present web-based lifestyle. A survey of the virtual-community literature suggests that most work to date has been inspired by Lave and Wenger's [10] theory of Communities of Practice. Within a virtual community, the shared knowledge can be either 'hard' knowledge or 'soft' knowledge [9]. This view has led to attempts to extract knowledge from one group of 'experts' so that it can be used by another, less skilled group; here arises the need for knowledge-sharing. According to Walton, an argument necessarily involves a dialogue, because it requires two parties, proponent and respondent [35]. Therefore, artificial agents are widely used to simulate participation in virtual communities. The argument and its reply need to be evaluated as a pair of moves, on a balance of considerations in a dialogue setting that allows new evidence to come in at a later point [33]. The presence of fallacies [32] weakens an argument and inhibits inferences. To make an argument more logical or stronger, these fallacies should be avoided while constructing the argument. However, the validity of the knowledge shared is always in question.
Incentive-based and trust-based mechanisms exist for such scenarios, based on direct rewards, increased reputation, internal satisfaction or reciprocity [6,23,24]. Perceptual, deliberative, operational, and meta-cognitive means of attention management for virtual community members have also been attempted [27]. Interestingly, knowledge management in virtual communities through discourse analysis has been explored [5]. Generic frameworks for dialogue games [22], protocols, or dialogue systems for argumentation, persuasion or debate have been discussed extensively [7,11,25,26,32]. The components of dialogue games are a combination of moves and counter-moves; moves are classified as logical moves and dialogue moves, with a stress on levels of commitment and the strategy problem. A meaning representation language using an ontology with reified events and actions is probably the most innovative attempt to support the requirements of multi-modal cooperative dialogue. However, the introduction of ontologies to AI systems has a great impact on the performance and accuracy of most systems. The fundamental reason is the structure of the information deposited or embedded in the ontologies. As ontologies grow in size, their information content increases while performance decreases. Instead, there should be representative mechanisms for the construction, maintenance and use of ontologies that are close to the natural way humans store information; such a representation should promote cognition and aid better inference in artificially intelligent environments. In this paper, we propose the NORM model for ontology representation, which follows from Indian philosophy and is closer to the natural way of organizing world knowledge. With NORM, the methodology by which the items of the knowledge base are represented is convincing, so that there is no mismapping of world knowledge into the knowledge base. Quick as well as complete knowledge representation formalisms play a good role in finding the defects of arguments, which promotes better learning in knowledge communities. The following section describes the classification recommendations [1] of Nyaya sastra and interprets the elements of arguments into specially enhanced knowledge representation formalisms called Nyaya Logics.

3. NORM model for cognitive knowledge representation

According to NORM, a node in the ontology is composed of an enriched concept which is related implicitly to its member qualities and explicitly to other peer concepts by means of relations. A node of a Nyaya-based ontology has the structure shown in Fig. 1. Every concept of the world knowledge is thoroughly classified as per the NORM structure. The abstract and domain concepts form a strict classification hierarchy, and the traits of the top-level concepts are applicable down the hierarchy. An ontological classification OT is defined as the collection of all concepts under the given domain boundary:

OT = Σ_{i=1}^{n} c_i   (1)

An ontological commitment OD is defined as the collection of all constraints defined by operator set Op over concepts C of ontological classification OT under the given domain boundary:

OD = OD(Op, OT), a collection of constraints   (2)
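Eqs. (1) and (2) can be read as plain set algebra. The sketch below renders them as two small Python functions; the function names and the choice of encoding a constraint as an (operator, concept) pair are illustrative assumptions, not part of the paper.

```python
# Illustrative sketch of Eqs. (1)-(2): an ontological classification OT is the
# collection of all concepts within a domain boundary; an ontological
# commitment OD is the set of constraints produced by an operator set Op
# applied over OT. All names here are hypothetical.

def classification(concepts):
    """OT: the set of all concepts c_i under the domain boundary (Eq. 1)."""
    return set(concepts)

def commitment(operators, o_t):
    """OD(Op, OT): one constraint per (operator, concept) pair (Eq. 2)."""
    return {(op, c) for op in operators for c in o_t}

o_t = classification(["bird", "animal", "metal"])
o_d = commitment(["is-a", "part-of"], o_t)
print(len(o_t), len(o_d))  # 3 6
```

Duplicated concept names collapse into one, matching the set-union reading of Eq. (1).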

Definition 1 (Abstract concept). A concept is an abstract entity which embeds several qualifying attributes of its own. The attributes are bound to the concept by relation existence. An attribute is a sub-property of a concept/relation which is used to define the concept set. Attributes are optionally supported by values. An abstract concept c is a 7-tuple

c ≡ (O, Q, V, Re, Ri, Rt, Rg)   (3)

where

c = {c1, c2, ..., cn}   (4)

O is a set of objects of the concept,

O = {o1, o2, o3, ..., on}   (5)

Q is a non-empty set of attributes,

Q = {q1, q2, q3, ..., qm}   (6)

V is a set of values,

V = {v1, v2, v3, ..., vp}   (7)

Re ⊆ O × O   (8)

is a set of external relations, i.e. relations between a pair of concepts;

Ri ⊆ O × Q   (9)

is a set of internal relations, i.e. relations between a concept and its member attributes;

Rt ⊆ Q × Q   (10)

is a set of tangential relations, i.e. relations between the member attributes of a given concept; and

Rg ⊆ Q × V   (11)

is a set of grouping relations between a member attribute and the values owned by it.

Fig. 1. NORM model for cognitive knowledge representation: (a) ontology with concepts as nodes and external relations as edges; (b) a concept with qualities as nodes, internal relations as thin edges, tangential relations as dotted edges; (c) a quality with values as nodes, grouping relations as edges [21].

Definition 2 (Abstract quality). An abstract quality Q associated with an abstract concept C under an ontological classification OT is a 4-tuple:

Q ≡ (Qcon, V, Re, Ri)   (12)

where Qcon is a set of constraints,

Qcon ⊆ {Qm, Qo, Qe, Qx}   (13)

i.e. mandatory, optional, exceptional and exclusive respectively. Q is a non-empty set of attributes, Q = {q1, q2, q3, ..., qm}. V is a set of values, V = {v1, v2, v3, ..., vp}. Re ≡ Rt ⊆ Q × Q is a set of external relations, i.e. relations between qualities. Ri ≡ Rg ⊆ Q × V is a set of internal relations, i.e. relations between a quality and the values owned by it.

Definition 3 (Concept in argument). A concept in the argumentation framework is defined as a combination of an abstract concept with other categorical properties of concept existence in argument gaming:

CAG ≡ (c, Ccon, CCat, CCf)   (14)

where c is the abstract concept of Definition 1; Ccon is the constraint set under which concept C is said to exist,

Ccon ⊆ OD   (15)

CCat is the category of the concept in the procedural argumentation scenario, which can be of three types,

CCat ⊆ {CS, COI, CR}   (16)

and CCf is the confidence factor (a numeric value) associated with every abstract concept in the knowledge base.

Definition 4 (Relation in argument). A relation in the argumentation framework is defined as a combination of an abstract relation with other categorical properties of relation existence in argument gaming:

RAG ≡ (r, rq, rcon, rcat, rcf)   (17)

where r is the abstract relation [refer Definition 1],

r ⊆ {Re, Ri, Rt, Rg}   (18)

rq is the set of attributes of the abstract relation,

rq ⊆ {ICi, D, X, Xp}   (19)

ICi is the set of invariable concomitance relations, where

ICi ⊆ {symmetric, +IC, -IC, neutral}   (20)

D is the set of direct relations, where

D ⊆ {is-a, has-a, part-of}   (21)

[Note: for convenience, direct relations are denoted by r in the rest of this paper.] X is the set of exclusive relations, where

xi ⊆ X / (X ⊆ r) ∧ (X ≠ ∅)   (22)

Xp is the set of exceptional relations, where

xpi ⊆ Xp / (Xp ⊆ r) ∧ (Xp ≠ ∅), (xpi[V] = true)   (23)

for some element ck ⊆ c and false for other elements of c; rcon is the constraint set under which relation r is said to exist,

rcon ⊆ {reflexive, symmetric, anti-symmetric, asymmetric, transitive}   (24)

rcat is the category of the relation,

rcat ⊆ {RSOI, RSR, RROI}   (25)

and rcf is the confidence factor (a numeric value) associated with every abstract relation in the knowledge base.

Definition 5 (Argument A). An argument is a set of propositions related to each other in such a way that all but one of them (the premises) are claimed to provide support for the remaining one (the conclusion). An argument A over argumentation framework AF is defined as a tuple

A = ⟨Aid, f(c, r), Astate, Astatus, Astr⟩   (26)

where

f(c, r) = ccat × rcat   (27)

is a function of argument concepts and relations; Aid is the argument index; Astate is the state of the argument, Astate ⊆ {premise, inference, conclusion}; Astatus is the defeat status of the argument, Astatus ⊆ {defeated, undefeated, ambiguous, undetermined}; and Astr is the strength or conclusive force of the argument.

Thus, the four-dimensional concept-role-quality-value system provides a better classification of the input information and greatly aids inference. By utilizing the above philosophical recommendations, arguments and counter-arguments about the subject of discussion are exchanged between the knowledge-sharing entities. As the argumentation process continues, invalid beliefs about the subject of discussion are cleared through defect-finding mechanisms. At the end of the discussion, both arguers will have learnt a considerable amount of knowledge related to the subject of discussion. This is generated as definitions by the knowledge-sharing volunteers as they reach a definite conclusion.
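The tuples of Definitions 1, 3 and 5 can be sketched as Python dataclasses. The field names follow the paper's tuples; the concrete types and the sample values are assumptions for illustration only.

```python
from dataclasses import dataclass, field

# Hypothetical rendering of Definitions 1, 3 and 5. Only structure is shown;
# the paper does not prescribe an implementation.

@dataclass
class AbstractConcept:            # c = (O, Q, V, Re, Ri, Rt, Rg), Eq. (3)
    objects: set                  # O: objects of the concept
    attributes: set               # Q: qualifying attributes (non-empty)
    values: set                   # V: values supporting the attributes
    external: set = field(default_factory=set)    # Re: concept-concept
    internal: set = field(default_factory=set)    # Ri: concept-attribute
    tangential: set = field(default_factory=set)  # Rt: attribute-attribute
    grouping: set = field(default_factory=set)    # Rg: attribute-value

@dataclass
class ArgumentConcept:            # CAG = (c, Ccon, CCat, CCf), Eq. (14)
    concept: AbstractConcept
    constraints: set              # Ccon, a subset of OD
    category: str                 # one of CS, COI, CR
    confidence: float             # CCf

@dataclass
class Argument:                   # A = <Aid, f(c, r), Astate, Astatus, Astr>, Eq. (26)
    arg_id: int
    content: tuple                # f(c, r) = ccat x rcat, Eq. (27)
    state: str                    # premise | inference | conclusion
    status: str                   # defeated | undefeated | ambiguous | undetermined
    strength: float               # conclusive force Astr

snake = AbstractConcept(objects={"snake"}, attributes={"long", "glossy"},
                        values={"curling"})
a1 = Argument(1, (("snake", "CS"), ("invariable", "RSR")), "premise",
              "undetermined", 0.6)
print(a1.state)  # premise
```

The relation sets default to empty, matching the reading that a bare concept may exist before any relations are asserted for it.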

4. Argument games

4.1. 'Tarka' and argument gaming

Argument gaming [15] is the procedural and continuous exchange of non-deterministic arguments between two participating knowledge-sharing entities for spontaneous decision making in resolving knowledge inconsistencies. The entire argumentation scenario follows the five-membered inference pattern of Indian philosophy [28,30]. The five members of the inferential syllogism are: statement, reason, example, application and conclusion. The statement is an argument where the things to be proved are stated; the reason is the supporting evidence which strengthens the proof; the example is a similar case that occurred prior to the statement; the idea of the example can be derived and applied to the statement, which is concluded towards the end of the discussion.

4.1.1. Running example

Consider an argumentation scenario where two persons are arguing about the existence of a 'snake' in a dark room. The sample arguments exchanged are given below:

Arguer: I guess this would be a snake.
Counter-arguer: How? It appears to be a rope.
Arguer: It is long and is glossy.
Counter-arguer: Even a rope can be long and glossy.
Arguer: Snakes generally lie in a curling circular pattern.
Counter-arguer: Need not be! Somebody might have put a rope like that! (jumps over it) See, it is not moving...
Arguer: Yes, not moving... I have seen a snake in my childhood; snakes normally stay like this when they have eaten 'rats'. This also looks like that... Therefore this should be a snake.

In syllogistic form: 'This is a snake' (statement). 'Because it is long and glossy, and lies in a curling circular pattern' (reason). 'Like whatever is long, glossy and curly, i.e. the snake in my childhood' (example). 'This is also like that' (application). 'Therefore, this is a snake' (conclusion).

The idea of generating a suitable counter-argument relies on defect exploration [14] of the previous argument. An argument with defects is considered fallacious; fallacious arguments get weaker rewards when compared to valid conclusive arguments [33]. The argument and its reply have to be evaluated as a pair of moves, thereby allowing new evidence to come in at a later stage of the argument scenario. As the discussion continues, beliefs get revised or updated. Thus, the process of argumentation for knowledge-sharing [13,16,17] in artificial knowledge societies provides room for belief revision and updating as new evidence is brought into the discussion. These revised beliefs are later summarised as definitions towards the end of the discussion. For the above example, the arguer generates learned definitions about 'rope' and the counter-arguer generates learned definitions about 'snake', along with other prior beliefs. By following the above system of inference, argumentative reasoning can be seen as a methodology which introduces inferences through arguments for the sake of convincing the opponent. Arguments and counter-arguments, along with the intermediate inferences thus generated, are updated into the knowledge base.
Since this is a continuous process, some conclusion has to be attained in a finite number of steps. Therefore, we assume that if no counter-argument is generated in response, the rational discussion comes to a halt.

4.2. The process of argumentation

The objective of every KS agent is to maximize the expected sum of rewards [21] it receives, which is a direct measure of the quantity and nature of the holes or defects obtained in every argument exchange; through this, the knowledge-sharing system [16] is expected to evolve towards learning the right knowledge by discussion. Defect exploration consists of first analyzing a given input argument and then highlighting the argument's defects or holes in terms of the concept and relation elements of the argument [14]. These defects, otherwise called reason fallacies, are connected with inferential reasoning. A fallacy, when exposed, is a good reply to an opponent, whose argument is thus pointed out to be inefficient. Overcoming these fallacies or defects is called "removing the holes" from the submitted argument. An argument is futile when the reverse of what it seeks to prove is established for certain by another proof [8]. Refutation can be defined as pointing out the defects or fallacies in the statements of the opponent, which causes the defeat of the arguer [20]. The result of every refutation reflected in the argument status does not count much in the calculation of rewards, because the main objective is only to provide a forum for the exchange of opinions and the clarification of perspectives [3]. The reward analysis phase takes care not only of the immediate argument/counter-argument pair but also of the history of arguments exchanged in that discussion. The evaluation of actions [29] made by the gaming agent, along with the non-monotonic nature of the knowledge obtained (which is cascaded implicitly in the knowledge base), helps to improve the gaming agent's ability to behave optimally in future, directing the discussion towards a definite and valid conclusion [15]. The evaluation should be normalized so that duplication is avoided.
The knowledge base is updated with the output counter-argument, and the status of the counter-argument with reference to the submitted input argument is determined.
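The exchange loop of Sections 4.1 and 4.2 can be sketched as follows. The defect-scoring rule (unknown concepts count as holes) and the learning rule (the listener absorbs the argument's concepts) are simplifying assumptions; the paper's actual measures are richer.

```python
# Minimal sketch of the tarka exchange: agents alternate turns, each scoring
# defects (holes) in the opponent's argument and replying until no
# counter-argument can be produced, at which point the rational discussion
# halts. The scoring and learning rules below are illustrative assumptions.

def find_defects(argument, knowledge_base):
    """Concepts in the argument that the listener's knowledge base lacks."""
    return {c for c in argument if c not in knowledge_base}

def discuss(proposer_kb, listener_kb, opening):
    """Run the exchange; returns the transcript of arguments made."""
    argument = opening
    transcript = [opening]
    while True:
        defects = find_defects(argument, listener_kb)
        listener_kb.update(argument)          # belief revision from the exchange
        if not defects:                       # no hole found: no counter-argument,
            return transcript                 # so the discussion comes to a halt
        argument = tuple(defects)             # counter-argument raises the holes
        proposer_kb, listener_kb = listener_kb, proposer_kb
        transcript.append(argument)

kb1 = {"rope", "long"}                        # the arguer's prior beliefs
kb2 = {"snake", "long", "glossy"}             # the counter-arguer's beliefs
t = discuss(kb2, kb1, ("snake", "glossy"))
print(len(t), "snake" in kb1)  # 2 True
```

After the run, kb1 has learned 'snake' and 'glossy' from the exchange, mirroring how the arguer in the running example ends up with learned definitions.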

5. Protocol for knowledge-sharing by argumentative reasoning

The state transition diagram for the knowledge-sharing protocol [19] is given in Fig. 2. Every message exchanged over the discussion forum consists of the community identifier, the name of the agent, and the discussion forum identifier, followed by the actual message. 'Terminate' and 'Quit' messages also include the reason for such action upon the opposite knowledge peer. The messages in the interaction protocol are categorised as inward messages, outward messages and discussion messages (refer Table 1). Inward messages are the set of messages by which volunteers enter the virtual community. Outward messages are used by KS volunteers to exit from the community. Discussion messages are used for the actual argument interaction between the KS entities. The team of KS entities is grouped by trust. Every entity is expected to generate a counter-argument in opposition if it is not convinced by the proposed argument. Every argument or counter-argument is generated with a degree of confidence, an indicative measure of how confidently the entity generated it. To participate in the discussions of the virtual community, a KS volunteer either requires an invitation from a member of the discussion forum or is allowed to initiate a brand new discussion forum if that domain is not currently under discussion. At the same time, an outside volunteer may even reject the invitation to join a discussion, which is reflected in terms of the


Fig. 2. Interaction protocol.

Table 1. Message formats.

Inward messages:
⟨invite⟩ – ⟨comm_id, agent_name, invitation⟩
⟨join⟩ – ⟨comm_id, agent_name, domain list⟩
⟨reject⟩ – ⟨comm_id, agent_name, reason⟩

Arguing messages:
⟨harvest⟩ – ⟨comm_id, agent_name, relation(s), concept(s), quality⟩
⟨significant_analysis⟩ – ⟨comm_id, concept(s)⟩
⟨search_KB⟩ – ⟨comm_id, elementOfArg⟩
⟨explore_defects⟩ – ⟨comm_id, elementOfArg⟩
⟨refute⟩ – ⟨comm_id, elementOfArg, holesInArg⟩
⟨reward_assignment⟩ – ⟨comm_id, trust, conf, elementsOfArg, InfoContent⟩
⟨wait⟩ – ⟨comm_id, agent_name, discus_id, reason⟩
⟨listen⟩ – ⟨comm_id, agent_name, domain⟩

Outward messages:
⟨quit⟩ – ⟨comm_id, agent_name, discus_id, reason⟩
⟨terminate⟩ – ⟨comm_id, agent_name, reason⟩
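The Table 1 message formats can be encoded as a small message class with a classifier. The class layout and the `category` helper are assumptions for illustration; only the message kinds and field names come from Table 1.

```python
from dataclasses import dataclass

# Hypothetical encoding of the Table 1 message formats: every message carries
# the community identifier, then message-kind-specific fields.

@dataclass
class Message:
    comm_id: str
    kind: str            # invite | join | reject | harvest | ... | terminate
    fields: dict         # the remaining tuple components from Table 1

INWARD = {"invite", "join", "reject"}
OUTWARD = {"quit", "terminate"}

def category(msg):
    """Classify a message into the three Table 1 categories."""
    if msg.kind in INWARD:
        return "inward"
    if msg.kind in OUTWARD:
        return "outward"
    return "arguing"

m = Message("community-7", "refute",
            {"elementOfArg": "smoke", "holesInArg": ["no relation to fire"]})
print(category(m))  # arguing
```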

trust value assigned to that outsider. After joining the community, every KS entity is given some buffer time to make itself available for actual discussion. After initialisation, every entity is expected to wait until it obtains the attention of the other members of the discussion forum, through invoke messages. The discussion starts with a propose message, by which an argument is proposed over the discussion forum. All the listening members of the forum receive the proposed argument. From the listening state, the members switch to the argue state, harvest the concepts of the proposed argument, analyse its significance, identify the defects and update their respective knowledge bases, refute, and construct appropriate counter-arguments in return [13]. If the concept is not of much significance to the participating entity, or if the argument is proposed by a non-trusted entity, it can quit the discussion. If the generated counter-argument is found significant (in terms of knowledge gain) to the entity, it proceeds further by updating the new knowledge gained or performing defect exploration.
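The protocol states described above (and drawn in Fig. 2) can be sketched as a transition table. The exact state and event names below are approximations inferred from the prose, not the paper's formal protocol definition.

```python
# Approximate sketch of the Fig. 2 interaction protocol as a state machine.
# States and transitions are inferred from the prose and are assumptions.

TRANSITIONS = {
    "idle":   {"join": "wait", "invite": "wait"},
    "wait":   {"invoke": "listen"},
    "listen": {"propose": "argue", "quit": "exit"},
    "argue":  {"harvest": "argue", "explore_defects": "argue",
               "refute": "listen", "agree": "listen",
               "exit": "exit", "quit": "exit"},
}

def step(state, message):
    """Return the next protocol state; raises KeyError on an illegal move."""
    return TRANSITIONS[state][message]

s = "idle"
for msg in ("join", "invoke", "propose", "refute"):
    s = step(s, msg)
print(s)  # listen
```

The trace above follows one volunteer joining, being invoked into the listening state, receiving a proposal, and refuting it back onto the forum.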

Defect exploration is a state wherein the participating entity finds defects in the proposed argument in order to generate a corresponding counter-argument. Refutation is arguing against the proponent with valid inferences. If the information contained in the proponent's argument is agreeable, the members display an 'agree' message, or else a 'disagree' message. The discussion continues until a knowledge volunteer offers a voluntary exit from the discussion by an 'exit' message, which means that the volunteer has no interest in further discussion about the current subject (which may even reflect the volunteer's inability to assess the presence of defects in the input arguments, due to lack of knowledge). Alternatively, any volunteer wishing to leave the community in the middle of argumentation offers a 'quit' message to quit the discussion. Updating the knowledge base results in a significant increase in the knowledge of the respondents (the counter-argument generators). During harvesting and significance analysis, the knowledge base of the respondents also gets refreshed. As the knowledge base gets refreshed and extended, the respondents assign a reward, and thereby a trust value (computed using the current trust and the performance in the last transaction) is reflected for the proponent. The trust values are updated in the trust store. Reputation is then calculated as an indirect measure of the transmission of beliefs about how the agents are evaluated with regard to a socially desirable conduct of knowledge-sharing.

6. Results

The knowledge base consisted of 78 Indian logic concepts (enriched with qualities and other special attributes as recommended by the Nyaya classification system [30]) and 149 relations in all, comprising domains like bird, animal, geography, dairy, metal and nature.
A more realistic implementation of argumentative discussion was carried out with two knowledge volunteers, Agent 1 and Agent 2, discussing the occurrence of fire over a mountain region on perceiving smoke in that area. Agent 1 has concepts related to the 'nature' domain, but it lacks information about


'trees'. Agent 2 also has some concepts related to the 'nature' domain, but it lacks information about 'falls'. The argument proposed by Agent 1 is analysed for defects at Agent 2. Based on the defects analysed, the defect value is computed, which contributes to identifying the parts of knowledge in the argument that form the source for generating the counter-argument. The concepts and relations are assembled in a particular format (i.e. in natural language sentences) and the counter-argument is constructed and let out to continue further discussion. (Note: we have not concentrated on the natural language generation aspect while generating counter-arguments; instead, we have various structures of training sets of counter-arguments, based on which new counter-arguments are constructed.) At every argument exchange, defects are analysed from the submitted arguments at both agents; i.e. Agent 2 does defect analysis on the arguments proposed by Agent 1, and vice versa. When an argument is proposed and the information is not found in the knowledge base, the maximum weight of a concept/relation in the knowledge base is given as the defect value. Generally, the defect value is maximum when the knowledge base is refreshed (and a defect is found) on a larger scale. The levels of knowledge refreshed in the knowledge base also contribute to the analysis of defects in arguments. The sample splitting of argument elements is summarised in Table 2. Ideally, if there is no defect gain, the reward tends to 'infinity'. But if incomplete information is present in one's own knowledge base, the counter-argument obviously carries some form of 'assertive' statements for which the complete information is expected from the other end.
In such cases of 'infinite' rewards, we analyse the counter-argument that generated the reward: if it is assertive, we allow the discussion to continue; otherwise, it is assumed that there is no defect found with the counter-argument and that the discussion has been concluded.
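The defect-value and reward rules described above can be sketched numerically. The weighting scheme and the reciprocal reward are illustrative assumptions; the paper describes the rules only qualitatively (unknown concepts receive the maximum weight, and zero defect gain yields an 'infinite' reward).

```python
import math

# Illustrative sketch of the defect value and reward rules of Section 6:
# concepts missing from the knowledge base are scored at the maximum concept
# weight, and a zero total defect yields an 'infinite' reward. The exact
# formulas are assumptions; the paper does not give them.

def defect_value(argument, weights):
    """Sum of weights over defective (unknown) concepts in the argument."""
    max_w = max(weights.values())
    return sum(max_w for c in argument if c not in weights)

def reward(defects):
    """Reward inversely related to defect gain; no defect -> infinity."""
    return math.inf if defects == 0 else 1.0 / defects

weights = {"fire": 2.0, "smoke": 3.0}       # known concepts and their weights
d = defect_value(("mountain", "smoke"), weights)   # 'mountain' unknown -> 3.0
print(d, reward(d))
```

An 'infinite' reward would then trigger the assertiveness check described above before the discussion is taken as concluded.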


6.1. Example scenario

Let us assume Agent 2 proposes the first argument and starts the discussion. Agent 2 has an 'invariable' relation from 'smoke' to 'fire'. Agent 1 does not have any relation between the respective concepts (however, both concepts exist in Agent 1's knowledge base). Therefore, while arguing about guessing 'fire' on the 'mountain' by perceiving 'smoke' over the 'mountain', Agent 1 denies the opinion of Agent 2 by stating that there cannot be any relation from 'smoke' to 'fire' [refer Arg. 6 in Fig. 3]. The reason is that Agent 1 is ignorant of such things existing as part of the world knowledge. In response [refer Arg. 7 in Fig. 3], Agent 2 dismisses the denial of Agent 1, stating that there is an invariable relation at that location in the ontology, which is universally true according to the definition of common sense knowledge. (Invariable concomitance is one of the prime recommendations in philosophical debate; if any argument is attacked by a counter-argument using such relations, the corresponding argument shall be dismissed [30,31].) Agent 1 agrees with Agent 2 at the end of the discussion. This agreement is depicted via the learned components in Agent 1's knowledge base. The components learned through discussion (i.e. components of the ontology revised, and components updated afresh) are collectively generated as definitions of expanded knowledge at the end of the discussion (Fig. 3). The measurement of knowledge gained through discussion is shown in Fig. 4. For the first argument from Agent 2, Agent 1 has some serious difficulties of interpretation: Agent 1 is not able to track what Agent 2 is speaking about, i.e. Agent 1 has no idea of what is meant by the concept 'mountain'. Therefore, this is measured as defect gain for Arg. 1 (Fig. 4a). The reward for Arg. 1 is zero, since 'mountain', the subject of the argument, is not known to Agent 1 (Fig. 5a). The corresponding doubts are raised as a counter-argument

Table 2
Sample splitting of argument elements.

Argument                           Argument elements                                                Relations identified
Mountain has fire due_to smoke     Subject: mountain; Object of inference: fire; Reason: smoke     contact–contact (mountain, fire); invariable (smoke, fire)
Tree cause fire due_to lightning   Subject: tree; Object of inference: fire; Reason: lightning     causal (tree, fire); causal (lightning, fire)
Smoke coexist with falls           Subject: smoke; Object of inference: falls; Reason: not stated  locative (falls, smoke)
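The splitting step of Table 2 can be sketched as a small pattern-based parser. This is a minimal illustration, not the authors' implementation: the surface patterns (`has ... due_to`, `cause ... due_to`, `coexist with`) and the returned field names are assumptions made only for this example.

```python
import re

# Hypothetical patterns for the three argument forms shown in Table 2.
# The boolean flag records whether the form states a Reason.
PATTERNS = [
    (re.compile(r"(\w+) has (\w+) due_to (\w+)", re.I), True),       # "<subject> has <object> due_to <reason>"
    (re.compile(r"(\w+) cause (\w+) due_to (\w+)", re.I), True),     # "<subject> cause <object> due_to <reason>"
    (re.compile(r"(\w+) coexist with (\w+)", re.I), False),          # "<subject> coexist with <object>"
]

def split_argument(text):
    """Split an argument into Subject / Object of inference / Reason."""
    for pattern, has_reason in PATTERNS:
        m = pattern.fullmatch(text)
        if m:
            groups = [g.lower() for g in m.groups()]
            if has_reason:
                subject, obj, reason = groups
            else:
                subject, obj, reason = groups[0], groups[1], None  # Reason: not stated
            return {"subject": subject, "object_of_inference": obj, "reason": reason}
    raise ValueError("unrecognised argument form: " + text)
```

For instance, `split_argument("Mountain has fire due_to smoke")` yields the first row of Table 2 as a dictionary, with `reason` set to `None` for the third row, where no reason is stated.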

Fig. 3. Sample argument ‘‘Mountain has fire due to smoke”, showing that Agent 1 also agrees to the occurrence of ‘fire’ at the end of the discussion.
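The per-argument measurements plotted in Figs. 4 and 5 can be sketched as simple bookkeeping on the hearer's side. The class below is an illustrative assumption, not the authors' measure: it only encodes the two rules stated in the text, namely that assertive arguments yield zero knowledge gain and that reward is a direct measure of knowledge gain.

```python
# Illustrative ledger for the curves in Figs. 4 and 5 (hypothetical names).
class ArgumentLedger:
    def __init__(self):
        self.knowledge_gain = []  # one entry per received argument (Fig. 4)
        self.defect_gain = []     # one entry per received argument (Fig. 4)

    def record(self, gained_facts, unknown_concepts, assertive=False):
        """Record one received argument.

        gained_facts     -- number of new ontology components learned
        unknown_concepts -- number of concepts the hearer could not interpret
        assertive        -- assertive arguments carry no learnable knowledge
        """
        self.knowledge_gain.append(0 if assertive else gained_facts)
        self.defect_gain.append(unknown_concepts)

    def reward(self, i):
        # Reward is a direct measure of knowledge gain (Section 6.1).
        return self.knowledge_gain[i]
```

Under these assumptions, Agent 1's zero reward for Arg. 1 follows from recording no gained facts and one uninterpretable concept (‘mountain’), while Agent 2's zero rewards for Arg. 2 and Arg. 4 follow from their assertive flag.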


Fig. 4. Defect weight and knowledge gain (a) in Agent 1 and (b) in Agent 2.

Fig. 5. Reward graph (a) in Agent 1 and (b) in Agent 2.

(Arg. 2) to Agent 2. Agent 2 responds to Arg. 2. With respect to Arg. 3, Agent 1 gains some knowledge from Agent 2, shown by a rise in knowledge gain in Fig. 4a; the reward for Arg. 3 increases accordingly (Fig. 5a). However, there are also some serious defects: Agent 1 has no connection between the concepts ‘tree’ and ‘fire’, whereas Agent 2 states through Arg. 3 that ‘tree’ causes ‘fire’. This doubt is measured as a defect in Arg. 3 in Fig. 4a (note the rise in defects between Arg. 1 and Arg. 3), and the corresponding counter-argument is generated as Arg. 4. For Agent 2, the knowledge gain from both Arg. 2 and Arg. 4 is zero, because both are assertive arguments from which no knowledge can be gained. Since reward is a direct measure of knowledge gain, the rewards for Arg. 2 and Arg. 4 are also zero (Fig. 5b). Arg. 5 states the causal relationship between the concepts ‘tree’ and ‘fire’, mentioning ‘lightning’ as the reason behind the cause; in other words, this argument informs Agent 1 about the cause and effect of ‘fire’. (Before receiving this argument, while generating Arg. 4, Agent 1 had the concepts ‘tree’ and ‘fire’, but with a ‘has-a’ relationship between them; ‘lightning’ was also present in Agent 1’s knowledge base, but with no relationship to ‘tree’ or ‘fire’.) This learning about the cause and effect of concepts is reflected in the knowledge gain (Fig. 4a): the gain for Arg. 5 is higher than for the previous arguments, and the reward for Arg. 5 is correspondingly higher (Fig. 5a). Through Arg. 6, Agent 1 denies the existence of ‘fire’ on the ‘mountain’ by objecting to the existence of any relation between the reason ‘smoke’ and ‘fire’. Ideally, however, ‘smoke’ is invariably related to ‘fire’; Agent 2 has this information, whereas Agent 1 is ignorant of this common-sense knowledge behind ‘smoke’. In this situation, Arg. 6, generated by Agent 1, is found to carry a large number of defects, measured as defect gain by Agent 2 in Fig. 4b and reflected in the reward in Fig. 5b. This defect measurement is higher than any previous one because of its category, ‘invariable’: any argument that attacks a universally known invariable concomitance is futile. Therefore, Agent 2 ends the discussion, stating the ‘invariable’ relation between ‘smoke’ and

‘fire’ via Arg. 7. The discussion comes to the conclusion that from ‘smoke’ one can infer the occurrence of ‘fire’ on the ‘mountain’. The reward for Arg. 7 is infinite in Agent 1, since it learns a good quantum of flawless knowledge from Agent 2 (Fig. 5a). However, there may be further beliefs about the subject of discussion that are not revealed through arguments during the discussion. These beliefs remain unchanged, because the arguing agent has not exposed them to the opponent during argumentation. At the end of the discussion, both agents attempt to generate convincing definitions [29] about the subject, composed of the valid beliefs tested through argumentative discussion together with the presumed beliefs in their own knowledge bases.

7. Conclusion

The aim of this paper is to demonstrate the learning of new knowledge by argumentative reasoning in artificial knowledge societies. The key idea is to have volunteers that share unknown information (or concepts) with interested volunteers in a rational conversation environment. Every participating member performs reasoning and inferencing over the knowledge gained. The entire methodology of argumentative discussion and knowledge representation is inspired by Indian Philosophy. The objective is to utilize the defects present in a submitted argument for the further generation of counter-arguments, to eventually arrive at more expanded knowledge about the subject of discussion. In future work, we plan to work on the derivation of collective intelligence, and to analyse the introduction of episode sharing and its reflection on group behaviors in artificial knowledge societies.

References

[1] G. Aghila, G.S. Mahalakshmi, T.V. Geetha, KRIL – a knowledge representation system based on Nyaya Shastra using extended description logics, VIVEK Journal 15 (3) (2003). ISSN: 0970-1618.
[2] Alex Borgida, Ronald J. Brachman, Deborah L. McGuinness, Lori Alperin Resnick, CLASSIC: a structural data model for objects, in: Proceedings of the 1989 ACM SIGMOD International Conference on Management of Data, 1989, pp. 59–67.

[3] Esther Solomon, Indian Dialectics: Methods of Philosophical Discussion, vol. 2, Inst. of Learning and Research, Gujarat Vidya Sabha, Ahmedabad, India, 1976.
[4] Gradinarov, Phenomenology and Indian Epistemology – Studies in Nyaya-Vaisesika Transcendental Logic and Atomism, Sophia Indological Series, 1990.
[5] K. Hafeez, F. Alghatas, Knowledge management in a virtual community of practice using discourse analysis, The Electronic Journal of Knowledge Management 5 (1) (2007) 29–42.
[6] H. Hall, Social Exchange for Knowledge Exchange, at: Managing Knowledge: Conversations and Critiques, University of Leicester Management Centre, 2001, pp. 10–11.
[7] C.L. Hamblin, Fallacies, Methuen, London, 1970.
[8] James Robert Ballantyne, Lectures on the Nyaya Philosophy – Embracing the Text of Tarka Samgraha, Presbyterian Mission Press, Allahabad, India, 1849.
[9] C. Kimble, P. Hildreth, P. Wright, Communities of practice: going virtual, in: Knowledge Management and Business Model Innovation, Idea Group Publishing, Hershey (USA)/London (UK), 2001, pp. 220–234 (Chapter 13).
[10] J. Lave, E. Wenger, Situated Learning: Legitimate Peripheral Participation, Cambridge University Press, Cambridge, 1991.
[11] J. Mackenzie, Question begging in non-cumulative systems, Journal of Philosophical Logic 8 (1979) 117–133.
[12] G.S. Mahalakshmi, T.V. Geetha, A mathematical model for argument procedures based on Indian Philosophy, in: Proceedings of the International Conference on Artificial Intelligence and Applications (AIA 2006), part of the 24th IASTED International Multi-conference Applied Informatics (AI 2006), Innsbruck, Austria, 2006.
[13] G.S. Mahalakshmi, T.V. Geetha, Architecture of Indian-logic based procedural argumentation system for knowledge sharing, in: Proceedings of the IEEE SMC United Kingdom & Republic of Ireland Chapter Conference on Advances in Cybernetic Systems (AICS 2006), Sheffield Hallam University, UK, 2006.
[14] G.S. Mahalakshmi, T.V. Geetha, Navya-Nyaya approach to defect exploration in argument gaming for knowledge sharing, in: Proceedings of the International Conference on Logic, Navya-Nyaya & Applications – a Homage to Bimal Krishna Matilal (ICLNNA ’07), Jadavpur University, Calcutta, India, 2007.
[15] G.S. Mahalakshmi, T.V. Geetha, An algorithm for knowledge sharing by procedural argument representations based on Indian Philosophy, International Journal of Computer, Mathematical Sciences and Applications, in press.
[16] G.S. Mahalakshmi, T.V. Geetha, I-KARe – a rational approach to knowledge acquisition and reasoning using Indian logic based knowledge models, in: Proceedings of the Third Indian International Conference on Artificial Intelligence (IICAI ’07), Pune, India, 2007, pp. 1206–1222.
[17] G.S. Mahalakshmi, T.V. Geetha, The logic of reasoning by procedural argumentation for knowledge sharing, in: Proceedings of the International Conference on Computational Intelligence and Multimedia Applications (ICCIMA ’07), Sivakasi, India, IEEE CS Press, 2007.
[18] G.S. Mahalakshmi, T.V. Geetha, Gurukulam – reasoning based learning system using extended description logics, International Journal of Computer Science and Applications (IJCSA) 5 (1) (2007) 14–32 (special issue on new trends on AI techniques for educational technologies, Technomathematics Research Foundation).


[19] G.S. Mahalakshmi, T.V. Geetha, An Indian logic-based knowledge-sharing architecture for virtual knowledge communities, International Journal of Networking and Virtual Organisations (IJNVO) (2008) (special edition: Virtual Learning and Knowledge Sharing, Inderscience Publishers).
[20] G.S. Mahalakshmi, T.V. Geetha, Modeling uncertainty in refutation selection – a POMDP based approach, Journal of Uncertain Systems (2008) (special issue on ‘‘Advances in uncertain theory and its applications”).
[21] G.S. Mahalakshmi, T.V. Geetha, Reasoning and evolution of consistent ontologies using NORM, IJAI, Indian Society for Development and Environment Research (ISDER), ISSN: 0974-0635, 2 (S09) (2008) 77–94 (Spring 2009).
[22] N. Maudet, F. Evrard, A generic framework for dialogue game implementation, in: Proceedings of the Second Workshop on Formal Semantics and Pragmatics of Dialog, Universiteit Twente, The Netherlands, 1998.
[23] T.G. Papaioannou, G.D. Stamoulis, An incentives’ mechanism promoting truthful feedback in peer-to-peer systems, in: Proceedings of the Fifth IEEE/ACM International Symposium on Cluster Computing and the Grid, Cardiff, UK, 2005.
[24] T.G. Papaioannou, George D. Stamoulis, Optimizing an incentives’ mechanism for truthful feedback in virtual communities, in: AP2PC 2005, 2005, pp. 1–15.
[25] S. Parsons, A. Hunter, A review of uncertainty handling formalisms, in: Applications of Uncertainty Formalisms, Lecture Notes in Computer Science, vol. 1455, 1998.
[26] H. Prakken, Formal systems for persuasion dialogue, The Knowledge Engineering Review 21 (2006) 163–188.
[27] C. Roda, T. Nabeth, Attention management in virtual community environments, in: Journée de recherche de l’AIM (Association Information et Management) ‘‘Innovation et Systèmes d’Information”, 2006.
[28] Sarvepalli Radhakrishnan, Charles A. Moore (Eds.), A Source Book in Indian Philosophy – Chapter: The Method of Vada Debate, Princeton University Press, 1989, pp. 361–365.
[29] V. Suganya, G.S. Mahalakshmi, T.V. Geetha, Generating conceptual definitions from Indian logic based argumentation, in: CORE – Ninth Conference on Computing, Mexico, 2008, pp. 28–30 (special issue, Journal Research in Computing Science, ISSN: 1870-4069).
[30] Swami Virupakshananda, Tarka Samgraha, Sri Ramakrishna Math, Madras, India, 1994.
[31] Toshihiro Wada, Invariable Concomitance in Navya-Nyaya, Sri Garib Dass Oriental Series No. 101, Indological and Oriental Publishers, New Delhi, India, 1990, ISBN: 81-7030-227-7.
[32] D.N. Walton, E.C.W. Krabbe, Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning, SUNY Press, 1995, pp. 165–167.
[33] Douglas Walton, Justification of argumentation schemes, The Australasian Journal of Logic 3 (2005) 13.
[34] D. Walton, D.M. Godden, The impact of argumentation on artificial intelligence, in: Peter Houtlosser, Agnes van Rees (Eds.), Considering Pragma-Dialectics, Erlbaum, Mahwah, New Jersey, 2006, pp. 287–299.
[35] Douglas Walton, David M. Godden, Informal logic and the dialectical approach to argument, in: H.V. Hansen, R.C. Pinto (Eds.), Reason Reclaimed, Vale Press, Newport News, Virginia, 2007, pp. 3–17.