
A New Adaptive Trust and Reputation Model for Mobile Agent Systems

Dina Shehada (a), Chan Yeob Yeun (a), M. Jamal Zemerly (a), Mahmoud Al-Qutayri (a), Yousof Al-Hammadi (a), Jiankun Hu (b)

(a) Khalifa University of Science and Technology, Department of Electrical and Computer Engineering, Abu Dhabi, PO Box 127788, UAE
(b) University of New South Wales at the Australian Defence Force Academy (UNSW@ADFA), School of Engineering and Information Technology, Canberra, PO Box 7916, Australia

Journal of Network and Computer Applications, https://doi.org/10.1016/j.jnca.2018.09.011. Received 3 September 2017; revised 17 May 2018; accepted 18 September 2018.

Abstract


Mobile agents (MAs) are widely used in distributed application development. The interest in MAs derives from the various advantages they offer, such as autonomous behavior, mobility, and intelligence; their small size and low bandwidth requirements are other attractive features. However, the dynamic behavior of agents and hosts in Mobile Agent Systems (MASs) poses a challenging problem, and maintaining good performance is important for MASs to guarantee the quality of provided services. To address both of these issues we propose a new adaptive trust and reputation model for MASs. The proposed model provides users with the means to assess service providers and a decision-making basis for whom to interact with. It combines direct experience with indirect witnesses' experience evaluations, and it assesses the honesty of witnesses to filter out false evaluations. In addition, new "Incentive and Penalty" and "Second Chance" approaches are incorporated into the model to motivate honest behavior and accommodate changes in the system. A simulation testbed shows how the system adapts to changes in witnesses' behavior, and a comparison framework is developed to evaluate the proposed model against other existing models in the literature.


Keywords: Social Networks, Trust and Reputation, Mobile Agent Security, Dynamic Behaviors, Network Simulation, Evaluation Framework

1. Introduction

Trust and reputation evaluation is a human social behavior performed on a daily basis. Before purchasing a specific product, we collect information to judge how good the product is. The collected information comes from different sources, such as salespeople, personal opinion, friends, and TV advertisements, and is combined to form a final judgment about the product in order to decide whether to purchase it or not.



The same concept applies to network systems: hosts collect information about each other in order to assess their reliability (trust) level [12, 22, 20, 30]. Trust and reputation are widely used in Mobile Ad hoc NETworks (MANETs) [36, 33], Vehicular Ad hoc NETworks (VANETs), and peer-to-peer (P2P) networks [6, 27, 7, 2, 11, 26, 28]. Some authors used trust to evaluate the trustworthiness and cost of resources to manage node grouping and provide Quality of Service (QoS) [8]. Another field where trust and reputation are very popular is MASs.

Researchers have proposed several protocols to address the security issues in MASs. Although some of them provide good security features, most assume that a static list of Service Providers (SPs) is contacted, containing SPs who have registered and verified themselves with the Certificate Authority (CA) [32, 14, 13, 35, 29]. The assumption is that these SPs are all trusted and maintain their honest behavior at all times. Nevertheless, in real applications there is no guarantee that SPs will maintain static behavior: SPs might go from genuine to malicious or the other way around. Therefore, such protocols are not sufficient to secure MASs, due to the dynamic behavior of hosts and agents. Moreover, another concern hindering most service applications is the overhead on user devices caused by contacting all available SPs. A solution that addresses both issues is to dynamically choose a subset of the available SPs to contact, containing the most trusted SPs that are expected to provide better services. This can be achieved by incorporating the concept of trust and reputation into MASs. The solution enhances both the security level and the performance, and it provides users (owners) with a way to assess SPs and update their contact lists.

In this paper, we propose a new adaptive trust and reputation model for MASs. It provides users with the means to assess SPs and make a decision on whom to interact with. Both the evaluator's and witnesses' experiences are used in the evaluation, and the honesty of witnesses is assessed to filter out false evaluations. New dynamic adaptive weights, dependent on the frequency of interaction, the number of interactions, and the honesty of witnesses, are used to combine the different evaluations. Moreover, witnesses are motivated to provide truthful information through the use of incentives and penalties. Another important feature that distinguishes the proposed trust model from conventional models is the "Second Chance" approach, which gives malicious witnesses the chance to improve their behavior. By incorporating this concept, the model is able to cope with the dynamic behavior of agents.

In the next section, we review some of the proposed trust and reputation models for MASs, highlighting their main features, limitations, and drawbacks. The rest of the paper is structured as follows: Section 3 introduces the proposed trust and reputation model; in Section 4, the results of a simulation testbed are discussed; in Section 5, a framework for comparison is proposed to compare the new and related models; finally, Section 6 concludes the paper.

2. Related Work of Trust and Reputation Models for MASs

In this section, we review some of the previous trust and reputation models that exist in the literature. The models are evaluated based on their architecture, flexibility, ability to adapt to dynamic changes, consideration of dishonest information, consideration of the number of interactions, and suitability to MASs.

FIRE [15] is a trust and reputation model that assesses agent performance according to four values: Interaction Trust (IT), Role-based Trust (RT), Witness Trust (WT), and Certified Trust (CT). IT is calculated based on direct interactions between agents.


The value of IT depends on previous evaluations, where each evaluation is assigned a weight depending on its freshness; recent evaluations are assigned higher weights. RT is calculated based on environment-specific rules set by the agent system designer, and RT values are static values set by the designer. WT and CT both come from witness experience: WT values are collected from random witnesses, while CT values are collected from chosen certified ones. The overall trust is calculated as a weighted mean of the four trust values, where weights are assigned based on the reliability of ratings and on coefficients that users define to represent the importance of each trust value. The FIRE model shows an improvement in the performance of agent systems. However, the model does not address the issue of the agent evaluation process or how the coefficients of the four trust values are assigned. Another disadvantage is that the model assumes agents always provide honest evaluations and does not consider the case of false evaluations. The model only considers the recency of evaluations, not their number. In addition, FIRE is a static model that requires the user to provide the application with many static parameters, which puts a great limitation on the model [21].

The Trust and Reputation model for Agent-based Virtual OrganizationS (TRAVOS) [31] is a probabilistic model where trust is defined as the probability of a successful interaction between two agents. The evaluator's past direct personal experience is used to calculate the trust between two agents; when interaction history is lacking, witness evaluations are used. TRAVOS handles the possibility of inaccurate witness evaluations by comparing the evaluator's own observations with those of witnesses in order to assess the reliability of the witnesses' beliefs. This assessment is used to calculate the weight given to witnesses' beliefs. Although TRAVOS takes into consideration the possibility of inaccurate witness evaluations, it does not take into account the agents' dynamic behavior and works on the assumption that agents are static. In real systems, honest agents can become dishonest and vice versa; in such cases, TRAVOS is not capable of providing accurate trust evaluations. Moreover, the time and number of evaluations are not taken into consideration either [21].

The Rosaci model [23] is based on agent reputation (REP) and reliability (REL). REL is the agent's service reliability, which evaluates the agent based on the efficiency of the provided services. While REL is calculated based on the evaluator's direct experience, REP is calculated based on witnesses' evaluations in the system. Both are combined with dynamic weights adapted based on the number of interactions between agents, the reliability of recommendations provided by witnesses, and the percentage of agents the evaluating agent has contacted with respect to the overall community. As an extension to the model proposed in [23], the authors in [5] added certified agents that provide trusted evaluations. They conducted an experiment to promote the use of certified agents for honest evaluations, and their work showed that the proposed certifying-agents approach fortified the reputation [17, 4, 25].

The authors in [22, 24] proposed ReGret, a decentralized trust and reputation model that calculates the trust of an agent based on the evaluator's direct experience and the agent's reputation in the system. The reputation of an agent is represented by three evaluations: witness reputation, neighborhood reputation, and system reputation. Therefore, the same limitation as in the FIRE model, of being domain specific, occurs here as well. ReGret takes into account the possibility of untruthful information from lying witnesses by evaluating their credibility. Similar to TRAVOS, ReGret also assumes static behavior of agents.

The Jurca and Faltings reputation model proposed in [18] assesses the trustworthiness of an agent based on its reputation in the system. To gather reputation information, centralized agents exist in the network to collect reputation reports from agents. A payment method is proposed to motivate agents to provide honest evaluations.


Each agent has a certain amount of money that is increased or decreased according to the honesty of its evaluations. As deceitful agents continue to lie, they continue to lose money and eventually can no longer purchase reputation reports. Although this method encourages agents to be honest, it assumes that all agents' evaluations have the same weight, as reports are aggregated through averaging. Although the model penalizes lying witnesses by making them lose money, we argue that this method alone is not enough: a malicious agent who intends to disrupt the system might be willing to provide inaccurate evaluations and lose all of its money. Therefore, the proposed honesty evaluation method needs improvement; other agents in the system need to be able to identify lying witnesses and take action [21, 22].

Reputation is also used in Yu and Singh's model [37]. What makes this approach different from the preceding ones is the use of referral techniques to pass agents' beliefs between each other. When an agent wants to evaluate another agent, it requests neighboring agents to share their experiences. If the agents have past experience with the evaluatee, they send back their evaluations; otherwise, they refer the request to their neighbors, and so on. This process continues until the evaluating agent has enough information to base its judgment on. Each agent is evaluated based on its honesty and its ability to refer requests to trustful agents. The challenge in this model lies in deciding the number of evaluations that is adequate for an agent to evaluate another agent. Also, as agents pass their beliefs through the referral system, the network might be overwhelmed [21, 22].

Zuo and Liu [38] proposed an opinion-based model to calculate the reputation of a host. An opinion-based structure represented by three values (belief, disbelief, and uncertainty) is aggregated, using static weights set by the user, to reflect a host's reputation based on users' feedback. The model has a decentralized structure. However, inaccurate malicious feedback values are not handled in this model, users are not motivated to provide honest values, and the time of evaluations is not taken into consideration in the aggregation process.

In addition to the security protocol proposed in [10], Geetha and Jayakumar also proposed a trust and reputation model. The overall trust is calculated as the sum of direct and indirect evaluations. The model is able to cope with false evaluations as long as the number of malicious hosts is less than half the total number of hosts in the network. Although the protocol has a decentralized approach, the model assigns the same static weight to both direct and reputation evaluations and does not consider the freshness of evaluations. Also, if any host decides to join or leave the network, all hosts need to be informed and all routing tables have to be updated to incorporate the change.

The Comprehensive Reputation Model (CRM) [19] has two stages: online and offline evaluation. In the online stage, the trustworthiness of an agent is first estimated based on the evaluator's direct experience. After that, the direct evaluation is combined with a reputation evaluation calculated from the experience of a number of trust agents and referred agents. Trust agents are chosen by the evaluator and referred agents are chosen by the agent being evaluated. The weights assigned to calculate the trust depend on three factors: the importance of a transaction type, the value of the transaction itself, and a time factor that reflects the freshness of the interaction. The importance value differentiates between different types of interactions, giving higher weights to some interactions over others; the transaction value differentiates between interactions of the same type, for example giving higher weights to interactions with higher values. In the offline stage, the evaluator updates its list of trusted agents. The offline stage is expensive and is therefore only carried out in cases of bad performance of evaluating agents, low-quality trust values, or after a certain amount of time has passed. However, only the time of evaluation is considered, and no motivation is provided for agents to share information or act truthfully.


The Basheer et al. model proposed in [3] evaluates the level of confidence an agent has in another agent based on local confidence (LC) and global confidence (GC) values. LC is calculated based on direct experience, while GC is calculated based on the experience of other agents (witnesses). Each confidence is calculated from three values: the importance of the local or global confidence (I), a trust value (T), and a certainty value (C). The overall confidence level is the sum of the LC and the GC. The model has a decentralized architecture and the ability to evaluate the confidence level of an agent and the certainty of the evaluations. However, the weight assigned to the confidence value is a static parameter, and although the system evaluates the confidence level of an agent, it does not consider the case of dishonest agents or false witness evaluations.

Arvazhi and Zhang [1] proposed an evaluation model to filter dishonest evaluations. The system is made up of a group of buyer and seller agents. An active buyer needs to filter out dishonest evaluations coming from other buyer witnesses, called advisors. The method filters out dishonest advisors by bi-clustering them according to their honesty based on different criteria: they are clustered based on the similarity of ratings between the advisors and the active buyer and on the correlation of their ratings across the various criteria. All combinations of grouping the different criteria are used to add and remove advisors to/from the cluster, so advisors who lie about a certain criterion but are honest about others are detected. Also, the deviation from the majority of the advisors' votes is used to filter out dishonest advisors. The order in which a new criterion is added in each iteration affects the chosen advisors, and therefore a possible solution that properly decides the order is proposed [16, 34].

In the Beta-based Trust and Reputation Evaluation System (BTRES) [9], hosts' behavior is monitored and a beta distribution is used to reflect the reputation level of a node. Direct trust is the statistical expectation of a node's reputation value, and the total trust is composed of both direct and indirect values. In the model, recent data has higher weights; however, static weights are used to combine direct and indirect trust. The model has a decentralized structure. On the other hand, it does not consider dishonest evaluations or provide any kind of incentive or penalty system for cooperating hosts (witnesses).

Despite the advantages of the aforementioned models, they are impaired by some hidden assumptions, such as a centralized structure, static weights, the assumption of witness honesty, the assumption of static witness behavior, and the lack of an incentive mechanism to encourage honest behavior. To overcome such issues, we propose a new adaptive trust and reputation model that incorporates the new concepts of "Incentives and Penalty" and "Second Chance" to provide an enhanced evaluation method. As will be shown in Section 5, where a detailed comparison is presented, our proposed model provides an enhancement over the models reviewed in this section.

3. Proposed Trust and Reputation Model

In this section we propose a new adaptive trust and reputation model that overcomes the limitations of the existing models and provides MASs with a suitable adaptive evaluation technique. The proposed model relies not only on the evaluator's direct experience but also on the indirect witnesses' experience (reputation) with the evaluatee. It calculates adaptive weights to combine the different evaluations; these dynamic weights depend on the frequency of interaction, the number of interactions, and the honesty of witnesses. The honesty of witnesses is evaluated after every interaction. In this model we also introduce the new "Incentive and Penalty" approach: each witness has a balance of Evaluation units (Eunits) that are used to purchase information from other witnesses.


Figure 1: Scenario of Trust evaluation community


The Eunits balance is increased for honest witnesses, while dishonest witnesses are penalized by deducting Eunits from their balance. The amount of increase or deduction depends on the level of improvement or degradation in the honesty of the witness. Moreover, we incorporate another new approach, "Second Chance", to give the system the ability to adapt to changes in the behavior of agents: misbehaving dishonest witnesses are given the chance to improve and change their malicious behavior to honest, which gives them the opportunity to re-join the system and participate in the evaluation process. All of these factors make the proposed trust model a suitable and unique solution for trust evaluation. In the following subsections we explain the proposed trust model's evaluation process. First we discuss the calculation and update of direct evaluations. After that, the calculation of witnesses' evaluations (reputation) is explained. Then, both the direct evaluation and the reputation are combined to calculate the overall trust. Witnesses' honesty evaluation is discussed, followed by explanations of the "Incentive and Penalty" and "Second Chance" approaches. Before going into the details, we assume the scenario shown in Figure 1: the system has N witnesses and a total of m service providers (SPs) that evaluator i (EV_i) is trying to evaluate.


3.1. Direct Trust

Direct Trust (DT) is an evaluation of a party based on the evaluator's own direct experience with the evaluatee. After each interaction, the evaluator evaluates the behavior of the service provider and gives it a Direct Evaluation value (DE) between 0 and 1, representing completely malicious and completely trusted respectively. We chose the range between 0 and 1 because it is commonly used in trust evaluation models; it can also be read as a percentage reflecting the trustworthiness of an evaluatee (0%-100%). The DT value reflects an overall judgment of the evaluatee based on how trustworthy it has been with the evaluator. If the evaluator has had more than one interaction with the evaluatee, then all history values of DEs are aggregated to calculate DT. Many models in the literature calculate the final DT by taking the average of DEs [22]; however, we argue that this does not provide a good judgment because of the lack of time relevance. Time relevance is important because, if direct evaluations (interactions) are separated by a long period of time, then due to behavior changes in MASs these evaluations might no longer reflect the current behavior of the evaluatee. Other models calculate DT by combining the different direct evaluations and assigning more weight to recent ones [15, 22].


This method might seem promising because it takes into consideration the dynamic change in the behavior of evaluatees. However, we argue that this is still not enough: judgment based on time relevance alone does not take into consideration the number of interactions the evaluator had with the evaluatee. The number of interactions is also important because having more experience indicates having better knowledge about the evaluatee. As a solution, we propose a new method that aggregates the direct evaluations based on the frequency of interactions. The frequency of interactions is approximated by calculating the number of interactions that happened within the period of interaction between the evaluator and the evaluatee. Every evaluator in the system keeps track of its interactions (DEs) with evaluatees. Table 1 shows the direct interactions EV_i had with service provider j (SP_j) at different time units t, where t is an integer, 0 ≤ t, and (−) denotes that no interaction happened at that time unit.

Table 1: Direct evaluations of EV_i with SP_j at time unit t

t         | DE(i,j)_t
t_f       | 0.7
t_{f−1}   | 0.65
t_{f−2}   | −
t_{f−3}   | −
⋮         | ⋮
t_1       | 0.6
t_0       | 0.66

Assume that EV_i interacted with SP_j at time unit t_f and evaluated it with DE(i,j)_{t_f} = 0.7. To combine the current DE at t_f with the history of DEs between EV_i and SP_j, we first define σ(i,j) in Equation (1), which represents the frequency of interaction between EV_i and SP_j:

σ(i,j) = M(i,j) / ∆t(i,j)    (1)


where M(i,j) is the number of direct interactions EV_i had with SP_j and ∆t(i,j) is the period of time from the first until the latest interaction EV_i had with SP_j in the history; according to Table 1, ∆t(i,j) = t_{f−1} − t_0. σ has a value between 0 and 1, 0 ≤ σ(i,j) ≤ 1; it cannot exceed 1 because EV_i cannot have more than one interaction with SP_j in the same time unit. Because σ(i,j) represents the frequency of interaction, if no interaction history exists then σ(i,j) is set to 0. As explained next, σ(i,j) is used to assign dynamic weights when calculating DT(i,j). To calculate the final DT(i,j), the current (most recent) DE(i,j) at t_f is combined with the DE values in the history. Equation (2) shows the formula used to calculate DT(i,j), where DE(i,j)_{t_f} is the current and most recent DE and DE(i,j)_history is the average of the DE values from t_0 until t_{f−1}, as shown in Equation (3); X_{ij} is the number of interactions EV_i had with SP_j. The weight w(i,j) is dynamically assigned based on the value of σ(i,j) and is calculated in Equation (4).

DT(i,j) = (1 − w(i,j)) × DE(i,j)_{t_f} + w(i,j) × DE(i,j)_history    (2)

DE(i,j)_history = ( ∑_{t∈[t_0, t_{f−1}]} DE(i,j)_t ) / X_{ij}    (3)

w(i,j) = 1 − e^{−σ(i,j)/λ}    (4)

w has a value between 0 and 1, 0 ≤ w(i,j) ≤ 1. When EV_i does not have any history with SP_j, σ(i,j) is set to 0 and therefore w(i,j) = 1 − e^0 = 0, assigning full weight to the current DE(i,j)_{t_f}. If EV_i has a good amount of experience with SP_j, then more weight is assigned to the history evaluations. λ is a static value chosen by the application designer based on the type of application: if the application has a high average number of interactions, λ is assigned a high value (Case 1 below); if the application has a small number of interactions, λ is assigned a low value (Case 3). The value of λ is also between 0 and 1 (0 < λ ≤ 1). λ provides some judgment on the value of σ(i,j); in other words, λ expresses how good the value of σ(i,j) is relative to the average number of interactions of the application. In summary, the value of λ can be assigned as follows:

Case 1: 0.7 < λ ≤ 1, for applications with a high number of interactions
Case 2: 0.4 < λ ≤ 0.7, for applications with an average number of interactions
Case 3: 0 < λ ≤ 0.4, for applications with a low number of interactions
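To make the aggregation concrete, the following is a minimal Python sketch of Equations (1)-(4); the function name, the (time unit, DE) pair representation of the history, and the handling of a degenerate ∆t are our own illustrative assumptions, not part of the model's specification:

```python
import math

def direct_trust(history, de_current, lam):
    """DT(i, j) per Eqs. (1)-(4): blend the newest evaluation DE(i, j)_{t_f}
    with the history average, weighting history by interaction frequency."""
    if not history:                        # no history: sigma = 0, so w = 0
        return de_current
    times = [t for t, _ in history]
    m = len(history)                       # M(i, j): number of past interactions
    delta_t = max(times) - min(times)      # Delta-t(i, j) = t_{f-1} - t_0
    sigma = 1.0 if delta_t == 0 else min(m / delta_t, 1.0)   # Eq. (1)
    w = 1.0 - math.exp(-sigma / lam)       # Eq. (4)
    de_history = sum(de for _, de in history) / m            # Eq. (3)
    return (1.0 - w) * de_current + w * de_history           # Eq. (2)

# History of (time unit, DE) pairs as in Table 1, then a new DE of 0.7.
print(direct_trust([(0, 0.66), (1, 0.6), (4, 0.65)], 0.7, lam=0.4))
```

Note how a sparse history (small σ) pushes the weight onto the current evaluation, while a dense history shifts it toward the average, exactly as the discussion of λ above describes.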


Figure 2 shows the effect of changing λ. Take, for example, the value σ = 0.4: when λ = 1, λ = 0.8, λ = 0.5, and λ = 0.2, the value of w is approximately 0.3, 0.37, 0.53, and 0.85 respectively. So, for the same σ, different weights are assigned in different applications. At a σ value of 0.4, a high-interaction application (λ = 0.8) is assigned a lower weight compared with a low-interaction application (λ = 0.2). What we are trying to achieve here is a sense of resource assessment, or appreciation: in applications with few interactions, the available information is scarcer than in applications with many interactions, and therefore the value of σ is valued more, similar to how water resources are valued more in the desert than in the cities.

Figure 2: Change in weight assigned to history for different values of λ

3.2. Reputation

In the previous subsection we discussed how the evaluator calculates DT based on its own experience. In this subsection, another evaluation value is used to judge the behavior of the evaluatee SP, using other witnesses' experience. In the scenario in Figure 1, the system has N witnesses. EV_i purchases information from witnesses and calculates REP(i,j), the reputation of SP_j in the system as computed by EV_i. To communicate with honest witnesses we define a confidence value Conf(i,j,k), which represents the confidence of EV_i in the honesty of the information provided by witness k (Wit_k) about SP_j. The confidence has a value between 0 and 1, 0 ≤ Conf(i,j,k) ≤ 1. It measures the honesty of witnesses and is updated after every interaction; the evaluation of witnesses' honesty and the update of their confidence are discussed in detail in subsection 3.4. N_j is the number of witnesses with honest information about SP_j in the system, N_j^honest is the set of witnesses with honest information about SP_j, and I(k,j) is the number of interactions Wit_k had with SP_j. Wit_k is considered honest if its confidence value is higher than a threshold thr_conf, i.e., thr_conf < Conf(i,j,k). Equation (5) calculates the total reputation of SP_j as a weighted sum of the witnesses' DT values. The reputation value is between 0 and 1, 0 ≤ REP(i,j) ≤ 1. Witnesses are assigned dynamic weights according to their experience with the evaluatee relative to other witnesses: witnesses with more experience (a higher number of interactions) are assigned higher weights and their evaluations are considered more important.

REP(i,j) = ∑_{k∈N_j^honest} DT(k,j) × ( I(k,j) / ∑_{k′∈N_j^honest} I(k′,j) )    (5)
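A small sketch of Equation (5), under the assumption that the honest set N_j^honest has already been determined; the dictionary-based representation and function name are illustrative:

```python
def reputation(dt_by_witness, interactions_by_witness):
    """REP(i, j) per Eq. (5): honest witnesses' DT values for SP_j,
    weighted by each witness's share of the total experience I(k, j).
    Both dicts are keyed by the ids of the honest set N_j^honest."""
    total = sum(interactions_by_witness.values())
    if total == 0:
        return 0.0                         # no witness experience to draw on
    return sum(dt * interactions_by_witness[k] / total
               for k, dt in dt_by_witness.items())

# The witness holding 8 of the 10 recorded interactions dominates the estimate.
print(reputation({"w1": 0.9, "w2": 0.5}, {"w1": 8, "w2": 2}))  # -> ≈ 0.82
```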

3.3. Overall Trust

In this subsection we explain how the final overall value of trust is calculated. The final trust is based on the evaluator's DT in addition to the total reputation of the evaluatee. Equation (6) shows how the final trust of SP_j evaluated by EV_i, Trust(i,j), is calculated: it combines the direct trust of EV_i with the reputation of SP_j. φ(j) is a dynamic weight representing the average confidence in the honest witnesses' information about SP_j, see Equation (7). φ(j) has a value between thr_conf and 1, thr_conf < φ(j) ≤ 1, because only information coming from honest witnesses is aggregated to calculate the reputation. φ(j) is applied to the reputation value to reflect the honesty of the witnesses' information about SP_j: the higher the confidence in their honesty, the higher the value of φ. According to Equation (6), the final trust is the sum of two values, each with a range between 0 and 1; therefore, the final trust value is between 0 and 2, 0 ≤ Trust(i,j) ≤ 2.

Trust(i,j) = DT(i,j) + φ(j) × REP(i,j)    (6)

φ(j) = ∑_{k∈N_j^honest} Conf(i,j,k) / N_j    (7)


The final value of Trust(i,j) is used to evaluate the overall trust level of SP_j. The evaluator evaluates the different available service providers, calculates the total trust of each of them, and then decides whether or not to interact with each evaluatee: if the value of the trust is higher than the threshold thr_trust, it communicates with the service provider. The threshold value thr_trust is set by the system user, where 1 < thr_trust ≤ 2. Naturally, the higher the threshold, the more certain the evaluator will be about the trustworthiness of a service provider.
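Putting Equations (6)-(7) and the interaction decision together, a minimal sketch follows; the function name, argument shapes, and example numbers are ours, and the default thr_trust = 1.2 is the value used later in the simulation:

```python
def overall_trust(dt, rep, conf_honest, thr_trust=1.2):
    """Eqs. (6)-(7): phi(j) is the mean confidence in the honest witnesses'
    information; the resulting trust lies in [0, 2]."""
    phi = sum(conf_honest) / len(conf_honest)    # Eq. (7)
    trust = dt + phi * rep                       # Eq. (6)
    return trust, trust > thr_trust              # interact only above threshold

print(overall_trust(0.8, 0.82, [0.9, 0.85]))     # -> (≈ 1.52, True)
```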


3.4. Confidence and Witness Honesty Evaluation

Earlier we mentioned that a confidence value Conf(i,j,k) is defined, representing the confidence of EV_i in the honesty of information provided by Wit_k about SP_j, with 0 ≤ Conf(i,j,k) ≤ 1. Honest witnesses are defined as witnesses with a Conf above a threshold value thr_conf, usually 0.6 < thr_conf ≤ 1, indicating that honest witnesses are witnesses that the evaluator trusts with a value higher than 60%. After calculating the Trust of all service providers, the evaluator interacts with certain trusted service providers and then evaluates each one of them based on the current interaction. Witnesses' honesty is evaluated according to two concepts: the first is the deviation of the witness's information from the evaluator's current actual evaluation, and the second is the deviation of the witness's information from the other honest witnesses' information in the system. Assume that EV_i communicated with SP_j and evaluated it with DE_actual(i,j); EV_i now needs to update its confidence in all witnesses' information about SP_j. For Wit_k, EV_i calculates two confidence values, as shown in Equations (8) and (9). The first measures the deviation of DT(k,j), the direct trust of Wit_k in SP_j, from the evaluator's direct actual evaluation DE_actual(i,j); the higher the difference, the lower the value of Conf_1(i,j,k). The second measures the deviation of DT(k,j) from the average DT values of all the other honest witnesses in the system, DT_avg(j), whose formula is shown in Equation (10). Because DE and DT have values between 0 and 1, the confidence values also have the same range: 0 ≤ Conf_1(i,j,k) ≤ 1, 0 ≤ Conf_2(i,j,k) ≤ 1.

Conf_1(i,j,k) = 1 − |DE_actual(i,j) − DT(k,j)|    (8)

Conf_2(i,j,k) = 1 − |DT_avg(j) − DT(k,j)|    (9)

DT_avg(j) = ∑_{k∈N_j^honest} DT(k,j) / N_j    (10)

The final confidence of Wit_k is calculated in Equation (11) as the weighted sum of the two confidence values from Equations (8) and (9). Because the confidence of Wit_k is being evaluated by the evaluator, the range of the weight µ is set to 0.5 < µ ≤ 1 to assign more weight to the confidence calculated from the evaluator's own experience. Based on this confidence evaluation, a judgment on the honesty of each witness can be approximated.

Conf(i,j,k) = µ × Conf_1(i,j,k) + (1 − µ) × Conf_2(i,j,k)    (11)
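A sketch of the confidence update in Equations (8)-(11); the list-based inputs and the default µ = 0.7 (the value used in the simulation of Section 4) are our assumptions:

```python
def witness_confidence(de_actual, dt_k, dt_honest, mu=0.7):
    """Eqs. (8)-(11): blend the deviation from the evaluator's own
    observation (Conf_1) with the deviation from the honest-witness
    average (Conf_2); mu > 0.5 favors the evaluator's own experience."""
    conf1 = 1.0 - abs(de_actual - dt_k)              # Eq. (8)
    dt_avg = sum(dt_honest) / len(dt_honest)         # Eq. (10)
    conf2 = 1.0 - abs(dt_avg - dt_k)                 # Eq. (9)
    return mu * conf1 + (1.0 - mu) * conf2           # Eq. (11)

# A witness whose report matches the evaluator's observation scores high.
print(witness_confidence(0.8, 0.78, [0.75, 0.82, 0.8]))  # -> ≈ 0.98
```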


3.5. Incentive and Penalty Approach

A major challenge in trust is to motivate witnesses not only to share information upon request but also to share it honestly. Trust and reputation models are based on the social network and the willingness of witnesses to participate and pass their experience to others; whenever the social network breaks down, the whole trust evaluation approach becomes non-functional. As a result, incorporating an incentive and penalty mechanism is a necessity in our trust model. Witnesses are motivated to share honest information by providing incentives (rewards) to honest witnesses and penalizing dishonest ones. In the proposed trust and reputation model, every witness in the system has a balance of Eunits used to purchase information from other witnesses; Eunits(k) is the Eunits balance of Wit_k. The incentive is that honest witnesses are rewarded with an amount of Eunits for every honest share of information, while the penalty is that dishonest witnesses lose Eunits as they continue to lie. Lying witnesses eventually run out of Eunits and are no longer able to get any information from other witnesses to assess their evaluatees. To explain the incentive and penalty mechanism in detail, we define another term called the difference, Diff(i,j,k), which represents the improvement or degradation in the honesty of Wit_k about SP_j and is calculated by EV_i as follows:

Diff(i,j,k) = Conf_new(i,j,k) − Conf_old(i,j,k)    (12)


Conf_new(i,j,k) is the recent new confidence value, Conf_old(i,j,k) is the previous confidence value, and Diff(i,j,k) is the difference between the two. The difference value has a range between −1 and 1, −1 ≤ Diff(i,j,k) ≤ 1. A Diff(i,j,k) of −1 shows complete degradation in Wit_k's behavior, i.e., moving from completely honest to completely dishonest, while a Diff(i,j,k) of 1 shows complete improvement, i.e., moving from completely dishonest to completely honest behavior. EV_i calculates this difference value for all the witnesses and then gives incentives to improving witnesses and penalizes those degrading in their honesty. In the proposed model there are six cases in which a witness gains or loses Eunits. The cases are as follows:


Case 1: if Conf(i,j,k) > thr_conf and Diff(i,j,k) > 0, then Eunits(k) = Eunits(k) + (1 + Diff(i,j,k))
Case 2: if Conf(i,j,k) = 1 and Diff(i,j,k) = 0, then Eunits(k) = Eunits(k) + 1
Case 3: if Conf(i,j,k) ≤ thr_conf and Diff(i,j,k) > 0, then Eunits(k) = Eunits(k) + Diff(i,j,k)
Case 4: if Conf(i,j,k) > thr_conf and Diff(i,j,k) < 0, then Eunits(k) = Eunits(k) + Diff(i,j,k)
Case 5: if Conf(i,j,k) ≤ thr_conf and Diff(i,j,k) < 0, then Eunits(k) = Eunits(k) + (Diff(i,j,k) − 1)
Case 6: if Conf(i,j,k) = 0 and Diff(i,j,k) = 0, then Eunits(k) = Eunits(k) − 1


There are three cases in which a witness is rewarded with Eunits: Cases 1, 2, and 3. In Case 1, Wit_k has a confidence above the threshold and shows improvement in its honesty; it therefore gains one Eunit in addition to the amount of improvement Diff. In Case 2, Wit_k has reached the maximum confidence: it is completely honest and there is no room to improve, so as long as it maintains its honest behavior it gains one Eunit. In Case 3, Wit_k has a confidence below the threshold but is showing improvement in its honesty, so the amount of its improvement Diff is added to its Eunits balance. The other three cases, 4, 5, and 6, cover the penalties for dishonest witnesses. In Case 4, Wit_k has a confidence higher than the threshold but is showing degradation in its honesty, so the amount of degradation Diff is deducted from its Eunits balance. In Case 5, Wit_k has a low confidence below the threshold and is showing degradation in its honesty, so one Eunit in addition to the amount of degradation Diff is deducted from its balance. In Case 6, Wit_k is extremely malicious: it has the minimum confidence and maintains its dishonest behavior, so one Eunit is deducted from its balance.
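The six cases translate directly into one branch each; the sketch below is our own transcription (the thr_conf default matches the simulation in Section 4), and note that Diff is negative in Cases 4 and 5, so adding it is a deduction:

```python
def update_eunits(eunits, conf, diff, thr_conf=0.7):
    """Apply the six incentive/penalty cases to a witness's Eunits balance."""
    if conf > thr_conf and diff > 0:       # Case 1: honest and improving
        return eunits + 1 + diff
    if conf == 1 and diff == 0:            # Case 2: fully honest, stable
        return eunits + 1
    if conf <= thr_conf and diff > 0:      # Case 3: below threshold, improving
        return eunits + diff
    if conf > thr_conf and diff < 0:       # Case 4: honest but degrading
        return eunits + diff
    if conf <= thr_conf and diff < 0:      # Case 5: dishonest and degrading
        return eunits + diff - 1
    if conf == 0 and diff == 0:            # Case 6: persistently malicious
        return eunits - 1
    return eunits                          # combinations not covered: no change
```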


3.6. Second Chance Approach

Taking into consideration the possibility of behavior enhancement for witnesses is important because it gives the system the flexibility to adapt to dynamic changes in behavior. For example, a witness known to be dishonest, with a confidence below the threshold, can change its behavior and start improving; such improvement corresponds to Case 3 discussed in the previous subsection. In such a case, we want to give this witness a chance to improve and re-join the system to participate in evaluation events. To do so, we define a promotion value pro, which is added to the witness's confidence when its honesty level improves. If EV_i notices improvement in Wit_k, it calculates the promotion pro(i,j,k), and the final confidence value given to Wit_k is as shown in Equation (13):

Conf(i,j,k) = Conf(i,j,k) + pro(i,j,k)    (13)

pro depends on Diff, the amount of improvement. Equation (14) shows the formula used to calculate the promotion pro(i,j,k) given by EV_i to Wit_k for its improvement in the honesty of its information about SP_j, where 0 < pro(i,j,k) < 1 − thr_conf. The promotion value is bounded by 1 − thr_conf, which is the maximum value that can be added to the witness's confidence without exceeding the maximum confidence value of 1. γ is an adaptive value greater than 0, γ > 0, used for two main reasons. The first is to limit the value of the promotion given to witnesses, in other words, to set an upper bound on promotions.

pro(i,j,k) = e^{Diff(i,j,k)/γ} − 1    (14)

Let β be the maximum promotion the user decides to give for the maximum behavior improvement at Diff(i,j,k) = thr_conf, with 0 < β < 1 − thr_conf. The value of γ that enforces this upper bound on the promotion is then obtained as follows:

β = e^{thr_conf/γ} − 1, therefore γ = thr_conf / ln(1 + β)    (15)
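A short sketch of Equations (14)-(15); the helper names are ours, and the example uses thr_conf = 0.7 and β = 0.05 from the simulation parameters:

```python
import math

def gamma_for_bound(thr_conf, beta):
    """Eq. (15): choose gamma so the promotion never exceeds beta."""
    return thr_conf / math.log(1 + beta)

def promotion(diff, gamma):
    """Eq. (14): promotion grows with the improvement Diff, capped via gamma."""
    return math.exp(diff / gamma) - 1

gamma = gamma_for_bound(0.7, 0.05)    # ≈ 14.35
print(promotion(0.7, gamma))          # maximal improvement -> ≈ 0.05 = beta
```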


Figure 3: The choice of gamma values for different β values


Figure 3 shows the different values of γ assigned for different β values at thr_conf = 0.6. The other use of γ is as a penalizing factor that decreases the value of the promotion given to a witness that starts to behave dishonestly after being given a second chance. The value of γ is chosen so as to make it harder for such witnesses to re-build their confidence and get back into the system. If a witness was given another chance, managed to build its confidence back up to the threshold value, and then suddenly returned to its malicious behavior, its γ value is increased by a value α, as shown in Equation (16), where α > 0. In the proposed trust model, α represents the number of times a witness has behaved maliciously after rebuilding its confidence (after being given another chance). Figure 4 shows the decrease in the promotion given to a witness as the value of γ increases due to its malicious behavior.

γ_new = γ_old + α    (16)

Figure 4: Decrease of promotion given to a witness as the value of γ increases due to its malicious behavior

3.7. Features and Limitations of the Proposed Model

In this section we discuss some of the strengthening features, as well as some assumptions and limitations, of the proposed model. Some of the good features offered by the proposed model are:

• The model is able to cope with changes in the behavior of witnesses in the system.


• It provides users with the means to assess service providers to make a decision on whom to interact with.
• It uses both direct and reputation values to provide a more thorough judgment.
• It uses adaptive weights, dependent on the frequency of interaction, the number of interactions, and the honesty of witnesses, to combine the different evaluations.
• It assesses the honesty of witnesses after every interaction to filter inaccurate evaluations.
• Unlike other models, it does not rely on the majority of witnesses being honest to filter inaccurate evaluations; it uses the evaluator's and the honest witnesses' evaluations to filter out dishonest witnesses.
• It promotes witnesses' honesty through the developed "Incentives and Penalty" approach, according to which witnesses gain or lose Eunits according to their current behavior status and the amount of improvement or degradation in their honesty level.
• It provides malicious lying witnesses the chance to enhance their behavior gradually in order to be reconsidered in the evaluation. This is achieved through the unique developed "Second Chance" approach.


By incorporating these approaches, the model is able to adapt to the dynamic changes in witnesses' behavior. The proposed model is also limited by some assumptions. The assumptions and limitations can be summarized in the following points:


• In the proposed model we calculate the direct evaluation values of all witnesses in the system. Of course, we assume that the evaluator has at least two interactions with the evaluated service provider in its history.
• To calculate the total trust of an evaluatee, we assume that the system has some honest witnesses who have a history of interactions with the evaluated service provider.
• At system start-up, all witnesses have the same balance of Eunits and the same confidence level. It takes our model one round to identify honest and dishonest witnesses and update their Eunits accordingly.
• Although service providers have dynamic behavior, i.e., they can change from genuine to malicious or the other way around, we assume that they maintain the same behavior with all users and witnesses regardless of their identity: while they are malicious, they act maliciously with everyone, and while they are genuine (trusted), they act genuinely with everyone.

4. Simulation


A testbed simulation is designed to evaluate the proposed trust and reputation model. The testbed simulates interactions between different users and service providers, and the proposed model is used to evaluate the different available service providers and decide whom to interact with. The simulated scenario has six evaluators and 25 service providers; a user evaluates the different service providers to choose the best five to interact with. Simulation results show how, using our model, the system adapts to changes in witnesses' behavior. At the starting point, all witnesses are assumed to be honest. After some time, Wit_1 decides to become malicious and lies in the information it provides. Using the proposed model, the malicious witness is detected and eliminated from the evaluation process. After a while, Wit_1 returns to its genuine behavior and starts improving; as a result, Wit_1 is given the chance to rebuild its confidence as it maintains its genuine behavior. It gradually builds its confidence until it reaches the confidence threshold, at which point Wit_1 is given a second chance to participate in the evaluation process again. The simulation results and figures are discussed in detail next. Table 2 shows the different parameters used in the simulation. We assume that the system user wants to communicate with witnesses that are trusted with a percentage higher than 70% (thr_conf = 0.7). Moreover, service providers are considered trusted if they have a Trust value higher than 60% (thr_trust = 1.2). The simulated application is assumed to have an average number of interactions, therefore λ = 0.4. The evaluation process is conducted for 50 rounds.

Table 2: Simulation parameters

Parameter          | Value
thr_trust          | 1.2
thr_conf           | 0.7
µ                  | 0.7
β                  | 0.05
λ                  | 0.4
Number of rounds   | 50


Figure 5 shows the choice of witnesses in each round. At the beginning, all witnesses are honest and therefore all of them participate in the evaluation process. At round 3, Wit_1 starts lying and, as a result, is eliminated in the next round (round 4). At round 15, Wit_1 starts telling the truth again and maintains its honest behavior. Due to the "Second Chance" approach, Wit_1 is able to gradually build its confidence until it returns to the evaluation process, which happens at round 30.

Figure 6 shows the change in the Eunit balance of each witness due to the "Incentives and Penalty" approach. As shown, honest witnesses gained Eunits as they maintained their honest behavior. On the other hand, once Wit_1 started lying, it started losing Eunits from its balance; once it became genuine again, it started regaining Eunits.

Figure 7 shows the change in confidence for Wit_1. The confidence of Wit_1 is updated at round 4, where Wit_1 is identified as a malicious lying witness; its confidence values are degraded and it is therefore excluded from the evaluation. After a while, Wit_1 starts telling the truth and, as a result, its confidence is gradually rebuilt as it maintains its genuine behavior. At round 30, its confidence values are higher than thr_conf and Wit_1 is given another chance. In the next section, we provide a comparison framework to evaluate the proposed trust and reputation model and compare it with the other models that exist in the literature.
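The round-by-round adaptation can be pictured with the following self-contained sketch of one evaluation round for a single service provider. It condenses Equations (8)-(12) and the six Eunits cases of Section 3.5 into one loop; the dictionary bookkeeping, the fallback when no honest witness exists, and the use of the updated confidence in the case conditions are our own illustrative choices, not the authors' testbed code:

```python
def evaluation_round(witness_dt, de_actual, conf, eunits,
                     thr_conf=0.7, mu=0.7):
    """One round for one SP_j: update each witness's confidence
    (Eqs. 8-11) and apply the incentive/penalty cases to its Eunits.
    conf and eunits are dicts mutated in place; returns the honest set."""
    honest = [k for k in conf if conf[k] > thr_conf]
    # DT_avg(j), Eq. (10); fall back to the evaluator's own view if empty.
    dt_avg = (sum(witness_dt[k] for k in honest) / len(honest)
              if honest else de_actual)
    for k, dt_k in witness_dt.items():
        conf1 = 1 - abs(de_actual - dt_k)          # Eq. (8)
        conf2 = 1 - abs(dt_avg - dt_k)             # Eq. (9)
        new = mu * conf1 + (1 - mu) * conf2        # Eq. (11)
        diff = new - conf[k]                       # Eq. (12)
        if new > thr_conf and diff > 0:            # Case 1
            eunits[k] += 1 + diff
        elif new == 1 and diff == 0:               # Case 2
            eunits[k] += 1
        elif new <= thr_conf and diff > 0:         # Case 3
            eunits[k] += diff
        elif new > thr_conf and diff < 0:          # Case 4
            eunits[k] += diff
        elif new <= thr_conf and diff < 0:         # Case 5
            eunits[k] += diff - 1
        elif new == 0 and diff == 0:               # Case 6
            eunits[k] -= 1
        conf[k] = new
    return [k for k in conf if conf[k] > thr_conf]
```

Run over 50 rounds with one witness switching between truthful and false reports, this loop reproduces the qualitative behavior of Figures 5-7: elimination after lying, gradual confidence rebuilding, and eventual re-admission.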

Figure 5: Change in witnesses contacted in each round

Table 3: Comparison of the proposed trust model and the reviewed trust models

Related Work                        | Structure     | Evaluation type               | Handle inaccurate evaluations | Weights | Incentives | Experience | Time
FIRE [15]                           | Decentralized | Direct and indirect           | No      | Static  | No  | No  | Yes
TRAVOS [31]                         | Decentralized | Direct only unless not enough | Yes     | Dynamic | No  | No  | No
Rosaci [23, 5]                      | Decentralized | Direct and indirect           | Limited | Dynamic | No  | Yes | No
ReGret [24]                         | Decentralized | Direct and indirect           | Yes     | Static  | No  | No  | No
Jurca and Faltings [18]             | Centralized   | Indirect only                 | No      | Static  | Yes | No  | No
Yu and Singh [37]                   | Decentralized | Indirect only                 | Yes     | Dynamic | No  | No  | No
Geetha and Jayakumar [10]           | Decentralized | Direct and indirect           | Yes     | Static  | No  | No  | No
CRM [19]                            | Decentralized | Direct and indirect           | Yes     | Dynamic | No  | No  | Yes
Basheer et al. model [3]            | Decentralized | Direct and indirect           | No      | Static  | No  | No  | No
Arvazhi and Zhang model [1]         | Decentralized | Direct and indirect           | Yes     | Static  | No  | No  | Yes
BTRES [9]                           | Decentralized | Direct and indirect           | No      | Static  | No  | No  | Yes
Zuo and Liu model [38]              | Decentralized | Indirect only                 | No      | Static  | No  | No  | No
Proposed Trust and Reputation Model | Decentralized | Direct and indirect           | Yes     | Dynamic | Yes | Yes | Yes

Figure 6: Change in Eunit balance for each witness

5. Framework for Comparison of the Proposed Trust Model and Related Work

To evaluate the proposed model and compare it to the existing work reviewed in the literature, we propose a comparison framework. The framework classifies models for MASs according to the following:

• Structure: evaluates the model according to its structure, centralized or decentralized. MASs have a decentralized architecture; therefore, models with a decentralized structure are more suitable to MASs. Moreover, with decentralized structures the issue of a single point of failure does not arise.

• Types of evaluation: some models use the direct evaluations of the evaluator to make the decision, while others rely on witnesses' experience (reputation). Combining both evaluations adds flexibility to the system and provides a more generic and trusted evaluation technique.

• Ability to handle inaccurate evaluations: inaccurate evaluations undermine the system's functionality and make the system less trusted. As a result, there is a need for a mechanism that eliminates the effect of lying witnesses in the system.

• Weight assignment: whether the model uses static or dynamic adaptive weights to combine the different evaluations. Due to the changing behavior in MASs, it is preferable to use dynamic weights that can be adjusted to adapt to the dynamic changes in the system.

• Incentives: trust is based on the willingness of witnesses to share their information honestly. Therefore, providing an incentive mechanism for witnesses to share their information is an important factor.

• Experience factor: it is logical to consider the amount of experience a witness or evaluator has with the evaluatee. Witnesses with more experience are considered to be of higher value; therefore, considering the number of interactions is also an important factor.

• Time factor: the time factor is used to reflect the behavior of the evaluatee in different periods of time. Time adds a dynamic element that gives the system the flexibility to adapt the evaluation based on the recency of the evaluation.

Figure 7: Change in confidence of malicious Wit_1 for different service providers

Table 3 shows the evaluation of the proposed model and the various models reviewed in Section 2, comparing the different models according to the proposed framework. Most of the reviewed models have a decentralized structure, except the Jurca and Faltings model, which relies on centralized agents to base its judgment on. Most of the models also use both direct and indirect evaluations. The Jurca and Faltings, Zuo and Liu, and Yu and Singh models are reputation-based and rely on indirect experience only. Moreover, TRAVOS relies on the evaluator's direct experience unless that experience is insufficient.


Many of the proposed protocols provide a mechanism to handle inaccurate witness evaluations. TRAVOS, ReGret, Yu and Singh and CRM for example use evaluator direct experience to evaluatee witnesses credibility to give them less weight or eliminate them from the evaluation process. On the other hand, Geetha and Jayakumar model handles inaccurate evaluations in a different manner. The model does not identify lying witnesses but it has the ability to tolerate inaccurate evaluations under the assumption that the number of liars will not exceed half the number of nodes in the system. In Arvazhi and Zhang model [1] witnesses are clustered based on the similarity of ratings, the correlation between the different criteria and the majority of the votes in the system. It is based on the assumption that majority of the witnesses are honest. By adding certified witnesses agent, the extension on Rosaci model provides a way control dishonest evaluations by contacting certified agents only.Therfore, we can say that Rosaci model provides a limited solution to handle inaccurate evaluations. While some of these techniques might seem promising, however, in our proposed model, a different more robust mechanism is used to identify lying witnesses and eliminate them from evaluation. Our model updates witnesses confidence according to evaluator own experience and experience of honest witnesses in the system. Moreover, unlike other models, the proposed model does not rely on the assumption that most witnesses are honest. Existing models in the literature either eliminate dishonest witnesses forever or provide a fast return for witnesses as soon as they start to act genuinely. Therefore, another important concept that distinguishes our proposed model from others, is giving witnesses a second chance to gradually build their confidence back after being eliminated. Witnesses will have to work hard to build their confidence and as a result, the case where a witness can maliciously manipulate the system by frequent change in the behavior is reduced. TRAVOS, Rosaci, Yu and Singh, CRM models and the proposed trust and reputation model use dynamic adaptive weights to combine the different evaluations. Moreover, in addition to our model, Jurca and Faltings model are the only ones that provide an incentive mechanism for sharing information. However, unlike Jurca and Faltings incentive mechanism that treats all witnesses in the same manner i.e. give the same amount of incentive to every honest witness, in our proposed model we differentiate between six different cases where witnesses are eligible for either an incentive or a penalty. Moreover, the amount of incentive or penalty given depends on the current status of the witness and the amount of improvement or degradation in its behavior. As discussed before the number of interactions is an important factor that reflects, the experience a witness has with the evaluatee. Rosaci trust model is the only model that takes the number of interactions into consideration. It uses the number of interactions to dynamically adjust witnesses weights. Moreover, FIRE, CRM, Arvazhi and Zhang, and BTRES models consider the freshness of evaluations and give recent evaluations higher weights. Earlier, we discussed the importance of combining the time and number of evaluations to provide a better judgment. 
Using time or number of evaluations on their own does not provide a solid evaluation process, as a result, in the proposed model we combine both the time and experience by defining frequency of interactions value σ. We measure the frequency of interactions σ by calculating the number of interactions that happened in a period of time. The proposed model, is the only model that combines both factors to provide a better evaluation process. Comparing the proposed model with the other models, we conclude that it provides a better evaluation process. It is the only one that uses frequency of interaction to combine evaluations. Its unique witnesses honesty evaluation process and “Second Chance” approach provide the system with the flexibility to adaptively cope with changes, overcoming limitations that previous models had. The “Incentives and Penalty” approach motivates the social network in the system by encouraging witnesses to share information honestly to gain Eunits and avoid being penal20

We would like to highlight that, although the compared related-work models and our proposed model are all designed for MASs, they can also be considered general purpose and applied to any system that offers different services and has interacting users.

6. Conclusions

In this paper we proposed a new adaptive trust and reputation model for MASs that is able to cope with changes in the behavior of witnesses in the system. It provides users with the means to assess service providers in order to decide whom to interact with. The model uses both direct experience and reputation values to make the evaluation, combining them with adaptive weights that depend on the frequency of interactions, the number of interactions, and the honesty of witnesses. It also assesses the honesty of witnesses after every interaction to filter out inaccurate evaluations. An "Incentives and Penalty" approach is developed to motivate witnesses to share truthful information: witnesses gain or lose Eunits according to their current behavior status and the amount of improvement or degradation in their honesty level. Another approach that distinguishes the proposed model from existing ones is the "Second Chance", which gives malicious lying witnesses the opportunity to enhance their behavior gradually in order to be reconsidered in the evaluation. By incorporating these approaches, the model adapts to dynamic changes in witnesses' behavior. A testbed was conducted to show how the model adapts to changes in witnesses' behavior, and a comparison framework was developed to evaluate the proposed model against other existing models in the literature.

Acknowledgment

The authors wish to acknowledge the Information and Communication Technology Fund (ICT Fund), UAE, for its continued support of educational development and research. Thanks are also due to Ms. Phyllis Burns for proofreading the paper.

References

[1] Aravazhi Irissappane, A., Zhang, J., 2015. Filtering unfair ratings from dishonest advisors in multi-criteria e-markets: a biclustering-based approach. Autonomous Agents and Multi-Agent Systems, 1–30. URL http://dx.doi.org/10.1007/s10458-015-9314-4
[2] Bariah, L., Shehada, D., Salahat, E., Yeun, C. Y., Sept 2015. Recent advances in VANET security: a survey. In: 2015 IEEE 82nd Vehicular Technology Conference (VTC2015-Fall). pp. 1–7.
[3] Basheer, G. S., Ahmad, M. S., Tang, A. Y., Graf, S., 2015. Certainty, Trust and Evidence: Towards an Integrative Model of Confidence in Multi-agent Systems. Computers in Human Behavior 45, 307–315.
[4] Bijani, S., Robertson, D., Aspinall, D., 2018. Secure information sharing in social agent interactions using information flow analysis 70, 52–66.
[5] Buccafurri, F., Comi, A., Lax, G., Rosaci, D., Jan 2016. Experimenting with certified reputation in a competitive multi-agent scenario. IEEE Intelligent Systems 31 (1), 48–55.
[6] Buchegger, S., Le Boudec, J.-Y., 2002. Performance Analysis of the CONFIDANT Protocol. In: Proceedings of the 3rd ACM International Symposium on Mobile Ad Hoc Networking & Computing. MobiHoc '02. ACM, New York, NY, USA, pp. 226–236. URL http://doi.acm.org/10.1145/513800.513828
[7] Buchegger, S., Le Boudec, J.-Y., 2004. A Robust Reputation System for Peer-To-Peer and Mobile Ad-Hoc Networks. In: P2PEcon 2004. No. LCA-CONF-2004-009.

[8] DeMeo, P., Messina, F., Rosaci, D., Sarné, G. M. L., Dec. 2015. An agent-oriented, trust-aware approach to improve the QoS in dynamic grid federations. Concurr. Comput.: Pract. Exper. 27 (17), 5411–5435. URL https://doi.org/10.1002/cpe.3604
[9] Fang, W., Zhang, C., Shi, Z., Zhao, Q., Shan, L., 2016. BTRES: Beta-based trust and reputation evaluation system for wireless sensor networks. Journal of Network and Computer Applications 59, 88–94. URL http://www.sciencedirect.com/science/article/pii/S108480451500140X
[10] Geetha, G., Jayakumar, C., 2011. Data Security in Free Roaming Mobile Agents. In: Wyld, D., Wozniak, M., Chaki, N., Meghanathan, N., Nagamalai, D. (Eds.), Advances in Network Security and Applications. Vol. 196 of CCIS. Springer Berlin Heidelberg, pp. 472–482.
[11] Gu, T., Pung, H. K., Zhang, D. Q., 2005. A service-oriented middleware for building context-aware services. Journal of Network and Computer Applications 28 (1), 1–18. URL http://www.sciencedirect.com/science/article/pii/S1084804504000451
[12] Guinnane, T. W., 2005. Trust: A Concept Too Many. Jahrbuch für Wirtschaftsgeschichte/Economic History Yearbook 46 (1), 77–92.
[13] Han, K., Mun, H., Shon, T., Yeun, C. Y., Park, J. J., 2012. Secure and Efficient Public Key Management in Next Generation Mobile Networks. Personal and Ubiquitous Computing 16 (6), 677–685.
[14] Han, K., Yeun, C. Y., Shon, T., Park, J., Kim, K., 2011. A Scalable and Efficient Key Escrow Model for Lawful Interception of IDBC-based Secure Communication. International Journal of Communication Systems 24 (4), 461–472.
[15] Huynh, T. D., Jennings, N. R., Shadbolt, N. R., 2006. An Integrated Trust and Reputation Model for Open Multiagent Systems. Autonomous Agents and Multi-Agent Systems 13 (2), 119–154.
[16] Irissappane, A. A., Jiang, S., Zhang, J., 2014. A biclustering-based approach to filter dishonest advisors in multi-criteria e-marketplaces. In: Proceedings of the 2014 International Conference on Autonomous Agents and Multiagent Systems. AAMAS '14. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, pp. 1385–1386. URL http://dl.acm.org/citation.cfm?id=2617388.2617485
[17] Jung, Y., Kim, M., Masoumzadeh, A., Joshi, J. B., 2012. A Survey of Security Issues in Multi-agent Systems. Artificial Intelligence Review 37 (3), 239–260.
[18] Jurca, R., Faltings, B., 2003. An Incentive Compatible Reputation Mechanism. In: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems. AAMAS '03. ACM, New York, NY, USA, pp. 1026–1027. URL http://doi.acm.org/10.1145/860575.860778
[19] Khosravifar, B., Bentahar, J., Gomrokchi, M., Alam, R., 2012. CRM: An Efficient Trust and Reputation Model for Agent Computing. Knowledge-Based Systems 30, 1–16.
[20] Lin, C., Varadharajan, V., 2010. MobileTrust: a Trust Enhanced Security Architecture for Mobile Agent Systems. International Journal of Information Security 9 (3), 153–178.
[21] Noorian, Z., Ulieru, M., 2010. The State of the Art in Trust and Reputation Systems: A Framework for Comparison. Journal of Theoretical and Applied Electronic Commerce Research 5 (2), 97–117.
[22] Pinyol, I., Sabater-Mir, J., Dellunde, P., Paolucci, M., 2012. Reputation-based Decisions for Logic-based Cognitive Agents. Autonomous Agents and Multi-Agent Systems 24 (1), 175–216.
[23] Rosaci, D., 2012. Trust Measures for Competitive Agents. Knowledge-Based Systems 28, 38–46.
[24] Sabater, J., Sierra, C., Dec. 2001. Social ReGreT, a Reputation Model Based on Social Relations. SIGecom Exch. 3 (1), 44–56. URL http://doi.acm.org/10.1145/844331.844337
[25] Sadaoui, S., Wang, X., 2016. A dynamic stage-based fraud monitoring framework of multiple live auctions, 1–17.
[26] Shehada, D., Yeun, C. Y., Jamal Zemerly, M., Al-Qutayri, M., Hammadi, Y. A., 2018. Secure Mobile Agent Protocol for Vehicular Communication Systems in Smart Cities. Springer Singapore, Singapore, pp. 251–271. URL https://doi.org/10.1007/978-981-10-1741-4_17
[27] Shehada, D., Yeun, C. Y., Zemerly, M. J., Al-Qutayri, M., Al Hammadi, Y., 2015. A secure mobile agent protocol for vehicular communication systems. In: Innovations in Information Technology (IIT), 2015 11th International Conference on. IEEE, pp. 92–97.
[28] Shehada, D., Yeun, C. Y., Zemerly, M. J., Al Qutayri, M., Al Hammadi, Y., Damiani, E., Hu, J., 2017. BROSMAP: A Novel Broadcast Based Secure Mobile Agent Protocol for Distributed Service Applications. Security and Communication Networks 2017.
[29] Subashini, S., Kavitha, V., 2011. A survey on security issues in service delivery models of cloud computing. Journal of Network and Computer Applications 34 (1), 1–11. URL http://www.sciencedirect.com/science/article/pii/S1084804510001281
[30] Sun, D., Chang, G., Sun, L., Wang, X., 2011. Surveying and analyzing security, privacy and trust issues in cloud computing environments. Procedia Engineering 15, 2852–2856, CEIS 2011. URL http://www.sciencedirect.com/science/article/pii/S1877705811020388

[31] Teacy, W., Patel, J., Jennings, N., Luck, M., 2006. TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources. Autonomous Agents and Multi-Agent Systems 12 (2), 183–198. URL http://dx.doi.org/10.1007/s10458-006-5952-x
[32] Third Generation Partnership Project (3GPP), 2010. TS 33.221 v10.0.0: Generic Authentication Architecture (GAA); Support for Subscriber Certificates.
[33] Tu, C.-H., 2000. On-line learning migration: from social learning theory to social presence theory in a CMC environment. Journal of Network and Computer Applications 23 (1), 27–37. URL http://www.sciencedirect.com/science/article/pii/S1084804599900991
[34] Wang, D., Muller, T., Liu, Y., Zhang, J., Sept 2014. Towards robust and effective trust management for security: a survey. In: Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on. pp. 511–518.
[35] Yeun, C. Y., 1999. Digital Signature with Message Recovery and Authenticated Encryption (Signcryption) - A Comparison. In: Walker, M. (Ed.), Cryptography and Coding. Vol. 1746 of LNCS. Springer Berlin Heidelberg, pp. 307–312. URL http://dx.doi.org/10.1007/3-540-46665-7_35
[36] Yeun, C. Y., Han, K., Vo, D. L., Kim, K., 2008. Secure authenticated group key agreement protocol in the MANET environment. Information Security Technical Report 13 (3), 158–164.
[37] Yu, B., Singh, M. P., 2002. An Evidential Model of Distributed Reputation Management. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press, pp. 294–301.
[38] Zuo, Y., Liu, J., 2017. A reputation-based model for mobile agent migration for information search and retrieval. International Journal of Information Management 37 (5), 357–366. URL http://www.sciencedirect.com/science/article/pii/S0268401216304947


Dina Shehada received her MSc. by Research in Engineering degree from the Department of Electrical and Computer Engineering at Khalifa University in 2016. Earlier, she received her BSc. in Computer Engineering from the University of Sharjah in 2013. In 2013, she joined Khalifa University as a Research Assistant and worked on Android mobile application development and signal processing. Currently, she works as a Research Associate in the Department of Electrical and Computer Engineering and is an active member of the Information Security Research Center (ISRC) at Khalifa University. Her research interests include mobile agents' security, trust evaluation in social networks, formal verification methods, network and information security, and signal and image processing.


Chan Yeob Yeun received his MSc. and Ph.D. in Information Security from Royal Holloway, University of London, in 1996 and 2000, respectively. After that, he joined Toshiba TRL in Bristol. He then became a Vice President at the LG Electronics Mobile Handset R&D Center in 2005, where he was responsible for developing mobile TV technologies and their security. He left LG Electronics in 2007, joined KAIST-ICC, Korea, until August 2008, and then moved to Khalifa University. He is currently an Associate Professor in the Electrical and Computer Engineering Department and an active member of the Information Security Research Center. He currently enjoys lecturing MSc. Information Security courses at Khalifa University. He has published 31 journal papers, 75 conference papers, 2 book chapters and 10 international patent applications. He also serves on the editorial boards of several international journals, is a steering committee member of the ICITST conference series, and is a senior member of the IEEE.


Dr. Mohamed Jamal Zemerly obtained his M.Sc. and Ph.D. in 1986 and 1989 from University College Cardiff, Wales, and The University of Birmingham, England, respectively. Since then, he has worked at various UK universities, such as UCL, Warwick and Westminster, before moving to Khalifa University in the summer of 2000, where he is currently an Associate Professor in the Electrical and Computer Engineering Department. He was the Computer Engineering Program Chair from Jan 2010 to Jan 2015. Dr. Zemerly's research interests are in Ubiquitous Computing, Augmented Reality, Image Processing and Computer Vision, Context Aware Mobile Systems and Information Security. Dr. Zemerly has published more than 100 journal and conference papers as well as 8 book chapters. He is also the co-editor-in-chief of the IJRFIDSC journal, Infonomics Society, was a co-program chair of the ICITST conference in 2011 and 2012, and a co-chair of the same conference for 2015-2016. Dr. Zemerly is also a senior member of the IEEE.

Dr. Yousof Al-Hammadi is currently the Director of Graduate Studies and an Assistant Professor at the Electrical & Computer Engineering Department, Khalifa University of Science, Technology & Research (KUSTAR), Abu Dhabi, United Arab Emirates. He received his Bachelor's degree in Computer Engineering from KUSTAR (previously known as Etisalat College of Engineering), UAE, in 2000, his MSc degree in Telecommunications Engineering from the University of Melbourne, Australia, in 2003, and his PhD degree in Computer Science and Information Technology from the University of Nottingham, UK, in 2009. His main research interests are in the area of information security, which includes intrusion detection, botnet/bot detection, virus/worm detection, artificial immune systems, machine learning, RFID security and mobile security.


Prof. Mahmoud Al-Qutayri is currently an Associate Dean of Graduate Studies at the Electrical & Computer Engineering Department, Khalifa University of Science, Technology & Research (KUSTAR), Abu Dhabi, United Arab Emirates. He received his BSc. in Electrical Engineering from Concordia University, Canada, in 1984, MSc. in Electronic Engineering from the University of Manchester, UK, in 1987, and PhD in Electronic Engineering from the University of Bath, UK, in 1992. His main research interests are in the areas of reconfigurable system design and applications, design and test of mixed-signal integrated circuits, wireless sensor networks and applications, and embedded systems design and security, as well as distributed computing and mobile agents.


Jiankun Hu is a Professor at the School of Engineering and IT, University of New South Wales, Canberra (also called the Australian Defence Force Academy), Australia. He received the B.E. degree from Hunan University, Changsha, China, in 1983, the Ph.D. degree in control engineering from the Harbin Institute of Technology, Harbin, China, in 1993, and the master's degree in computer science and software engineering from Monash University, Clayton, VIC, Australia, in 2000. He was with Ruhr University, Bochum, Germany, as a prestigious German Alexander von Humboldt Fellow from 1995 to 1996, a Research Fellow with the Delft University of the Netherlands, Delft, The Netherlands, from 1997 to 1998, and a Research Fellow with the University of Melbourne, Parkville, VIC, Australia, from 1998 to 1999. His main research interest is in the field of cyber security, including biometrics security, where he has authored many papers in high-quality conferences and journals, including the IEEE Transactions on Pattern Analysis and Machine Intelligence. He has served on the editorial boards of seven international journals, including the IEEE Transactions on Information Forensics and Security, and served as Security Symposium Chair of the IEEE flagship conferences ICC and GLOBECOM. He has received seven Australian Research Council (ARC) grants and served on the prestigious Panel of Mathematics, Information and Computing Sciences of the ARC Excellence in Research for Australia Evaluation Committee.