Accepted Manuscript
Incentives-based preferences and mobility-aware task assignment in participatory sensing systems
Rim Ben Messaoud, Yacine Ghamri-Doudane, Dmitri Botvich
PII: S0140-3664(17)30366-3
DOI: 10.1016/j.comcom.2017.10.015
Reference: COMCOM 5589
To appear in: Computer Communications
Received date: 26 March 2017
Revised date: 10 October 2017
Accepted date: 17 October 2017
Please cite this article as: Rim Ben Messaoud, Yacine Ghamri-Doudane, Dmitri Botvich, Incentives-based preferences and mobility-aware task assignment in participatory sensing systems, Computer Communications (2017), doi: 10.1016/j.comcom.2017.10.015
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Incentives-based preferences and mobility-aware task assignment in participatory sensing systems
Rim Ben Messaoud (a,b), Yacine Ghamri-Doudane (b), Dmitri Botvich (c)
(a) Université Paris-Est, 5 Boulevard Descartes, 77420 Champs-sur-Marne, France
(b) University of La Rochelle, 23 Avenue Albert Einstein, 17000 La Rochelle, France
(c) TSSG, Waterford Institute of Technology, Waterford, Ireland
Abstract
Participatory Sensing (PS) systems rely essentially on users' willingness to dedicate their devices' resources (energy, processing time, etc.) to contribute high-quality data about various phenomena. In this paper, we study the critical issue of participant recruitment in PS systems with the aim of minimizing the overall sensing time. First, we design the users' arrival and acceptance/rejection models. Then, we introduce two variants of task assignment mechanisms: without and with incentives. In the former, we enhance our previously proposed scheme, P-MATA, for preferences and mobility-aware task assignment, by introducing a logit regression-based preferences model. Thus, we estimate the users' acceptance probabilities as a function of the number and loads of sensing tasks. In the second variant, we incorporate rewards as a third attribute of the assignment scheme and propose two different incentivizing policies to study their impact on enhancing users' acceptance. Incentives are either task priority-based or data quality-based. All proposed algorithms adopt a greedy selection strategy and address the minimization of the average makespan of all sensing tasks. We conduct an extensive performance evaluation based on real traces while varying the number of tasks and their associated workloads. Results show that incentivizing participants intensifies their commitment, yielding a lower average makespan and a higher number of delegated tasks. Moreover, the quality-based incentivizing mechanism achieves the best performance while minimizing the dedicated budget.

* Corresponding author. Tel.: +33 0546458760
Email addresses: [email protected] (Rim Ben Messaoud), [email protected] (Yacine Ghamri-Doudane), [email protected] (Dmitri Botvich)
Preprint submitted to Journal of LaTeX Templates, October 20, 2017
Keywords: Participatory Sensing; task assignment; incentives mechanisms; preferences-aware.
1. Introduction
The proliferation of smart sensor-equipped devices carried by the crowd enhances a new paradigm of data collection and sharing defined as Mobile Crowd-Sensing (MCS) [1]. This emergent sensing paradigm is conducted either in an opportunistic or a participatory way [1–3], where users are, respectively, unaware of or actively involved in the sensing process. So far, various applications have been developed for different purposes, such as urban traffic [4, 5], environment monitoring [6, 7], and health care [3], to name a few.
An open issue in mobile crowdsensing is how to recruit participants to collect sensor data, especially for participatory tasks. This paradigm undoubtedly consumes the physical resources of users' devices, such as energy, computing, and storage [2]. Moreover, participants dedicate their time and even human intelligence to perform sensing tasks. This can be done voluntarily or in exchange for rewarding mechanisms, called "incentives" [8, 9]. Nevertheless, adequate task assignment algorithms are highly required to ensure participants' commitment and achieve an efficient sensing process. In this context, several works have been introduced with the aim of optimizing the overall sensing process in terms of energy resources [10], data quality [11, 12], or dedicated rewards [13, 14].
However, most of the presented assignment schemes follow a centralized approach, where a central unit is responsible for designating participants to sense based on their historical characteristics. Consequently, users may receive time-overlapping task assignments from different crowdsensing platforms, which may exceed their processing capabilities. In addition, sensing tasks may or may not be associated with rewards, which may encourage users to participate in some kind
Figure 1: The hybrid assignment scenario
of tasks and ignore others. This induces a loss of tasks and low data quality, especially in terms of temporal accuracy. Furthermore, current incentives mechanisms [8, 9, 13–20] are usually determined "a priori" and not adapted to the progress of the assignment process in terms of users' resources or data samples' quality, which leaves users unwilling to participate in crowdsensing.
To address these issues, we consider, in this paper, an additional distributed
crowdsensing assignment phase, as introduced in our previous work [21]. This requires the MCS platform to identify users, denoted as "requesters", to carry a higher number of tasks to be delegated to encountered participants. The latter conduct their assigned tasks, e.g., collecting location-tagged temperature measurements, then upload the collected data via cellular or Wi-Fi networks depending on data time-sensitivity, as illustrated in Figure 1. Furthermore, we introduce two variants of assignment schemes: without and with incentives. This is to investigate the impact of rewards and resources on the commitment level of participants. Particularly, we aim to optimize the overall
process by minimizing the average makespan of all sensing campaigns. For both
variants, we propose greedy-based offline and online algorithms, as advocated in Sections 4 and 5. More specifically, our major contributions are as follows:
1. We develop a new preferences model based on the logit regression model [22] to elaborate the dependency of users' acceptance of a certain assignment on a utility function measured by different attributes.
2. We adopt this new preferences model to extend our previous assignment policy [21] as the no-incentives based solution, P-MATA+, where the choice model depends mainly on the current workload of users.
3. We advocate an incentives-based assignment variant, IP-MATA+, by dedicating a budget, B, to enhance participants' contributions.
4. We investigate two different incentives policies: task priority-based and data quality-based. The former accounts for task heterogeneity, while the latter rewards users as a function of the quality of their contributions.
5. We evaluate our assignment strategies with real-trace based simulations while varying the requesters' selection policies, the number of tasks to be assigned, and their associated workloads, and we show that our schemes perform well in terms of assigned tasks and overall makespan.
The rest of this paper is organized as follows. In Section 2, we review prior work on distributed crowdsensing and incentivizing mechanisms. In Section 3, we present the system overview in terms of users' arrival and preferences models and formulate our problem statement. Sections 4 and 5 present the two developed variants of assignment: P-MATA+ and IP-MATA+. In Section 6, we describe our simulation settings, then evaluate the performance of our assignment variants while discussing different possible scenarios. Conclusions and future work are presented in Section 7.
2. Related Work

2.1. Distributed Participatory Sensing

By far, several assignment schemes have been proposed in participatory sensing. Yet, few works have tackled this issue in a distributed approach.
For example, Cheung et al. [13] introduced an asynchronous and distributed task selection (ADTS) algorithm to help users plan their task selection on their own. Accordingly, participants designate, in a non-cooperative game, the paths that maximize their profit. Nevertheless, the authors did not investigate the processing time of tasks. Note that users, even when rewarded for their contributions, may be reluctant to perform long sensing campaigns that use up their devices' batteries. In this context, Xiao et al. [23] studied the task assignment problem in Mobile Social Networks (MSNs) with the aim of minimizing the average sensing and processing time. The authors estimated users' encounters based on their historical traces. In light of this, data queriers can recruit participants to perform sensing tasks. The proposed methods are formulated as offline (FTA) and online (NTA) assignment strategies. However, the proposed algorithms consider only a time-dependent crowdsensing scheme, while in fact the location of the collected samples matters as much. Hence, an assignment scheme should be based on users' mobility across the different locations they visit rather than estimating only their meetings.
However, these works did not consider the issue of participants' sensing preferences, i.e., their ability to accept or reject the assignment strategy. Such ability has been discussed very recently by the authors of [14] in order to select the workers who maximize an expected sum of service quality. The proposed framework, Crowdlet, is based on dynamic programming and enables distributed, self-organized mobile crowdsourcing. Yet, the time cost of conducting such quality-aware sensing was not investigated. From this perspective, we introduced in a previous work [21] a comparable assignment scheme that takes into account both users' mobility and sensing preferences, with the aim of minimizing the overall time of sensing tasks. Nevertheless, the estimation of participants' acceptance was limited to their previous behavior and not updated with respect to the current assignment characteristics, such as the type of tasks or the associated workload.
2.2. Incentives in Participatory Sensing

Several incentives mechanisms [13–20] have been introduced for participatory sensing. Zhang et al. [8] and Jaimes et al. [9] surveyed these mechanisms and distinguished monetary and non-monetary incentives. Yet, monetary rewards are claimed to be the main intuitive form of incentives [9].
Yang et al. [24, 25] considered a platform-centric incentive model, where the reward is proportionally shared by participants in a Stackelberg game, and a user-centric incentive model, where participants bid for tasks in an auction and get paid no less than their submitted bids. Following this approach, a kind of auction named reverse auction, used in the negotiation phase between the MCS platform and participants, was also widely developed in the literature [16, 19, 26]. Lee and Hoh [16] designed a Reverse Auction based Dynamic Price incentives mechanism with virtual participation credit (RADP-VPC) that aims at minimizing the MCS platform cost. The idea of these works is to select among bidders the set that maximizes the social welfare. However, participants
who set low bids are usually those with low-quality contributions, which may result in an inaccurate data sensing process.
In contrast, Koutsopoulos [26] introduced a quality-aware incentive mechanism based on a Vickrey-Clarke-Groves reverse auction, where the platform estimates the users' participation level, i.e., data quality, based on their posted costs and then selects those who minimize the overall payment. Similarly, the authors of [17–20] developed quality-aware incentivizing mechanisms. Jin et al. [17, 18] introduced QoI as a metric into the design of reverse auction mechanisms
for MCS systems while also considering privacy-preserving mechanisms. Other works introduced rewards as a function of data quality [19, 20], i.e., the platform publishes tasks and offers rewards to users based on the quality of their contributions.
However, the above incentives mechanisms are usually implemented in the central MCS platform, which requires global knowledge of participants' bids and potential data quality and results in an important communication overhead. Thus, we propose in this work to offer incentives among participants in a distributed way, as adopted by the authors of [13, 14]. Notably, we introduce
rewards as an attribute in a preferences model which estimates participants' preferences towards a proposed assignment. A more detailed description of this model is given in the next section.

3. System model and Problem formulation
In this section, we first give the necessary preliminaries that describe the crowdsensing system we consider in this work. Accordingly, we formulate the problem and state our design objectives.
3.1. System overview
We recall the hybrid MCS scenario, illustrated in Figure 1, which is split into two phases: a centralized and a distributed one. In the first phase, the MCS platform proceeds with a centralized task allocation, then designates some participants to continue assigning tasks in a distributed fashion. In this work, we are interested in the latter phase as a dynamic Participatory Sensing (PS) paradigm.
Hence, we consider N mobile users in a crowdsensing area divided into subregions, denoted as compounds C = {k, k ∈ 1..nC }. These compounds have different characteristics, and thereby various user mobility behaviors. For instance, an eating area is rather dense during meal times with very low mobility, whereas a shopping area is both crowded and dynamic. Accordingly, users move with various speed values between different compounds and can be, at a given time, in a compound k with probability qk . For simplicity, let
R = {r1 , r2 , . . . , rnr } be the set of requesters, i.e., the participants responsible for the distributed assignment, and P = {p1 , p2 , . . . , pnp } be the set of regular participants, with nr + np ≤ N . A requester ri carries m sensing tasks to be assigned to participants encountered in the same compound k. To do so, let S = {s1 , s2 , . . . , sm } be the set of sensing tasks, which is heterogeneous in terms of associated workloads, {τ1 , τ2 , . . . , τm }, and type. Thus, we associate different tasks with different weights, {α1 , α2 , . . . , αm }, depending on the task type, i.e., the sensing application. Without loss of generality, we assume
Table 1: Table of Symbols

Symbol : Description
N : Number of users
R = {r1 , r2 , . . . , rnr } : Set of requesters
P = {p1 , p2 , . . . , pnp } : Set of participants
C = {k, k ∈ 1..nC } : Set of compounds
S = {s1 , s2 , . . . , sm } : Set of sensing tasks
αl : Weight of the sensing task sl
τl : Load of the sensing task sl
λri ,pj : Contact rate between a requester and a participant
Ai,j,k : Mean inter-meeting time between a requester and a participant
qk : Probability of a user being in a compound k
pa,i (sj ) : Acceptance probability of a task sj by a user i
M (sl ) : Average sensing and processing time of the task sl
Γ = {γ1 , γ2 , . . . , γn } : Assignment strategy of a requester ri
that the inter-meeting time of a requester with a user is sufficient for exchanging tasks. In order to better estimate this time, we investigate hereafter the arrival model of users.
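To make the system model concrete, the entities above can be sketched as plain data structures. This is a minimal illustration, not part of the authors' framework; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SensingTask:
    """A sensing task s_l with its workload tau_l and type-dependent weight alpha_l."""
    task_id: int
    load: float      # tau_l, the processing workload
    weight: float    # alpha_l, depends on the task type (sensing application)

@dataclass
class Participant:
    """A regular participant p_j; q_k maps each compound k to the probability
    of the user being there at a given time."""
    user_id: int
    q_k: dict
    accepted: list = field(default_factory=list)  # tasks accepted so far

@dataclass
class Requester(Participant):
    """A requester r_i additionally carries m tasks to delegate on encounter."""
    tasks: list = field(default_factory=list)

# A toy instance: two compounds, one requester carrying two tasks.
tasks = [SensingTask(1, load=2.0, weight=1.0), SensingTask(2, load=5.0, weight=0.5)]
r1 = Requester(user_id=0, q_k={"eating": 0.7, "shopping": 0.3}, tasks=tasks)
p1 = Participant(user_id=1, q_k={"eating": 0.4, "shopping": 0.6})
```

The q_k vectors are exactly the per-compound presence probabilities used below in the arrival model.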
3.1.1. Users’ arrival model
Task-oriented participatory sensing depends critically on users' mobility. Thus, we base our work on a widely-used mobility model in Mobile Social Networks (MSNs) [27, 28]. We assume that the inter-meeting time between a requester ri and a participant pj follows an exponential distribution with a contact rate parameter λri ,pj . As a result, the inter-encounter time of a requester with two consecutive participants also follows an exponential distribution with parameter λri = \sum_{pj ∈P} λri ,pj , i.e., the arrival of participants to
a requester follows a Poisson process. Moreover, we suppose that λri ,pj can be derived from the historical encounters between a requester and each participant per time unit as stated in [14, 23]. For simplicity, we consider the fact that
users can meet only while being in the same compound. Hence, we examine the probability of each user being in a certain compound k and we model the inter-meeting time, denoted as Ai,j,k , by an exponential distribution with rate parameter F(ri ,pj ,k) = qk (ri )qk (pj )λri ,pj , as follows:

A_{i,j,k} = \int_{0}^{\infty} F_{(r_i,p_j,k)}\, t\, e^{-F_{(r_i,p_j,k)}\, t}\, dt = \frac{1}{F_{(r_i,p_j,k)}} = \frac{1}{q_k(r_i)\, q_k(p_j)\, \lambda_{r_i,p_j}}    (1)
where qk (ri ) and qk (pj ) are the probabilities of requester ri and participant pj being in compound k, respectively.

3.1.2. Users' preferences model
The aforementioned Poisson process models the arrival of different participants to a requester but does not take into account their preferences, as they can accept or reject the proposed assignment. Given such ability, we need to estimate the mean time to meet a participant who will accept the proposed assignment. This time can be defined differently based on the adopted variant of the preferences model: without or with incentives. Accordingly, we distinguish two types of potentially met participants as follows.
Definition 1. A "volunteer-positive" participant is a potential encountered user who is estimated to accept the proposed assignment with no perceived rewards.

Definition 2. A "positive" participant is a potential encountered user who requires rewards for performing sensing campaigns.
The aforementioned participants are capable of selecting among their assign-
ment the tasks they like to perform. They can reject also tasks if they estimate their current workload exceeds their devices processing capacities. Respectively, we detail hereafter the new proposed user preferences model.
9
ACCEPTED MANUSCRIPT
Discrete choice model. Different from our previous work [21], we do not assume here knowing users acceptance probabilities, pa,j , from their historical records. Thus, we present a Discrete Choice Model to characterize on which
CR IP T
basis participants can select tasks among the proposed assignment. This model has been proposed first by Faridani et al. [29] to estimate workers selection of different tasks according to a perceived utility. Naturally, users target to maximize their utilities based on their perceptions to the tasks attributes such
as the hourly load, reward or type. Under this model, we design the user acceptance probability, pa,j (sl ), as the probability that the utility of the current
AN US
task sl exceeds all other assigned tasks utilities.
pa,j (sl ) = P r(Ul > max Ui ). i6=l
(2)
The utility, Ul , perceived when performing a sensing task can be expressed under the Conditional Logit Model [90,91] [22, 29] as follows:
(3)
M
Ul = βl zl + l .
It is worth noting that zl designate all observable attributes of task sl and l
ED
accounts for non observable ones. In this model [29], a task utility is assumed to be linearly correlated with all observed attributes by a shared coefficient vector β,i.e., each attribute zl is associated with a coefficient βl . While parameters l
PT
are assumed to be independent from each other and follow the Gumbel distribution [30]. Based on such assumptions, we derive the expression of a task sl
AC
CE
acceptance probability as a Multinomial Logit Distribution [22]:
180
exp(βzl ) . pa,j (sl ) = Pm i=1 exp(βzi )
(4)
In this work, we set positive coefficients βl for desirable task attributes such
as reward or type and negative ones for undesirable attributes like task workload and the number of already accepted tasks. This is to highlight the attractiveness of the first two attributes since they maximize the utility of a task. On the contrary, the last attributes are associated with negative coefficients to model the burden of carrying heavy workload for a participant. 10
ACCEPTED MANUSCRIPT
185
Probabilistic inter-meeting time. Based on the predefined probability pa,j (sl ), the task acceptance can be modeled as a Bernoulli process. That is, for each participant pj , the set of answers are associated with a random variable X in {0, 1},
CR IP T
where X = 1 with probability pa,j and X = 0 with probability pr,j = 1 − pa,j . Accordingly, for a requester encountering np participants, the number of users 190
who have accepted to participate can be generalized to a Binomial distribution,
B(np , pa,j ). Thus, the arrival model becomes a composition of the Poisson process and the binomial distribution which leads to a Poisson distribution with parameter (Θk = pa,j × F(ri ,pj ,k) ) [31]. With all this in mind, we proceed to 195
in a compound k as follows:
AN US
compute the inter-meeting time between any requester ri and a participant pj
Lemma 1. The mean time, for a requester ri , till meeting a “positive” or a “volunteer-positive” participant, pj , within n time slots is: Πi,j,k =
n hX
plr,j pa,j
M
l=1
i0 1 . qk (ri )qk (pj )λri ,pj
Proof 1. We assume that during the assignment phase, a requester ri is likely to meet any participant pj more than once. However, the latter accepts the as-
ED
signed tasks with pa,j . Suppose that a participant accepted tasks within n meetings, thus, the mean time of meetings this “positive” participant can be expressed 200
as: pa,j Ai,j,k + pr,j pa,j Ai,j,k × 2 + . . . + pn−1 r,j pa,j Ai,j,k × n = pa,j Ai,j,k (1 + 2pr,j +
PT
3p2r,j + . . . + npn−1 r,j ). In this expression, each term of the sum can be a derivative
CE
of plr,j with l ∈ {1, . . . , n}. Hence, we denote by [.]0 the derivative operator and h i0 hP i0 n l model Πi,j,k as pa,j Ai,j,k pr,j + p2r,j + . . . + pnr,j = pa,j Ai,j,k l=1 pr,j . Fi-
nally, we substitute Ai,j,k with its corresponding expression from Equation (1) to obtain the expression of Πi,j,k .
AC
205
3.2. Problem Formulation In this section, we recall the predefined mean time of meeting a “positive”
or a “volunteer-positive” participant of Lemma (1) in order to formulate the average makespan expression, our objective function to be minimized. Then,
210
we detail it for different task assignment variants. 11
ACCEPTED MANUSCRIPT
3.2.1. Average Makespan expression We consider the above scenario of a requester ri carrying m sensing tasks to be assigned to encountered participants. Hence, we define as makespan of
CR IP T
a task sl ∈ S the time of being assigned and processed and we denote it as M (sl ). Note that we exclude here the reporting phase since we assume it is
instantaneous. Thus, for every assignment policy Γ, we compute the average makespan of all assigned tasks to different participants [23] by: 1 X AM (Γ) = M (sl )|Γ . m sl ∈S
(5)
AN US
Note that a task assignment strategy Γ is a set of assignments per encoun-
tered participant, i.e., Γ = {γ1 , γ2 , . . . , γn }, with n ≤ np is the number of encountered participants by a requester. Besides, a task is assigned only to one 215
participant, i.e., γi ∩ γj = ∅ ∀pi , pj ∈ P . Finally, if a participant pj ∈ P has not received or accepted any task for a given period of assignment from encountered requesters, then γj = ∅.
M
In the following, we advocate how to minimize the average makespan of all tasks expressed in Equation (5) for each requester in two variants of scenarios: without and with incentives.
ED
220
3.2.2. No-incentives based Assignment First, we suppose that all participants are willing to perform sensing cam-
PT
paigns with no perceived rewards as presented by Definition 1. However, participants can accept or reject their assignment. This depends on their estimation of
CE
a task utility in terms of its associated workload, τl , and the number of already accepted tasks, nacc . Accordingly, we derive a “volunteer-positive” participant
AC
acceptance probability, from Equation (4), as follows. exp(β1 τl + β2 nacc ) pa,j (sl ) = Pm i=1 exp(β1 τi + β2 nacc )
(6)
where β1 and β2 are negative coefficients associated with no desirable attributes. We exploit this new preferences model to extend our previous work [21],
as the no-incentives based variant, that we denote as P-MATA+ , and we in225
vestigate it in two different assignment modes: offline and online. The former 12
ACCEPTED MANUSCRIPT
mode indicates that a requester, and based on the expected arrival time of a “volunteer-positive” participant, decides his assignment strategy that we will denote in the rest of this paper as ΓP F . In the online mode, a requester starts
230
CR IP T
the assignment process only when encountering a participant. If the encountered user is among the selected ones, i.e., γi 6= ∅, he receives his assignment. Otherwise, the requester relaunches his assignment process in a next encounter. This latter strategy results on a dynamic assignment denoted as ΓP N . 3.2.3. Incentives-based Assignment
AN US
As a second variant of assignment schemes, we introduce incentivizing rewards in order to study their impact on users’ commitment to participatory sensing campaigns. Therefore, we incorporate rewards as a third attribute in the formulation of a task acceptance probability. The task reward, denoted as
Rl , is a desirable attribute thereby associated with a positive coefficient, β3 > 0. The corresponding acceptance probability is then expressed as follows:
M
exp(β1 τl + β2 nacc + β3 Rl ) pa,j (sl ) = Pm i=1 exp(β1 τi + β2 nacc + β3 Ri )
(7)
We define this variant as the Incentives-based Preferences and Mobilityaware Task Assignment, IP-MATA+ . As for the first variant, we develop
ED
235
here also online and offline assignment strategies that we denote by ΓIP N and
PT
ΓIP F , respectively. It is worth noting that we propose two different incentivizing policies throughout this work; task priority-based and data quality-based. We
CE
study such policies in Section 5 in order to identify the most efficient one.
240
4. Extended Preferences and Mobility-Aware Task Assignment (P-
AC
MATA+ ) In this section, we present the no-incentives based assignment scheme, P-
MATA+ . This scheme refers mainly to the introduced discrete choice model expressed in Equation (4) to tackle users’ preferences in terms of assigned tasks
245
workloads and number. Accordingly, P-MATA+ foresees “volunteer-positive” participants encountering mean time as detailed in Lemma (1) and decides to 13
whom to delegate tasks. Similar to our preceding work [21], we investigate this assignment variant considering two operation modes: offline and online.
CR IP T
4.1. Offline Mode (P-MATAF+ ) The first mode, named as P-MATAF+ , suggests that each requester needs
250
to run the assignment phase only in the beginning of a sensing period. The
resulted assignment, denoted as ΓP F + , is to be delegated to selected participants when encountering them. Each requester estimates the next time slot in which he will meet a “volunteer-positive” participant based on the computed
acceptance probabilities, pa,j ; ∀pj ∈ P , derived from the no-incentives prefer-
AN US
255
ences model of Equation (6). Accordingly, the average makespan is determined as the sum of workloads to each participant plus the elapsed time before the first acceptance as detailed in the following Theorem.
Theorem 1. The average makespan of m tasks in an offline preferences and mobility-aware task assignment strategy, AM (ΓP F + ), is expressed as follows:

AM(\Gamma_{PF+}) = \frac{1}{m} \sum_{j=1}^{n} \Big( \Big[ \sum_{l=1}^{t} p_{r,j}^{l}\, p_{a,j} \Big]' \frac{1}{q_k(r_i)\, q_k(p_j)\, \lambda_{r_i,p_j}} + \sum_{l \in \gamma_j} \tau_l \Big).
260
ED
Proof 2. Let γj be the set of tasks to be assigned to participant pj , with τl the workload of task sl ∈ γj . The makespan of all the tasks is the sum of the workloads plus the time elapsed before the first acceptance. The latter factor can be deduced from Lemma 1 as \big[\sum_{x=1}^{t} p_{r,j}^{x}\, p_{a,j}\big]' \frac{1}{q_k(r_i)\, q_k(p_j)\, \lambda_{r_i,p_j}}, where x is the number of time slots needed to get an acceptance from a participant and [.]' is the derivative operator. We generalize this expression over all n met participants to account for all m tasks held by a requester, and so obtain the formula of Theorem 1.
In the aim of minimizing the above formulated average makespan AM (ΓP F + ),
we propose a greedy-based offline solution illustrated in Algorithm 1. The basic idea is that each requester needs to compute an Expected Sensing Time, (EST), for each participant pj ∈ P . This factor includes the inter-encounter
time to meet a "volunteer-positive" participant pj in a certain compound k, i.e., Πi,j,k |k , plus the sum of the proposed task loads. Therefore, we consider that all
Algorithm 1 P-MATAF+ Assignment Algorithm
Require: Set of sensing tasks S = {s1 , s2 , . . . , sm : τ1 ≤ τ2 ≤ . . . ≤ τm }, Participants P = {p1 , p2 , . . . , pnp }, Matrix of Expected Sensing Time EST .
Ensure: Assignment strategy ΓP F + = {γ1 , γ2 , . . . , γn }
1: for sl ∈ S do
2:   mine ← ∞
3:   for k ∈ C do
4:     for pj ∈ P do
5:       pa,j (sl ) = exp(β1 τl + β2 nacc ) / Σ_{t=1..m} exp(β1 τt + β2 nacc )
6:       Πi,j,k = [Σ_{x=1..n} (1 − pa,j (sl ))^x pa,j (sl )]' · 1/(qk (ri )qk (pj )λri ,pj )
7:       ESTk,j = Πi,j,k + τl
8:     end for
9:     [mink , jk ] ← argmin(EST |k )
10:    if mink ≤ mine then
11:      mine ← mink
12:      jmin ← jk
13:    end if
14:  end for
15:  Assign the task: γjmin = γjmin + {sl }
16:  Update EST: ESTk,jmin = ESTk,jmin + τl ; ∀k ∈ C
17:  Update the set of tasks: S = S \ {sl }
18: end for
19: Return ΓP F +
AC
then τi ≤ τj . Also, we initialize all expected sensing time for all participants to their inter-meeting time with the current requester, ESTj = Πi,j,k , ∀pj ∈ P .
275
Next, each task sl ∈ S, is assigned to the participant with the smallest ESTj before updating his expected sensing time ESTj = Πi,j,k + τl . Note that the inter-meeting time ESTj varies depending on the compound since users have different mobility behavior in each compound. For example, a 15
ACCEPTED MANUSCRIPT
user may stay a long period in a compound representing his work/housing area 280
but spends less time in another compound modeling food area. Consequently, a requester may meet a participant only in certain compounds and more than
CR IP T
once. Thus, we compute the k possible inter-meetings between each requester and the rest of the users. The result can be mapped into a k × np matrix where
each row presents an EST in a compound; EST |k = [EST1 , EST2 , . . . , ESTnp ].
In this offline mode, the preferences of a participant pj is only considered in
285
the pre-assignment phase. Particularly, a requester estimates the behavior of
each participant based on Equation (6). Further, when meeting the designated
AN US
participants, the requester only transmits to each one his assignment γj without waiting for his response which may result in a non perceived rejection. 290
4.2. Online Mode (P-MATAN+ )
The second mode is an online strategy, P-MATAN+ . The principle here is that each requester starts his assignment process only when he encounters a
M
participant. The detailed steps of this mode are illustrated by Algorithm 2 and can be designed as three phases:
ED
Pre-selection. This step is conducted by each requester ri encountering a participant pj in a certain compound k. The latter needs to update his ESTj in case he is processing tasks assigned from previous met requesters. As a result,
PT
the expected sensing time is merged to an Instant Sensing Time, (IST), which
CE
considers only the rest of workload held by the current met participant. X
l∈γj
τl − Tj,γj , ∀j ∈ 1 . . . np .
(8)
where Tj,γj = tc − ts,γj is the time elapsed since the participant has started
AC
295
ISTj =
performing his previous assignment γj , tc is the current time and ts,γj is the
starting time of γj . After receiving the participant’s IST, the requester starts an assignment
strategy comparable to the offline method while setting ESTj as ISTj for the 300
encountered participant pj to obtain a possible assignment ΓP N as detailed in 16
Algorithm 2 P-MATAN+ Assignment Algorithm
Require: Set of sensing tasks S = {s1, s2, ..., sm : τ1 ≤ τ2 ≤ ... ≤ τm},
         Participants P = {p1, p2, ..., pnp}, Matrix of Expected Sensing Time EST.
Ensure: Assignment strategy ΓPN+ = {γ1, γ2, ..., γn}
 1: When the requester meets a participant pj in a compound k ∈ C
 2:   Set ESTk,j = ISTk,j
 3:   P = P \ {pj}
 4:   for sl ∈ S do
 5:     mine ← ∞
 6:     for k ∈ C do
 7:       [mink, jk] ← argmin(ISTk,j + EST|k)
 8:       if mink ≤ mine then
 9:         mine ← mink
10:         jmin ← jk
11:       end if
12:     end for
13:     if (jmin = j) then
14:       Assign the task to this participant: γj = γj + {sl}
15:       Update EST: ESTk,j = ESTk,j + τl; ∀k ∈ C
16:       Update the set of tasks: S = S \ {sl}
17:     else
18:       Temporary assignment: γjmin = γjmin + {sl}
19:       Temporary update of EST: ESTk,jmin = ESTk,jmin + τl; ∀k ∈ C
20:     end if
21:   end for
22: Return ΓPN+
Algorithm 2. If the current participant is identified among the list of selected users, he receives the proposed tasks along with their workloads. As for the rest of the selected users, this assignment is a temporary one.
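In code, the IST of Equation (8) is simple bookkeeping on the participant side. The sketch below uses hypothetical names; the clamp at zero is our own addition for assignments that have already completed:

```python
def instant_sensing_time(assigned_workloads, t_start, t_now):
    """Remaining workload of the current assignment gamma_j, Eq. (8).

    assigned_workloads: workloads tau_l of the tasks in gamma_j
    t_start:            time t_s at which gamma_j was started
    t_now:              current time t_c
    """
    elapsed = t_now - t_start               # T_{j,gamma_j} = t_c - t_s
    return max(sum(assigned_workloads) - elapsed, 0.0)

# Two tasks of 1h and 2h, started 1.5h ago:
print(instant_sensing_time([1.0, 2.0], t_start=10.0, t_now=11.5))  # -> 1.5
```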
Participant Choice. Based on the proposed tasks γj and the previously assigned tasks, a participant pj computes his acceptance probability pa,j as a function of two main attributes: the current proposed workload τl and the number of already accepted tasks nacc, as expressed in Equation (6). This is done for every task separately; then answers are generated as a Bernoulli process B(n, pa). For each task sl ∈ γj, if the sampled variable X = 1, then the participant accepts to perform the corresponding task. Otherwise, the response is a rejection. The participant choice is then modeled as a Boolean vector containing the answers to all proposed tasks, Vans = [X1 X2 ... Xm], and sent to the requester.
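The participant's answers can then be sampled as independent Bernoulli draws, one per proposed task. The logistic form and the coefficients below are only illustrative stand-ins for the discrete choice model of Equation (6), whose actual attributes and estimates are defined earlier in the paper:

```python
import math
import random

def answer_vector(workloads, n_accepted, beta_load=-0.8, beta_acc=-0.3, seed=42):
    """Sample one Boolean answer X_l per proposed task (Bernoulli process).

    workloads:  proposed workloads tau_l
    n_accepted: tasks already accepted by the participant (n_acc)
    The coefficients are illustrative placeholders, not the paper's estimates.
    """
    rng = random.Random(seed)
    answers = []
    for tau in workloads:
        utility = beta_load * tau + beta_acc * n_accepted
        p_accept = 1.0 / (1.0 + math.exp(-utility))   # logistic probability
        answers.append(1 if rng.random() < p_accept else 0)
    return answers

# V_ans for three proposed tasks when two tasks were already accepted:
print(answer_vector([0.5, 1.0, 3.0], n_accepted=2))  # -> [0, 1, 0] with this seed
```

Heavier workloads and a larger backlog lower the acceptance probability, so rejections become more frequent as a participant accumulates tasks.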
Final Selection. The requester receives the participant's vector of answers and starts verifying the task confirmations. If the answer (element of vector Vans) X = 1, the requester removes the task with the corresponding index from his list of tasks S and considers it as assigned, given that he has received the participant's confirmation. If X = 0, the requester holds back the corresponding task, reassigns it at the next meeting to other participants, and updates the assignment strategy ΓPN+. This is performed with no additional exchange with the encountered participant, to avoid communication overhead.
5. Incentives-based Preferences and Mobility-Aware Task Assignment (IP-MATA+)

The previously presented no-incentives-based assignment scheme delegates tasks to participants with no reward in return. However, participants may be unwilling to perform participatory sensing campaigns, especially if they receive an important workload. Thus, we introduce in this section incentives, such as monetary rewards, with the aim of encouraging users to contribute their data. We first present the different incentives policies and then detail the offline and online modes of IP-MATA+.
5.1. Incentives policies

We suppose that the sensing platform empowers each selected requester with a certain budget B to encourage encountered users to perform the proposed sensing tasks. Nevertheless, there are several policies based on which a requester can manage the available budget and propose rewards accordingly. We identify, in the following, two different incentivizing policies: task-priority based and data-quality based.

5.1.1. Priority-based Incentives

Recall that we consider in this work a set of sensing tasks S that is heterogeneous in terms of type or involved sensors. As a consequence, some types of tasks can be perceived as more important or primary for a requester. Hence, depending on the type of a task sl, a requester may set different rewards. For instance, a requester may prioritize video streaming tasks over localization ones. To describe this prioritization, we associate each task with a weight αl. The higher the value of αl, the more primary the task is. In this context, the pay-off (reward) offered can be proportional to the task weight compared to the other proposed tasks. We then introduce an incentivizing policy that defines a reward/incentive as follows:

    Ip(sl) = ( αl / Σ_{k∈γj} αk ) · B(t),        (9)

where B(t) is the residual budget at the instant t, initialized at B.

This incentivizing policy may encourage participants to perform harder sensing tasks, particularly by setting their associated weights to high values.
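Equation (9) is a straightforward proportional split of the residual budget; a minimal sketch (the function name is ours):

```python
def priority_reward(weights, l, budget_left):
    """Priority-based incentive of Eq. (9): task l's share of the residual
    budget B(t), proportional to its weight alpha_l relative to the other
    tasks proposed in gamma_j."""
    return weights[l] / sum(weights) * budget_left

# Weights 1, 4 and 5 for the proposed tasks, 100 budget units remaining:
print(priority_reward([1, 4, 5], l=1, budget_left=100.0))  # -> 40.0
```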
5.1.2. Quality-based Incentives

The second incentivizing policy is data-quality based, since we believe that prioritizing tasks may not be the only criterion to optimize task assignment in MCS systems. Indeed, for the same type of task, participants may provide data samples of different quality depending on their sensor characteristics, location accuracy and/or sensing time-efficiency. Thus, we propose to account for the estimated quality of the collected data. In this paper, we define as quality criterion the "timeliness", a quality attribute in crowdsensing systems describing the fact that a sensing measurement should be collected and uploaded before a predefined deadline. We utilize this metric since we aim to minimize the average makespan, i.e., to minimize the time of sensing, and we recall the utility function of [32] to evaluate the time-quality of the contributed data, by normalizing the Expected Sensing Time (EST) of the current participant pj. The expression of the data-quality based reward is as follows:

    Iq(sl) = U(ESTj) · ( αl / Σ_{k∈γj} αk ) · B(t),        (10)

where U(ESTj) is the utility of this Instant Sensing Time compared to the minimum and maximum EST received from other participants.

This incentivizing policy jointly takes into account the priority and the data quality of sensing tasks, which may positively impact the overall average makespan. Particularly, the reward allocated to each task is proportional to the execution quality of the participant, which aims at attracting high-quality data contributors, i.e., those with lower EST, by offering them a higher amount of incentives.
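Equation (10) scales the same budget share by the timeliness utility U(·). The min–max normalization below is one plausible instantiation of the utility of [32] (whose exact shape is defined in that reference), used here purely for illustration:

```python
def timeliness_utility(est_j, est_min, est_max):
    """Illustrative min-max utility: the fastest participant (lowest EST)
    gets 1, the slowest gets 0; a stand-in for U(.) of [32]."""
    if est_max == est_min:
        return 1.0
    return (est_max - est_j) / (est_max - est_min)

def quality_reward(weights, l, budget_left, est_j, est_min, est_max):
    """Quality-based incentive of Eq. (10): priority share times U(EST_j)."""
    share = weights[l] / sum(weights) * budget_left
    return timeliness_utility(est_j, est_min, est_max) * share

# Participant with EST 2h while peers range from 1h to 5h; weights 1, 4, 5:
print(quality_reward([1, 4, 5], 1, 100.0, est_j=2.0, est_min=1.0, est_max=5.0))
# -> 30.0
```

Compared with the priority-only policy, a slow participant sees his offer shrink, which is exactly what steers the budget toward low-EST contributors.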
5.2. Offline mode (IP-MATAF+)

Regardless of whether we incentivize participants on a task-priority basis or a data-quality one, we develop hereafter our assignment modes for both rewarding policies. First, we adopt the same policy as in P-MATAF+ while incorporating the reward as a third attribute in the discrete choice model of Equation (7) to compute each participant's acceptance probability. Accordingly, we derive the mean arrival time of all "positive" participants and the corresponding average makespan AM(ΓIPF+) from Theorem 1, where ΓIPF+ is the incentives-based assignment strategy in the offline mode. We proceed to look for the users who minimize the average makespan AM(ΓIPF+) of all tasks in a crowdsensing support phase as follows:

• Estimate the current task reward Rl based on the selected incentives policy (task-priority/data-quality).

• Compute the acceptance probability of each participant pj while incorporating the reward Rl as a third attribute (Equation (7)).

• Compute the inter-encounter time Πi,j,k, as in Lemma (1), to get the list of potential "positive" participants.

• Set the Expected Sensing Time of all participants to Πi,j,k, ∀j ∈ 1 . . . n, and look for the smallest ESTj.

• Select the corresponding participant and update his ESTjmin based on the assigned tasks' workloads τl.

• Update the set of assigned tasks S and the residual budget B(t) according to the selected incentives policy.

• Continue the selection as in Algorithm 1 until all tasks carried by a requester ri are assigned.

Note that this mode is adopted by both pricing policies: the task priority-based and the quality-based IP-MATA+.
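The selection loop sketched by the bullets above can be condensed into a small driver. Everything below is a simplified sketch under stated assumptions: the logistic function merely stands in for Equation (7), its coefficients are invented, and the Π values are assumed precomputed per participant:

```python
import math

def acceptance_prob(tau, n_acc, reward, beta=(-0.8, -0.3, 0.5)):
    """Logistic stand-in for the discrete choice model of Eq. (7):
    workload and backlog discourage acceptance, the reward encourages it."""
    u = beta[0] * tau + beta[1] * n_acc + beta[2] * reward
    return 1.0 / (1.0 + math.exp(-u))

def ipmataf_select(tasks, pi, budget, threshold=0.5):
    """Offline IP-MATAF+-style loop (illustrative): keep the 'positive'
    participants and assign each task to the smallest EST among them.

    tasks: list of (tau_l, alpha_l); pi: inter-meeting time per participant.
    Returns a list of (participant, workload) assignments.
    """
    total_alpha = sum(a for _, a in tasks)
    est = dict(enumerate(pi))              # EST_j initialized to Pi_{i,j}
    n_acc = {j: 0 for j in est}
    plan = []
    for tau, alpha in sorted(tasks):
        reward = alpha / total_alpha * budget          # Eq. (9) share
        positive = [j for j in est
                    if acceptance_prob(tau, n_acc[j], reward) >= threshold]
        if not positive:
            continue                       # nobody likely to accept this task
        j = min(positive, key=est.get)     # smallest expected sensing time
        plan.append((j, tau))
        est[j] += tau
        n_acc[j] += 1
        budget -= reward                   # residual budget B(t)
    return plan

print(ipmataf_select([(1, 5), (2, 5)], pi=[0.5, 1.0], budget=20.0))
# -> [(0, 1), (1, 2)]
```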
5.3. Online mode (IP-MATAN+)

IP-MATAN+ is the online mode of the incentives-based task assignment presented in this paper. We follow the assignment phases previously detailed in Section 4. That means requesters inquire encountered participants to update their Instant Sensing Time (IST) and accordingly generate the list of selected users. If the current participant is selected, he receives his assignment. However, differently from P-MATA+, the requester needs to send both the tasks' workloads and weights. As a consequence, a participant's acceptance probability depends mainly on the perceived reward, which is computed from the proposed task type (weight) and/or the participant's IST. For instance, if the incentives are priority-based and the proposed tasks have high weight values αl, the associated rewards will be equally high; thereby, the proposed assignment is more likely to be accepted, especially if the current participant has completed few tasks in the past. On the other hand, if incentives are quality-based and the participant is currently performing a "heavy" processing load, the estimated "timeliness", i.e., U(ISTj), can be low. Hence, the corresponding offered pay-off is low and the participant may reject any proposed assignment, even if it is associated with a high priority αl. The aforementioned cases are summarized in the activity diagram of Figure 2.

Figure 2: Participant choice model

After receiving the participant's choice, the requester proceeds to the last phase of final selection. That is, for each task sl, the requester verifies whether the corresponding answer is X = 1 or a rejection. Accordingly, the set of tasks S is updated, and so is the available budget B(t). The above steps are repeatedly executed whenever a participant is encountered, until all tasks are assigned.
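The final-selection bookkeeping can be pictured with the small helper below (our illustration, hypothetical names): confirmed tasks leave the task set and their rewards are deducted from B(t), while rejected ones are held back for the next encounter:

```python
def settle_answers(pending, answers, rewards, budget):
    """Update the task set and the residual budget after one meeting.

    pending: task ids proposed to the participant (in answer order)
    answers: Boolean vector V_ans returned by the participant
    rewards: reward offered per task id
    Returns (tasks held back for re-assignment, remaining budget).
    """
    held_back = []
    for task, x in zip(pending, answers):
        if x == 1:
            budget -= rewards[task]     # confirmed: pay the incentive
        else:
            held_back.append(task)      # rejected: re-propose later
    return held_back, budget

print(settle_answers(["s1", "s2", "s3"], [1, 0, 1],
                     {"s1": 10.0, "s2": 8.0, "s3": 5.0}, budget=50.0))
# -> (['s2'], 35.0)
```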
6. Performance Evaluation

In order to evaluate the performance of the proposed task assignment schemes, P-MATA+ and IP-MATA+, and their different modes, offline and online, we run extensive simulations. Hereafter, we detail the used traces and the simulation settings, as well as the evaluation metrics and the corresponding results.

6.1. Real Traces

We opt for real mobility traces to design our crowdsensing scenario. It is worth noting that we look for human mobility traces providing (X,Y) or GPS coordinates. Yet, such datasets are rarely published due to users' privacy concerns. Hence, we refer to the available users' traces within a campus [33]. These traces include daily GPS track logs from two university campuses: North Carolina State University (NCSU) and the Korea Advanced Institute of Science and Technology (KAIST). For the first trace, GPS readings were collected every 10 seconds and recorded into a daily track log by 34 randomly selected students who took a course in the computer science department. Thus, we divide the corresponding area into four major compounds, where we consider the two densest as computer science labs and the two others as food and administrative areas, respectively. As for the KAIST trace, GPS readings were also sampled every 10 seconds, by 92 students living in the dormitory of the campus between 2006 and 2007. Hence, we notice a higher density and a lower speed compared to the first trace. We subdivide the KAIST area also into 4 compounds: two dormitory sections with the highest densities, one food area and one studying area. According to each trace's characteristics, we estimate the users' probabilities qk to be in the different compounds. Furthermore, we compute the inter-meeting time parameter between two users as λi = Ni/T, where Ni is the total number of encounters, with the encounter distance set at 10 m, and T is the total duration.
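The contact-rate estimate and a by-contact requester selection (Section 6.2) can be sketched as follows; the 10 m proximity threshold and the ≈20% requester ratio follow the text, while the function names and the toy counts are ours:

```python
def contact_rate(encounter_counts, duration):
    """lambda_i = N_i / T per user, from precounted encounters
    (pairs of GPS fixes closer than 10 m in the trace)."""
    return {u: n / duration for u, n in encounter_counts.items()}

def pick_requesters(encounter_counts, duration, ratio=0.2):
    """By-contact policy: the top ~20% of users by estimated lambda_i."""
    lam = contact_rate(encounter_counts, duration)
    k = max(1, round(ratio * len(lam)))
    return sorted(lam, key=lam.get, reverse=True)[:k]

counts = {"u1": 50, "u2": 8, "u3": 31, "u4": 2, "u5": 17}
print(pick_requesters(counts, duration=22.0))  # -> ['u1']
```

Swapping the ranking key (e.g., average speed instead of λi) yields the by-speed policy compared in the evaluation.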
6.2. Requester Selection

We set the number of requesters to be ≈ 20% of the total number of users in a trace. As a result, the number of requesters in the NCSU trace is equal to 7, while the one in the KAIST trace is equal to 20, as detailed in Table 2. Also, we investigate three different selection methods that a crowdsensing platform can adopt to identify requesters among the different registered users. We propose to designate requesters randomly, by considering those with the highest estimated number of meetings λri, or among the fastest mobile users.

Table 2: Real Traces characteristics

Trace   Length   Requesters   Participants   Bad Requesters
NCSU    22 (h)   7            27             2
KAIST   24 (h)   20           72             0

The selection of requesters is very important since it may highly impact the performance of our assignment scheme. For example, when selecting requesters randomly, we had bad ones, i.e., requesters that rarely encounter other users due to their mobility behavior and that, consequently, cannot delegate some of their sensing campaigns. This was the case for 2 requesters among the randomly selected ones in the NCSU campus trace, as illustrated in Table 2.

6.3. Simulation Settings

To simulate the proposed distributed crowdsensing schemes, we utilize the
real-mobility traces of the NCSU and KAIST campuses [33]. Therefore, we dedicate a first evaluation phase to checking which requesters' selection method is the most efficient to adopt for the rest of the evaluation. Furthermore, we consider a set of sensing tasks S to be assigned by each requester to encountered participants, of which we vary the number and the associated workloads to observe their impact on our proposed assignment schemes. The number of tasks is selected from {10, 20, 30, 40, 50} and the average workload of all tasks τ is selected from {1, 3, 5} (hours). Moreover, we consider that sensing tasks are heterogeneous: we assign to them different weights αl in [1, 10] and associate their attributes zl in {load, number, reward} with random coefficients βl. Simulations are conducted under the network simulator ns-3.24.
We run two different groups of simulations, first while varying the number of tasks
Figure 3: The average number of assigned tasks by different requesters. (a) P-MATA+ for KAIST; (b) P-MATA+ for NCSU.
and setting the average workload to 1 h, and second while varying the average workload of tasks and fixing the number of tasks to 10. Within each group, 30 runs were performed for each trace (NCSU/KAIST). The results obtained for all assignment schemes, P-MATAF+, P-MATAN+, IP-MATAF+ and IP-MATAN+, are illustrated in Figures 3 to 7.

6.4. Performance Analysis

First, we aim to highlight the most efficient requesters' selection strategy among the proposed ones. Thus, we compare the number of assigned tasks achieved by the different requesters selected randomly, by-contact or by-speed. After designating the requesters' selection policy for each trace, we proceed with comparing the makespan achieved by each assignment scheme. Finally, we investigate the efficiency of our incentives-based policies.
6.4.1. Average number of assigned tasks
AC
We run simulations with different number of tasks and an average workload
τ = 1h on the two real traces KAIST and NCSU. First, we plot in Figure 3 only the no-incentives based P-MATA+ results in terms of the number of assigned tasks by each requesters’ policy while varying m. To do so, we denote by “c”,
470
“s” and “r”, the results achieved by requesters selected by-contact, by-speed and randomly, respectively. 25
ACCEPTED MANUSCRIPT
0.7 0.6
0.9
0.5 0.4 0.3 0.2 0.1 0 0
0.8 0.7 0.6
PMATAF+c PMATAN+c PMATAF+s PMATAN+s PMATAF+r PMATAN+r
0.5 0.4 0.3 0.2 0.1
2
4 6 Number of assigned tasks
8
0 0
10
(a) P-MATA+ for KAIST
2
CR IP T
CDF of Assigned Tasks
0.8
1
PMATAF+c PMATAN+c PMATAF+s PMATAN+s PMATAF+r PMATAN+r
CDF of Assigned Tasks
1 0.9
4 6 Number of assigned tasks
8
10
(b) P-MATA+ for NCSU
AN US
Figure 4: The cdf of assigned tasks for each requesters’ selection policy, kS| = 10.
Figure 3 shows that the offline mode P-MATAF+ achieves the highest number of assigned tasks for all requesters' selection policies. However, for the NCSU trace, the number of requesters and of users in general (7 and 34) is smaller than in the KAIST trace, which limits the availability of participants for sensing and results in a slightly lower number of assigned tasks. Furthermore, we investigate the distribution of the number of assigned tasks for the different selections while setting the number of tasks to 10. The results in Figure 4 conform to our observations. That is, the offline modes for both traces outperform the online ones by assigning all sensing tasks for the KAIST trace and more than 65% for the NCSU trace, respectively.

In addition, we remark that, for the NCSU trace, the requesters selected by-contact are the ones with the highest number of assigned tasks, especially in the online mode. For example, requesters by-contact successfully assigned more than 80% of the tasks with P-MATAF+ and around 60% with P-MATAN+. Indeed, the NCSU trace is not very dense. Therefore, the requesters with the highest number of contacts, i.e., those selected by-contact, are the ones who encounter more participants and assign tasks better. Differently, for the KAIST trace, the requesters selected by-speed perform better, assigning at least 98% of all tasks with P-MATAF+ and more than 60% with the online mode P-MATAN+. Indeed, the KAIST trace covers a dense dormitory area, where the users with the highest number of contacts are not mobile and may always encounter the same participants. Yet, those selected by-speed, i.e., the fastest, move around and meet different participants, thereby assigning tasks better.

These observations are confirmed by the cumulative distribution function (cdf) plot of Figure 4, where both offline and online modes perform better when associated with the by-speed requesters' selection strategy for the KAIST trace and with the by-contact requesters for NCSU. Motivated by these results, we adopt for the rest of the evaluations the by-contact identified requesters for the NCSU trace and the by-speed ones for the KAIST trace.
6.4.2. Average makespan

In this part of the evaluation, we plot the average makespan values realized by both the no-incentives and the incentives-based assignment schemes, P-MATA+ and IP-MATA+, respectively. To this aim, we vary first the number of tasks and second the average workload. The results are illustrated in Figure 5 for both traces.

Naturally, the average makespan is an increasing function of the number of tasks or of the average workload. The results show that the online modes achieve better results, since they consider the instant response of the encountered participants and accordingly try to assign the maximum number of tasks, especially in case of rejection. Moreover, we observe that, for both real traces, the makespan values realized by the two incentivizing policies of IP-MATA+, i.e., the priority-based (p) and the quality-based (q), are the lowest. Particularly, for the NCSU trace, the incentives-based scheme IP-MATA+ realized lower makespan values by stimulating the limited number of available participants (np = 27) even with a high workload (m = 50). However, for the KAIST trace, all schemes performed similarly since there are more available users (np = 72); even with no incentives, all tasks are assigned and all schemes achieve good makespan values.
Figure 5: Average makespan while varying the number of tasks (a), (b) (τ = 1 h) and varying the average workload (c), (d) (m = 20). Panels (a), (c): KAIST; (b), (d): NCSU.
6.4.3. Incentives policies performance

The performance of the different incentives policies, the task priority-based reward (p) and the quality-based reward (q), is depicted in Figures 6 and 7.

The expenditure efficiency of the achieved makespan over the budget spent is shown in Figure 6. We observe that the offline mode, IP-MATAF+, realizes comparable results for the KAIST and NCSU traces. Indeed, in this mode the reward is based on the estimated acceptance probability and not on the currently updated one, which may not be accurate enough. However, for the online modes, the incentives policies persuade participants to gradually perform tasks by adapting the offered reward, which minimizes the overall makespan and exploits the residual budget in a better way. This can be clearly observed in Figure 6(a), in the
Figure 6: Expenditure efficiency of makespan over spent budget for both traces. (a) KAIST; (b) NCSU.
Figure 7: Budget evolution as a function of time for both traces. (a) KAIST; (b) NCSU.
behavior of IP-MATAN+ under the quality-based reward policy on the KAIST trace. In fact, both online incentives policies reach comparable makespan values. Yet, the quality-based one achieves the same value with a lower budget spent. For instance, IP-MATAN+(p) realizes around 150 h of makespan by spending 4000 units, while IP-MATAN+(q) achieves the same makespan when spending only 3000 units. This difference is only slightly noticeable with the NCSU trace due to the limited number of participants, which obliges IP-MATAN+(q) to offer rewards to modest-quality contributors in order to assign all tasks.

Moreover, we plot the evolution of the residual budget over time under the online incentives scheme, IP-MATAN+, in Figure 7 for both traces.
That is, we select one requester and compare the budget evolution as a function of time under the two different incentives policies. It is shown that the curve is sharper for the priority-based incentives policy, IP-MATAN+(p), than for the quality-based one, IP-MATAN+(q). The budget is expended at a faster rate and is used up after a short period of time, in contrast to IP-MATAN+(q). This is explained by the fact that quality-proportional rewards are rather lower, especially for participants offering high Expected Sensing Time values.

To conclude, the incentives-based schemes outperform the no-incentives ones by encouraging more participants to accept extra sensing tasks, especially in the online modes. More precisely, IP-MATA+ realizes comparable makespan values with its two incentivizing policies. Nevertheless, the quality-based one reaches such values while reducing the spent budget and improving the achieved quality.
6.4.4. After-thoughts

After observing the performances of the various schemes proposed in this work, we need to highlight the fact that the achieved results can remain valid for other scenarios with a larger number of users (participants) from different groups (students, workers, drivers, etc.). This is due to the fact that our solutions rely essentially on two main phases. On the one hand, the pre-processing of users' mobility behaviors, i.e., traces and area density, is highly important to designate requesters among the different participants registered within the crowdsensing platform. This step identifies the users with the highest number of encounters due to their mobility pattern, speed or position. That is, different traces can have different requester types, as proved by our simulation results. Indeed, in the NCSU trace, the users with the highest number of contacts are the ones who assigned tasks efficiently, whereas for the KAIST trace, the requesters selected by-speed are the better choice. On the other hand, our assignment schemes are preferences-based, and the preferences model is a function of the tasks' attributes (number, load, reward), not of the users' group. As a consequence, two users from two different groups, yet with the same number of assigned tasks, would accept or reject their assignment equally. Therefore, our preferences and mobility-aware task assignment solutions would perform as efficiently when simulated with different mobility traces.
7. Conclusion

In this paper, we adopted a hybrid multi-task assignment in participatory sensing systems. We introduced a "support" phase where requesters identified by the central platform need to delegate sensing tasks to adequate encountered participants. We designed the participants' arrival model to requesters and formulated their preferences as a proactive logit regression model in order to estimate their acceptance probabilities towards a received assignment. Accordingly, we selected those who minimize the average makespan of all sensing tasks under two different assignment schemes: without and with incentives. Furthermore, we advocated two incentives policies for the second variant of assignment: priority-based and quality-based. All the proposed solutions have been presented in two modes: offline and online greedy-based algorithms. Finally, we showed through real trace-based simulations that the incentives-based preferences and mobility-aware variant outperforms the other modes. Specifically, the online mode, IP-MATAN+, achieved the minimum average makespan. In addition, when comparing the two incentives policies, the quality-based one proved to be more efficient in terms of budget expenditure.

References
[1] R. K. Ganti, F. Ye, H. Lei, Mobile crowdsensing: current state and future
595
challenges, IEEE Communications Magazine 49 (11) (2011) 32–39.
[2] N. Lane, E. Miluzzo, H. Lu, D. Peebles, T. Choudhury, A. Campbell, A survey of mobile phone sensing, IEEE Communications Magazine 48 (9) (2010) 140–150. 31
ACCEPTED MANUSCRIPT
[3] J. Burke, D. Estrin, M. Hansen, A. Parker, N. Ramanathan, S. Reddy, M. B. Srivastava, Participatory sensing, in: Proc. WSW: Mobile Device Centric Sensor Networks and Applications, 2006, pp. 117–134.
600
CR IP T
[4] P. Mohan, V. N. Padmanabhan, R. Ramjee, Nericell: Rich monitoring
of road and traffic conditions using mobile smartphones, in: Proc. ACM Sensys, 2008, pp. 323–336.
[5] B. Hull, V. Bychkovsky, Y. Zhang, K. Chen, M. Goraczko, A. Miu, E. Shih,
H. Balakrishnan, S. Madden, Cartel: A distributed mobile sensor comput-
605
AN US
ing system, in: Proceedings of the 4th International Conference on Embedded Networked Sensor Systems, 2006, pp. 125–138.
[6] M. Mun, S. Reddy, K. Shilton, N. Yau, J. Burke, D. Estrin, M. Hansen, E. Howard, R. West, P. Boda, PEIR, the personal environmental impact report, as a platform for participatory sensing systems research, in: Proc.
610
M
MobiSys, 2009, pp. 55–68.
[7] P. Dutta, P. M. Aoki, N. Kumar, A. Mainwaring, C. Myers, W. Willett,
ED
A. Woodruff, Common sense: Participatory urban sensing using a network of handheld air quality monitors, in: Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems, 2009, pp. 349–350.
615
PT
[8] X. Zhang, Z. Yang, W. Sun, Y. Liu, S. Tang, K. Xing, X. Mao, Incentives for mobile crowd sensing: A survey, IEEE Communications Surveys Tutorials
CE
18 (1) (2016) 54–67. [9] L. G. Jaimes, I. J. Vergara-Laurens, A. Raij, A survey of incentive techniques for mobile crowd sensing, IEEE Internet of Things Journal 2 (5)
AC
620
(2015) 370–380.
[10] X. Sheng, J. Tang, X. Xiao, G. Xue, Leveraging GPS-less sensing scheduling for green mobile crowd sensing, IEEE Internet of Things Journal 1 (4) (2014) 328–336.
32
ACCEPTED MANUSCRIPT
625
[11] R. Ben Messaoud, Y. Ghamri-Doudane, QoI and energy-aware mobile sensing scheme: A tabu-search approach, in: IEEE 82nd VTC Fall, 2015, pp. 1–6.
CR IP T
[12] Z. Song, C. Liu, J. Wu, J. Ma, W. Wang, QoI-aware multitask-oriented dynamic participant selection with budget constraints, IEEE Transactions on Vehicular Technology 63 (9) (2014) 4618–4632.
630
[13] M. H. Cheung, R. Southwell, F. Hou, J. Huang, Distributed time-sensitive
task selection in mobile crowdsensing, in: Proc. ACM MobiHoc, 2015, pp.
AN US
157–166.
[14] L. Pu, X. Chen, J. Xu, X. Fu, Crowdlet: Optimal worker recruitment for self-organized mobile crowdsourcing, in: Proc. IEEE INFOCOM, 2016.
635
[15] S. Chen, M. Liu, X. Chen, A truthful double auction for two-sided het-
(2016) 31 – 42.
M
erogeneous mobile crowdsensing markets, Computer Communications 81
[16] J.-S. Lee, B. Hoh, Dynamic pricing incentive for participatory sensing, Pervasive and Mobile Computing 6 (6) (2010) 693 – 708.
ED
640
[17] H. Jin, L. Su, D. Chen, K. Nahrstedt, J. Xu, Quality of information aware
PT
incentive mechanisms for mobile crowd sensing systems, in: Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking
CE
and Computing, 2015, pp. 167–176. 645
[18] H. Jin, L. Su, H. Xiao, K. Nahrstedt, Inception: Incentivizing privacy-
AC
preserving data aggregation for mobile crowd sensing systems, in: Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2016, pp. 341–350.
[19] Y. Wen, J. Shi, Q. Zhang, X. Tian, Z. Huang, H. Yu, Y. Cheng, X. Shen,
650
Quality-driven auction-based incentive mechanism for mobile crowd sensing, IEEE Transactions on Vehicular Technology 64 (9) (2015) 4203–4214.
33
ACCEPTED MANUSCRIPT
[20] D. Peng, F. Wu, G. Chen, Pay as how well you do: A quality based incentive mechanism for crowdsensing, in: Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2015, pp. 177–186.
CR IP T
655
[21] R. Ben Messaoud, Y. Ghamri-Doudane, D. Botvich, Preference and Mobility-Aware Task Assignment in Participatory Sensing, in: Proceed-
ings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, 2016, pp. 93–101.
[22] D. McFadden, Conditional logit analysis of qualitative choice behavior, Frontiers in Econometrics (1974) 105–142.
[23] M. Xiao, J. Wu, L. Huang, Y. Wang, C. Liu, Multi-task assignment for crowdsensing in mobile social networks, in: Proc. IEEE INFOCOM, 2015, pp. 2227–2235.
[24] D. Yang, G. Xue, X. Fang, J. Tang, Crowdsourcing to smartphones: Incentive mechanism design for mobile phone sensing, in: Proceedings of the 18th Annual International Conference on Mobile Computing and Networking, 2012, pp. 173–184.
[25] D. Yang, G. Xue, X. Fang, J. Tang, Incentive mechanisms for crowdsensing: Crowdsourcing with smartphones, IEEE/ACM Transactions on Networking 24 (3) (2016) 1732–1744.
[26] I. Koutsopoulos, Optimal incentive-driven design of participatory sensing systems, in: Proc. IEEE INFOCOM, 2013, pp. 1402–1410.
[27] J. Wu, M. Xiao, L. Huang, Homing spread: Community home-based multi-copy routing in mobile social networks, in: Proc. IEEE INFOCOM, 2013, pp. 2319–2327.
[28] W. Gao, Q. Li, B. Zhao, G. Cao, Multicasting in delay tolerant networks: A social network perspective, in: Proc. ACM MobiHoc, 2009, pp. 299–308.
[29] S. Faridani, B. Hartmann, P. G. Ipeirotis, What's the right price? Pricing tasks for finishing on time, in: Proceedings of the 11th AAAI Conference on Human Computation, 2011, pp. 26–31.
[30] E. Gumbel, Les valeurs extrêmes des distributions statistiques, in: Annales de l'institut Henri Poincaré, 1935, pp. 115–158.
[31] K. F. Riley, Mathematical methods for physics and engineering: a comprehensive guide, Cambridge University Press, 2006.
[32] Q.-T. Nguyen-Vuong, Y. Ghamri-Doudane, N. Agoulmine, On utility models for access network selection in wireless heterogeneous networks, in: IEEE Network Operations and Management Symposium, NOMS, 2008, pp. 144–151.
[33] I. Rhee, M. Shin, S. Hong, K. Lee, S. Kim, S. Chong, CRAWDAD dataset ncsu/mobilitymodels (v. 2009-07-23), downloaded from http://crawdad.org/ncsu/mobilitymodels/20090723/GPS (2009).