A coral-reefs and Game Theory-based approach for optimizing elastic cloud resource allocation

Massimo Ficco a,∗, Christian Esposito b, Francesco Palmieri b, Aniello Castiglione b

a Department of Industrial and Information Engineering, Second University of Naples, Via Roma 29, I-81031 Aversa (CE), Italy
b Department of Computer Science, University of Salerno, Via Giovanni Paolo II, 132, I-84084 Fisciano (SA), Italy

∗ Corresponding author. E-mail addresses: [email protected] (M. Ficco), [email protected] (C. Esposito), [email protected] (F. Palmieri), [email protected] (A. Castiglione).
highlights
• Bio-inspired coral-reefs optimization paradigm to model cloud elasticity.
• Game Theory-based approach to identify the best cloud resource reallocation schema.
• Fuzzy linguistic SLA formalization.
article info
Article history:
Received 15 January 2016
Received in revised form 16 May 2016
Accepted 22 May 2016
Available online xxxx
Keywords: Cloud computing; Elasticity; Live-migration; Coral-reefs optimization; Game Theory; Fuzzy linguistic SLA
abstract
Elasticity is a key feature of cloud computing, distinguishing this paradigm from others, such as cluster and grid computing. On the other hand, dynamic resource reallocation is one of the most important and complex issues in cloud scenarios: it can be expressed as a multi-objective optimization problem with the opposing objectives of maximizing demand satisfaction and minimizing costs and resource consumption. In this paper, we propose a meta-heuristic approach for cloud resource allocation based on the bio-inspired coral-reefs optimization paradigm, used to model cloud elasticity in a cloud data center, and on classic Game Theory, used to optimize the resource reallocation schema with respect to the cloud provider's optimization objectives, as well as customer requirements, expressed through Service Level Agreements formalized by means of a fuzzy linguistic method.
© 2016 Elsevier B.V. All rights reserved.
1. Introduction

In recent years, cloud computing has attracted attention from industry, government and academia. An increasing number of applications make extensive use of cloud resources, according to an on-demand, self-service, and pay-by-use business model, with the progressive adoption of cloud-based services by many sectors of modern society. One of the main reasons for this success is the possibility of acquiring virtual resources in a dynamic and elastic way. In particular, the emerging virtualization technologies allow multiple virtual machines (VMs) to run concurrently on a single physical host, called the host machine (HM). Each VM, in turn, hosts its own operating system, middleware and applications, using a partition of the underlying hardware resource
capacity (CPU power, memory, storage capability and network bandwidth) [1]. Cloud elasticity is the key feature for implementing server consolidation strategies, since it allows for on-demand migration and dynamic reallocation of VMs [2]. NIST defines elasticity as the ability for customers to quickly purchase on demand, and automatically release, as many resources as needed, ideally giving the user the feeling that the cloud resource capabilities are unlimited [3]. Specific elasticity control mechanisms are implemented to decide when and how to scale up or scale down virtual resources, in accordance with the provider's own optimization objectives, as well as with user-defined settings and requirements formalized within a specific Service Level Agreement (SLA) between the cloud provider and the customer. In order to do this, resource usage information, such as CPU load, free memory and network traffic volumes, has to be continuously collected and analyzed on-line for all the available HMs. Three different techniques are used in the implementation of cloud elasticity solutions: replication, migration and resizing [4], resulting in different strategies and approaches for handling resource allocation within a cloud infrastructure.
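As a concrete illustration of such a control loop, the following sketch shows a minimal threshold-based elasticity controller; the metric names, thresholds and actions are illustrative assumptions of ours, not the mechanism proposed in this paper.

```python
# Minimal threshold-based elasticity controller (illustrative sketch).
# Metrics, thresholds and actions are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class HostMetrics:
    cpu_load: float      # fraction in [0, 1]
    free_memory: float   # fraction in [0, 1]
    net_traffic: float   # fraction of link capacity in [0, 1]

def decide_action(m: HostMetrics, upper=0.8, lower=0.2) -> str:
    """Return a scaling decision for one monitoring interval."""
    if m.cpu_load > upper or m.net_traffic > upper or m.free_memory < 1 - upper:
        return "scale-up"    # hot-spot: add resources or migrate a VM away
    if m.cpu_load < lower and m.net_traffic < lower:
        return "scale-down"  # cold-spot: consolidate and free the host
    return "no-op"

print(decide_action(HostMetrics(cpu_load=0.9, free_memory=0.5, net_traffic=0.3)))
```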
The dynamic resource allocation problem describes the decision of how many HMs are required overall to cope with the current demand, and how VMs are (re-)allocated to each HM in the individual time intervals. The optimization of resource allocation consists of minimizing the number of HMs necessary to host all the VMs associated with users' demands. This is a typical multi-objective optimization task, where the most important goals are the minimization of the needed hardware resources and the satisfaction of the SLAs contracted with the customers. Even in a widely simplified form, this problem is closely related to a well-known NP-hard combinatorial optimization problem, the ''bin packing'' problem [5], where the items to be packed can be viewed as the VMs and the bins as the HM nodes, with their multi-dimensional capacity represented by the available computing power, free memory, storage capability and network bandwidth. Consequently, our elastic cloud resource allocation problem, which is inherently more complex, is also NP-hard; thus, a reasonable heuristic solution achieving near-optimal results is strongly desirable.

In such a complex multi-objective scenario, traditional optimization methods are not able to efficiently provide good results, due to their inherent difficulties in exploring huge solution spaces. Hence, novel approaches based on bio-inspired meta-heuristic schemes seem to be the most promising options to achieve a better trade-off between the complexity of the search process and the optimization of the solutions found. Such schemes rely on models and strategies borrowed from the observation of behaviors and evolution mechanisms found in nature, where only the fittest individuals are able to survive in organizations characterized by a high degree of competition. In this direction, to model the elastic resource reallocation dynamics of a typical cloud environment, we adopted a Coral-Reefs Optimization (CRO) approach, which artificially simulates reef evolution processes, including coral reproduction and competition for space in the reef [6]; by analogy, these model the continuous demands for resource re-sizing, migration and replication characterizing the operational life-cycle of VMs hosted within a cloud data center. This approach presents extremely interesting and promising features in achieving convergence towards global optima.

On the other hand, cloud customers can be seen as non-cooperative entities, whose fundamental interest is not the optimization of the overall system performance, but rather the maximization of their individual benefit. Therefore, a natural hint for carefully driving the VM allocation process comes from classic game theory. Accordingly, we used a game theory-based approach to identify the best VM reallocation schema with respect to the critical requirements specified by the cloud customers in their SLAs, formalized by using fuzzy linguistic label sets, and typically characterized by different and even conflicting performance objectives and optimization criteria. This approach is able to drive the complex interactions between the customers and the cloud provider towards a social optimum (i.e., an equilibrium point between their own demands and goals), by optimizing a global objective function that takes into account the individual interests of all the involved entities.

The rest of the paper is organized as follows. In Section 2, the strategies and methods currently adopted to implement cloud elasticity capabilities are presented.
Section 3 defines the problem of interest. The proposed approach for elastic resource reallocation is presented in Section 4. The related experimental performance evaluation results are shown in Section 5. Finally, Section 6 presents some concluding considerations and remarks.

2. Background and related work

Elasticity represents a dynamic property of the cloud paradigm, which allows the system to scale on-demand within an operational
system context [7]. It enables applications to evolve by following the users' demand, without needing traditional ''fork-lift'' upgrades, and hence introduces the degree of adaptiveness that is fundamental for modern big data-centric environments. Generally, Infrastructure as a Service (IaaS) architectures provide an elasticity controller, which is responsible for monitoring information such as CPU load, memory and network traffic, and for deciding whether or not the virtual resources must be scaled or migrated, in accordance with user-defined rules and settings [8]. From a user perspective, elasticity allows a perfect match between the customer's instantaneous needs and the resources available to them, avoiding the unnecessary reservation of resources that are not needed most of the time, with obvious benefits for the overall system scalability and performance. From the provider perspective, elasticity ensures a better use of computing resources, providing economies of scale and allowing a much larger number of users to be served simultaneously [4]. Moreover, it can be used to increase the local resource capacity while simultaneously reducing operational costs and energy consumption [9–11]. Several strategies have been employed to support the implementation of elasticity capabilities, including resizing, replication, and migration:
• Resizing (vertical scaling): according to this strategy, CPU, memory and storage resources can be re-sized on a running virtual instance, which can be a virtual machine or a container. Some of the most significant implementations of resizing mechanisms are PRESS [12], ElasticVM [13], and Kingfisher [10].
• Replication (horizontal scaling): consists of adding/removing instances to/from users' virtual environments. It is currently the most widely used technique for providing elasticity in cloud environments [14–16]. Several cloud platforms, such as Amazon and AzureWatch, offer auto-scaling and load-balancing capabilities in order to split the load between the various instances. Specific thresholds can be set by customers to add or remove instances depending on the actual usage.
• Live migration: consists of transferring the entire VM runtime status (CPU and memory pages) from the current host machine to a new destination host, without preempting its execution [17,18]. It is performed in two steps: first, a pre-copy of the VM status is performed on the target host (while the VM is still running); then, the execution of the source machine is stopped, the modified pages are re-copied on the destination host, and finally the migrated VM is resumed. A cloud provider's resource management facility must orchestrate VM migration in order to simultaneously minimize resource usage and maximize SLA adherence [19]. Specifically, it must provide:
– Cold-spot migration support: this is related to over-provisioned resources with low utilization (server sprawl). In order to reduce severe financial losses for cloud providers, live migration can be used to consolidate VMs (optimize VM placement) with the aim of minimizing the number of powered-on hosts and saving energy [20]. When the resource usage of a host drops below a given threshold, its VMs are migrated to other hosts providing enough available resource capacity (the freed-up HMs are switched off).
– Load-balancing migration support: VM migration can be used to dynamically move VMs among the available hosts with the aim of rebalancing resource utilization across them. A specific decision-making process is enabled when the load imbalance within the cloud (between heavily loaded hosts and lightly loaded ones) exceeds a given threshold [21,22]. Its objective is to achieve equal residual resource capacities across all the available HMs, in order to optimize local resource allocation in the presence of increasing demands.
– Hot-spot mitigation support: if the HM does not own enough additional locally available resources to meet the SLA requirements of one of its VMs (hot-spot), the involved VM is migrated to another host whenever possible. Specifically, application-level monitoring of VMs must be adopted to detect when the expected performance of the VMs drops below a given threshold. In this case, in order to mitigate the hot-spot, additional resources are either added locally, or, if the total resource usage of the host running the VM exceeds a predefined threshold, a decision-making process is enabled to migrate the VM to another HM [23,24].
From a cloud provider's perspective, the migration process aims at alleviating hot-spots and improving load balancing in order to meet customers' SLAs, whereas reducing cold-spots is necessary to reduce power consumption (by minimizing the number of physical machines necessary to host all the VMs). The migration process addresses each of the previous objectives, but some constraints should be considered: the overhead of the migration process, the impact on applications during migration, and the degree of improvement in the specific resource utilization performance goals [19,25,26].

3. Definition of the resource allocation optimization problem

In order to describe and evaluate the proposed approach, we defined a simplified model for resource management in the cloud computing environment. We consider a scenario in which a cloud data center is composed of several computing nodes, characterized by a set of parameters, such as CPU, memory, and storage capacity. Specifically, we denote by H = (h_1, ..., h_n) the set of HMs in the data center, and by C = (c_1, ..., c_n) their available (free) resource capacities, where c_j is described in terms of CPU, RAM, network bandwidth, and storage, with c_j = (κ_{c,j}, λ_{c,j}, µ_{c,j}, ν_{c,j}), respectively. Moreover, we denote by V = (v_1, ..., v_m) the set of VMs to be assigned to HMs and by R = (r_1, ..., r_m) the set of required virtual resources, where r_i = (κ_{r,i}, λ_{r,i}, µ_{r,i}, ν_{r,i}) represents the amount of resources needed to run VM i (in terms of CPU, RAM, bandwidth, and storage). At each step of the allocation process, a set of virtual machines W may be involved. Such VMs can be: (i) new VMs to be allocated in the cloud; (ii) already hosted VMs that require additional resources; and (iii) already hosted VMs that must be reallocated, either to minimize resource waste or because of a hardware or software failure. Each involved VM requires an amount of resources for a time duration d. We assume κ_{r,i}, λ_{r,i}, µ_{r,i}, ν_{r,i} and d_{r,i} are random variables, each characterized by a specific probability distribution function pdf(·). Hence, the optimal allocation of the virtual resources required by each VM is a problem that can be addressed by using stochastic approaches. To address the heterogeneity of applications deployed in the cloud, several approaches (and cloud providers) classify application tasks into different categories [27]. For each type of task, they evaluate the probability distribution function from the empirical data related to the usage of resources r_i and the task duration d_i. Regarding live migration, the optimization problem for cloud resource reallocation must address three important questions:
1. When should a VM be migrated?
2. Which VMs must be migrated?
3. Which are the destination HMs for migration, replication, and placement of new VMs?

1. When to migrate a VM?: Migration of a VM in a data center can be triggered:
• periodically: for example, data centers may be heavily used in the morning, whereas they may be underloaded during the night;
• due to the overload of an HM (hot-spot);
• due to the low utilization of an HM (cold-spot);
• due to an imbalance in the resource utilization levels of different HMs, caused by VMs that change their resource requirements dynamically;
• due to the addition/removal of VMs and HMs, which can affect the availability of the resources and may require a change in the overall VM placement/scheduling plan.
2. Which VM must be migrated?: Selecting one or more VMs for migration is a crucial decision. The migration process makes the involved VM unavailable for a certain amount of time, and consumes resources such as network and CPU on the source and destination HMs; thus, the performance of co-resident VMs is temporarily affected by the increased resource requirements during migration. Therefore, the aim of VM selection is to minimize the migration effort. Different approaches can be adopted to select the candidate VM for migration:
• The VM whose resource requirements cannot be locally fulfilled is selected for migration.
• It may not always be efficient to select the overloaded VM. In particular, if such a VM uses a large amount of memory, the time required for migration will be high. Therefore, a VM with a smaller memory footprint could be selected for migration, so that the freed memory can be allocated to the other hosted VMs.
• The VM whose SLA requirements are less critical is selected.
3. Where to locate the VM?: During migration, the destination HM should have enough available resources to support the incoming migrating VM. However, considering only the availability of resources at the destination may not be enough. For example, it should be considered how the performance of the VMs already hosted on the destination HM will be affected by the migration; it is also worth trying to achieve some consolidation by co-allocating VMs that have a high memory-sharing potential.
In order to solve the described elastic resource reallocation problem by using Game Theory, we simulate cloud elasticity in a cloud data center through a Coral-Reefs bio-inspired meta-heuristic scheme (CRO) [6]. The CRO scheme represents the reef structure as a grid, where each element (x, y) of the grid can host one or more corals (representing components of the Game Theory-based solution). Therefore, in order to re-map our resource allocation problem within such a scheme, we assume that each element h_{x,y} of the grid represents an HM of the set H. Thus, we can model the cloud resource allocation problem by using a hypercube A = {a_{x,y,i}}, in which, for each HM h_{x,y}, the vector A_{x,y} = (a_{x,y,1}, ..., a_{x,y,m}) represents the mapping of the VMs to h_{x,y}: a_{x,y,i} is equal to 1 if the VM v_i ∈ V has been assigned to HM h_{x,y}, and 0 otherwise. The same holds for the capacity set C, represented by a cubic matrix C, in which each component c_{x,y} is the capacity vector [κ_{c,x,y}, λ_{c,x,y}, µ_{c,x,y}, ν_{c,x,y}] of the HM h_{x,y}. The elements of the matrix C have the same size if the HMs belong to a homogeneous cluster; otherwise (if the HMs are heterogeneous), they have different values. Briefly, the resource allocation problem exhibits three objective functions (a data-structure sketch follows the list):
1. resource minimization: minimize the total amount of data-center resources used to execute the VMs;
2. resource balancing: allocate VMs on the hosts so that the usage rate of each host's resources is kept around a certain threshold;
3. SLA optimization: the allocation process has to be orchestrated so that the VMs associated with more critical SLA requirements are privileged.
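To make the model concrete, the following sketch (an illustrative encoding of ours, not code from the paper) represents the assignment hypercube A, the capacity matrix C and the demand set R for a small grid of HMs:

```python
# Sketch of the allocation model of Section 3 (names and sizes are illustrative).
import numpy as np

N, M_VMS = 4, 6                      # 4x4 grid of HMs, 6 VMs
# a[x, y, i] = 1 iff VM i is assigned to HM h_{x,y} (the hypercube A)
a = np.zeros((N, N, M_VMS), dtype=int)
# c[x, y] = (kappa, lambda, mu, nu): free CPU, RAM, bandwidth, storage of h_{x,y}
c = np.random.uniform(0.5, 1.0, size=(N, N, 4))
# r[i] = resources required by VM i, in the same four dimensions
r = np.random.uniform(0.05, 0.2, size=(M_VMS, 4))

a[0, 0, 0] = 1                                 # assign VM 0 to HM h_{0,0}
used = np.tensordot(a, r, axes=([2], [0]))     # per-HM resource usage, shape (N, N, 4)
print(used[0, 0])                              # resources consumed on h_{0,0}
```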
According to the above-described schema, the allocation problem can be formulated in the following way. The minimum number k of physical hosts needed to satisfy the VMs' resource requirements must be determined:

$$k = \sum_{x,y=1}^{n} \sum_{i=1}^{m} a_{x,y,i}, \quad \text{for each kind of resource } (\kappa, \lambda, \mu, \nu). \tag{1}$$
The resource capacity c_{x,y} of each involved HM h_{x,y} must satisfy all the VMs assigned to it, so that the sum of the resource demands of all the VMs allocated on each HM h_{x,y} does not exceed c_{x,y}. Moreover, a small amount ϵ of free resources should be guaranteed on each machine, in order to offer a certain degree of vertical scaling to applications without requiring any migration. Thus, for each host h_{x,y} ∈ H, the physical resource constraints are expressed as:

$$\sum_{i=1}^{m} a_{x,y,i} \cdot r_i < (c_{x,y} - \epsilon), \quad \text{for } 1 \le x, y \le n. \tag{2}$$
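A minimal feasibility check for constraint (2), under the same illustrative encoding sketched above (our own names and shapes, not the paper's code):

```python
# Checks constraint (2) for every HM: sum of demands < capacity - epsilon.
import numpy as np

N, M_VMS, EPS = 4, 6, 0.05
a = np.random.randint(0, 2, size=(N, N, M_VMS))        # candidate assignment A
c = np.random.uniform(0.5, 1.0, size=(N, N, 4))        # free capacities C
r = np.random.uniform(0.05, 0.2, size=(M_VMS, 4))      # VM demands R

used = np.tensordot(a, r, axes=([2], [0]))             # (N, N, 4) usage per HM
feasible = np.all(used < c - EPS, axis=2)              # Eq. (2), all four resources
k = int(np.count_nonzero(a.sum(axis=2) > 0))           # active HMs, cf. Eq. (1)
print(f"feasible HMs: {feasible.sum()}/{N*N}, active HMs: {k}")
```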
Finally, each VM v_i is associated with a weight ω_i, which represents the criticality of the SLA requirements to be met. The allocation process has to privilege the VMs that have the highest weight ω_i.

4. The meta-heuristic approach

This section first introduces a fuzzy linguistic model for formalizing a generic SLA [28] established between the cloud customer and the cloud provider (Section 4.1). Then, the CRO algorithm, based on coral-reef behavior, is adopted to simulate the continuous requests for new cloud resources that trigger the resize, replication and migration processes characterizing cloud data center activities (Section 4.2). Last, in Section 4.3, we describe how to use a game theory-based approach to identify the best VM allocation scheme with respect to the requirements formalized in the previous section.

4.1. SLA formalization by using linguistic label sets

When a customer gains access to cloud-based services according to a pay-by-use model, starting from its specific requirements, it fills out an SLA with the service provider, whose satisfaction is continuously monitored by the SLA management system [29]. Such a set of requirements formally documents the needed service(s), the performance expectations, and the responsibilities and limits between cloud service providers and their customers. At the moment, there is no standard that univocally determines the structure, format, syntax and semantics of SLA requirements; nor has the list of important criteria within an SLA been established. Currently, the only available standard supporting both a formal representation of SLAs and a protocol for their automation is WS-Agreement (WSAG) [30], which was born in the context of GRID computing and has been widely adopted, in the context of many cloud-oriented FP7 projects (e.g., SPECS, Contrail, mOSAIC, Optimis, Paasage), to represent SLAs in the cloud environment. The interest in WSAG has led in the last years to the introduction of several extensions to the devised SLA lifecycle and to the development of techniques for its automation [31,32]. Nevertheless, WSAG only prescribes the main elements of the agreement and leaves many details to custom implementations. Therefore, disputes often arise on the structure of the agreements, and on how the provided quality of service is monitored against the agreed SLA requirements. The European Commission has issued a set of guidelines for SLAs in cloud computing [33],
developed by the Cloud Select Industry Group (CSIG), as a strategic way to help EU businesses in using the cloud. Another well-known initiative in the area of cloud SLA standardization is being carried out by ISO/IEC JTC 1/SC38 on the 19086 Information Technology (Cloud Computing) Service Level Agreement (SLA) Framework and Terminology [34]. Following these initiatives, several projects have recently been funded by the EU to further investigate the formalization and management of SLAs. In the last years, some efforts have been devoted, by both academic and industrial organizations, to the introduction and formalization of security-related parameters in SLAs [35]. Among these, it is worth mentioning the SPECS project [36], which focuses on Security SLAs (i.e., SLAs containing security guarantees) and aims at building a framework for the development of secure cloud applications, according to the SLA life-cycle (negotiation, enforcement and monitoring). Despite these and other similar initiatives, the current debate is far from being concluded, and the need for a standardized structure, terminology and syntax for SLAs in clouds is still an open issue. Since there is no common agreement on which properties should be considered in an SLA, we have decided to keep such properties generic, without discriminating whether they are related to performance, dependability or security. Therefore, we assume a generic formulation of SLA contracting as a set of criteria, each representing one of the possible quality properties that a customer is interested in monitoring and obtaining, such as a performance grade, a reliability degree or a security level. Correspondingly, we have a vector of n available QoS properties for an SLA binding between the jth customer, namely C_j, and a provider, for a given task, namely τ_{C_j}: QoS(τ_{C_j}) = {q_1, q_2, ..., q_n}. We assume that each customer, before submitting a task, has provided to the cloud platform an agreed SLA binding in the form of a set of values below which the quality properties should not fall during the task execution. Specifically, if we indicate with f_i the monitored value assumed by the ith quality property at run time, we must have f_i ≥ q_i. The generality of our SLA formalization involves not only the number and type of the quality properties, but also the values that they can assume. Specifically, there are some properties that naturally have numeric expressions, such as the performance grade, indicated in terms of the latency in completing some particular actions, or the reliability degree of the cloud platform in completing the tasks (expressed in terms of the number of nines of reliability). Such a quantitative expression of the quality properties is naturally obtained by the monitoring activities operated by the SLA management system, but it is difficult to handle for human operators, especially if they are not experts in these contexts. Not all operators may be aware of the meaning of nines of reliability, or be able to express an acceptable number of nines for their applications. Considering that SLAs are specified by humans, it is necessary to have a more human-friendly, qualitative approach to the expression of these properties. For this reason, in this work we have adopted linguistic labels, such as adjectives like LOW or HIGH, to express the value of a particular quality property required by the cloud customer.
In addition, we use fuzzy linguistic modeling theory, where a fuzzy set is associated with a given linguistic label [37], taken from a properly-defined set of terms, such as the following one of five terms:

$$S = \{s_0: N\ (NONE),\ s_1: L\ (LOW),\ s_2: M\ (MEDIUM),\ s_3: H\ (HIGH),\ s_4: P\ (PERFECT)\} \tag{3}$$
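As an illustration, anticipating the balanced triangular semantics and the centroid defuzzification described in the following paragraphs, the five-term set of Eq. (3) could be encoded as below (a sketch under our own assumptions: uniformly spaced triangular membership functions on [0, 1]):

```python
# Five-term balanced linguistic label set with triangular fuzzy sets on [0, 1].
# Shapes and centers are illustrative assumptions consistent with Section 4.1.
LABELS = ["NONE", "LOW", "MEDIUM", "HIGH", "PERFECT"]
CENTERS = [i / 4 for i in range(5)]   # uniform partition: 0, 0.25, 0.5, 0.75, 1
HALF_WIDTH = 0.25                     # adjacent triangles overlap at 0.5 membership

def membership(label: str, x: float) -> float:
    """Triangular membership degree of crisp value x in the fuzzy set of label."""
    c = CENTERS[LABELS.index(label)]
    return max(0.0, 1.0 - abs(x - c) / HALF_WIDTH)

def defuzzify(label: str) -> float:
    """Centroid defuzzification: for a symmetric triangle it is its center."""
    return CENTERS[LABELS.index(label)]

print(membership("MEDIUM", 0.4))   # 0.6
print(defuzzify("HIGH"))           # 0.75
```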
In any linguistic approach to information modeling, in addition to the linguistic descriptors of the adopted terms, we need to define the associated semantics, which is typically done by means of fuzzy sets defined on the [0, 1] interval, usually described by a membership function µ_Ã with a triangular, trapezoidal or other shape. If we consider a numeric value for a given quality property,
then the function µ_Ã specifies the membership of such a number in the vaguely defined set A, with an output within the interval [0, 1]. An example of a balanced linguistic label set of five terms, as in Eq. (3), i.e., with terms that are uniformly and symmetrically distributed in the [0, 1] interval, together with the underlying fuzzy sets, is depicted in Fig. 1.

Fig. 1. The set of five terms in Eq. (3) and its semantics as fuzzy sets.

Typically, the tuning of the parameters characterizing the membership functions of the linguistic terms within a fuzzy variable can be made by experts, and must be tailored to the specific characteristics of the measure it represents. Since we assume a balanced linguistic term set, we adopt a partition of the interval [0, 1] into even parts among the fuzzy sets of the adopted linguistic terms. Since we have formulated the set of requirements in an SLA as fuzzy linguistic labels, we have the problem of comparing the crisp numeric values returned by the SLA management system with these requirements. Such a problem can be resolved by a proper process converting a fuzzy linguistic label into a crisp number, called defuzzification [38], namely φ(l), where l is a fuzzy linguistic label in the label set Λ. Defuzzification of a linguistic label is performed by using the centroid method, considering the membership function of the linguistic term as the input of the operation. Such a method determines the geometric center of the membership function over the x-axis. Typically, a required property can assume a single linguistic label, or even a set of them, each with a given membership degree. In the second case, defuzzification is obtained by computing the weighted median point among the centroids obtained for each linguistic label, where the weights are the membership degrees of each label. Both crisp numbers must be within the [0, 1] interval; if the values of f_i are not within such an interval, they can be normalized as follows. Let us assume that the admissible values for a given monitored property f_i lie between B^{(i)}_{lower} and B^{(i)}_{upper}; we can convert it into an attribute with values between 0 and 1, according to the following transformation:
$$\bar{f}_i = \begin{cases} \dfrac{f_i - B^{(i)}_{lower}}{B^{(i)}_{upper} - B^{(i)}_{lower}} & \text{for a positive property} \\[2ex] \dfrac{B^{(i)}_{upper} - f_i}{B^{(i)}_{upper} - B^{(i)}_{lower}} & \text{for a negative property} \end{cases} \tag{4}$$
where a positive property means that high values of such a characteristic indicate a better service (e.g., a higher number of nines indicates a higher reliability degree), while a negative property has the opposite interpretation (e.g., a higher completion time in milliseconds indicates worse performance). After we have brought all the involved measures, i.e., f_i and q_i for all i = 0, ..., n, into a comparable form, we are able to compute a satisfaction degree, which is a measure of the satisfaction of the SLA properties by the current task settings in the cloud platform. If such an index is negative, then the quality of the provided cloud service does not meet the customer requirements and some changes are needed. Specifically, such an index can be formulated as follows:
$$\sigma(\tau_{C_j}) = \sum_{i=0}^{n} \phi(\omega_i) \cdot (\phi(q_i) - \bar{f}_i) \tag{5}$$
where ω_i is a weight given to the ith property. Such a weight is a measure of the importance of the satisfaction of a given property for the consumer, and it is formulated in terms of a fuzzy linguistic label provided by the consumer.

4.2. Simulating cloud elasticity by a Coral Reefs-based ecosystem approach

Coral-Reefs Optimization (CRO) is a meta-heuristic approach classified as a bio-inspired algorithm, similarly to ant colony optimization [39] and artificial bee colony (ABC) [40] schemes. It artificially simulates the behavior of a coral-reef ecosystem to tackle optimization problems. The algorithm mimics the processes of sexual and asexual coral reproduction, as well as the process of coral-reef formation, where a fight for space occurs. The goal is to find a place in the reef where a coral can settle, considering that space in the reef is a limited resource compared with the high reproduction rate of the corals. Therefore, corals must fight to obtain a place in the reef. In particular, in each step of the CRO algorithm, a set of coral larvae is generated, and each larva must fight to obtain a place in the reef. The result of this fight is that some corals die, because they cannot defend the place where they are located, and not all larvae find a place in the reef where they can settle [41]. This depends on how strong the larva is, i.e., how good the corresponding solution to the optimization problem is. Specifically, the CRO algorithm simulates the life of the corals in a reef Λ, modeled as a grid of N × M elements, during different generations. We assume that each grid element (x, y) of Λ is able to allocate a micro-colony of corals Φ_{(x,y)}, representing a solution to the optimization problem, which is encoded as a string of numbers in a given alphabet Γ. According to the considered scenario, the reef represents the cloud infrastructure, where each grid element (x, y) is an HM, and the micro-colony Φ_{(x,y)} represents the set of hosted VMs A = {a_{x,y,i}}. The initialization of the CRO algorithm consists of randomly assigning some elements in the reef to be occupied by corals and some other elements to be empty, where new corals can freely settle and grow in the future. The ratio between free and occupied elements determines the initial population density ρ_0 (with 0 < ρ_0 < 1). By analogy, during the initialization process, only some HMs host one or more VMs, whereas the others are free. CRO is based on the fact that the reef will progress as long as stronger corals survive, whereas less healthy corals perish. This means that the cloud resource utilization changes over time: VMs are replicated and migrated, new VMs are added, and others are removed (turned off).
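A minimal sketch of this initialization step, under illustrative assumptions (the grid size, density and encoding are ours):

```python
# CRO-style reef initialization: each cell is an HM, occupied cells host VM colonies.
import random

N, M = 5, 5          # reef grid (HMs)
RHO_0 = 0.4          # initial occupation density, 0 < rho_0 < 1
NUM_VMS = 10

reef = {}            # (x, y) -> set of hosted VM ids (the micro-colony)
cells = [(x, y) for x in range(N) for y in range(M)]
occupied = random.sample(cells, int(RHO_0 * len(cells)))
for cell in occupied:
    reef[cell] = set()

# Scatter the initial VMs over the occupied HMs only.
for vm in range(NUM_VMS):
    reef[random.choice(occupied)].add(vm)

print({cell: sorted(vms) for cell, vms in reef.items() if vms})
```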
After the initialization process, reef formation, in terms of coral reproduction, is artificially simulated by the CRO algorithm until an objective criterion is achieved. Several operators are used to imitate coral reproduction, modeling the following five reproduction procedures and phenomena:
1. Brooding (internal sexual reproduction): selected couples form a coral larva by sexual crossover. It consists of the generation of new larvae by copying and mutating existing corals. This reproduction procedure generates a coral larva that is a copy of its parent, randomly mutated. The produced larva is then released to find a location in the reef.
2. Budding (asexual reproduction): the budding model consists of the formation of a coral larva by means of a random mutation of the brooding-reproductive coral (self-fertilization, considering hermaphrodite corals). All the corals are sorted according to their health; from this sorted list, a fraction of the best corals are duplicated and try to settle in the reef.
3. Broadcast spawning (external sexual reproduction): new larvae may come from outside the reef and try to settle in it.
4. Catastrophic events in the reef: at the end of each reproduction step p, a small number of corals in the reef can be depredated by polyps. The depredation operator is applied with a very small probability P_p at each step p.
5. Natural death: corals may die during the reef formation phase, thus freeing space in the reef for the next coral generation.
Note that the corals that have reproduced with one method in the current generation cannot be selected to reproduce with other methods. After the sexual and asexual reproduction, the produced larvae have to be placed to grow in the reef. Therefore, in the considered cloud context, a larva represents either a request for additional cloud resources from a hosted VM, a request to replicate a VM instance, or a request for resources from a new VM. The previous phenomena simulate, respectively, the following behaviors:
1. brooding: the co-residency of a VM i with other tenants can require additional resources, activating a decision process for choosing whether to re-size VM i or migrate a hosted VM from the host machine. In the latter case, it is necessary to decide which VM migrates, and on which HM it has to be deployed;
2. budding: reducing or increasing the load can trigger an auto-scaling process, which can involve either the turn-off of a VM instance or the activation of a replicated VM instance, respectively; in the second case, it is only necessary to decide on which HM it has to be deployed;
3. broadcast: the new larvae represent requests to host new VMs, which have to be allocated on the most appropriate HMs;
4. catastrophic events: a software or hardware failure occurs on an HM, i.e., one or all the VMs running on the host machine fail; in this case, all the involved VMs must be replicated on other hosts, and eventually the failed HM is removed from the cloud;
5. natural death: a VM has completed its work or is turned off.
Therefore, the reef is populated by VMs, where the set of VMs deployed on the cloud HMs represents a possible solution I to the modeled problem. Each coral i is labeled with an associated health evaluation function f(Φ_{(x,y)i}) : I → R, which measures the fitness of such a coral within each micro-colony and represents the problem's objective function.
Such a function is equal to the satisfaction index σ(τ_{C_j}) that we have defined in Eq. (5). It is used for comparing corals during the selection or reproduction processes. This means that each VM requires a certain amount of cloud resources (e.g., CPU, memory and storage) to satisfy the SLA contracted with the cloud customer; f(Φ_{(x,y)i})
measures whether the VM i can be hosted by a physical machine that satisfies such requirements. The described optimization problem is faced with the Game Theory approach described in the next section. According to the coral model, each larva, located either in a free space or in an occupied one, has to fight against the existing corals, and only the strongest individuals survive. A VM can be migrated from its hosting HM either to acquire more resources, to make resources available to other VMs with more critical SLA requirements, or to minimize the number of active HMs. Moreover, in the considered cloud context, if the cloud is public, we can assume that the resources are unlimited and all the requests can always be satisfied (larvae cannot be depredated). If the objective is the optimization of resource utilization in a private cloud, we can assume that the resources are limited and some requests may not be successfully allocated (larvae are depredated) during the current allocation step; the depredated larvae are queued for the next allocation step. Finally, the allocation process must minimize the number of migrations per step.

4.3. Distributed strategic resource allocation with Game Theory

The abstraction provided by the CRO algorithm disciplines and models the competing behavior of the VMs within a cloud infrastructure, aimed at obtaining the resources needed to satisfy both the consumers' SLA requirements and the cloud provider's benefit. Therefore, according to the problem formulated in the previous section, the issue to be treated is determining: (1) which VM has to be migrated from a certain HM, and towards which HM such a VM should be migrated or replicated, and (2) where to allocate new VMs. These objectives are pursued with the intent of optimizing the satisfaction degree for all the consumers and minimizing the number of active HMs. As a consequence, our game-theoretic scheme is applied in the two mentioned situations. In the following, we solve the migration case, since both the replication and the allocation of new VMs are a naive relaxation of the first one (there, the problem is only to determine the destination of the VM, while in the migration case both the VM and the destination must be determined). Such a problem is solved in a strategic manner by means of Game Theory [42], which has already been identified as a really promising framework for managing resource reallocation within clouds [43]. The overall goal of the approach, called in the literature the social welfare, is to maximize the satisfaction of customers' SLA requirements by the capabilities provided by the selected HMs, while minimizing the number of globally involved HMs in the cloud infrastructure. Specifically, we assume that the set of VMs in an HM are the players of a one-shot game, in which each player has a set of strategies among which it can choose an action. The strategies of the ith player are binary values forming the set S^{p_i} = {s_∅, s_0, ..., s_n}, where s_∅ indicates the strategy of not migrating, while s_j is the strategy of migrating towards the jth HM of the cloud, which hosts n usable HMs. When a player selects a given strategy, namely s̄, within its strategy set, it has to pay a certain cost, namely Φ^p(s̄).
Dually, the player receives the payoff φ^p(s̄), which is the gain achievable by a player following a given strategy, considering the costs implied by the chosen strategy. In our case, the game is non-cooperative, since the players are selfish: there is no direct communication among the players, and each one only cares about minimizing its own cost, or dually maximizing its payoff, without considering the state of the other players (with the eventuality of damaging them, even if not intentionally). In our game, we have to define the payoffs so that the strategy maximizing the payoff of each player is the one achieving the optimal social
welfare of our optimization problem. Specifically, the payoff of a given strategy can be formalized as follows:
$$\phi^p(\bar{s}) = s_\emptyset \cdot \sigma(s_\emptyset) + \bar{s}_\emptyset \cdot \sum_{j=0}^{n} s_j \cdot [\sigma(s_j) - \delta_{+HM} - \Phi^p(s_j)] \tag{6}$$

where all these contributions to the payoff function have values within the interval [0, 1]. σ(·) indicates the satisfaction degree expressed by Eq. (5), considering the strategy in input, i.e., migration or no migration; δ_{+HM} is the cost to be paid if a new HM is activated as the result of the migration; and Φ^p(s_j) expresses the cost of the migration from the current HM to the target one. The cost value δ_{+HM} has to be properly selected in order to discourage the activation of a new HM; it can be modeled as a constant, or assumed to be a function of the actual number of active HMs, so as to make it higher as the number of HMs increases. This is needed to keep the number of active HMs minimal. In addition, the cost Φ^p(s_j) is composed of three different contributions to be properly considered:

• The VM that has to migrate experiences a temporary unavailability, which causes an inefficiency and a delay in the completion of the allocated task. Such an effect is modeled by a proper cost function, which depends on the VM to be migrated and on the criticality of the task running on it. In fact, the bigger the VM to be migrated, the higher the inefficiency to be experienced. Analogously, the more critical the task allocated to the VM, the more serious the inconvenience caused by the migration. Such a cost can be formulated by assuming a constant value for the inefficiency, weighted by the sum of the importance and the weight of the VM to be migrated:

$$\Phi^p(s_j)_{contr1} = (\omega_1 + \omega_2) \cdot cost_{migr} \tag{7}$$

where ω_1 and ω_2 are real numbers in the [0, 1] interval given by the defuzzification of a fuzzy linguistic label, respectively indicating the importance and the weight of the player. The first one is inherited by the player from the task allocated to its VM, while the second one is assigned at the allocation time of the VM and is determined according to threshold-based rules: if the resource request of a VM is within a given interval [r^i_{down}, r^i_{up}], then the label l_i is assigned to ω_2.

• There is a cost to pay in order to physically move the VM and allocate it to the destination HM (e.g., in terms of energy and network bandwidth). Such a cost is a proper function of the weight of the VM to be migrated:

$$\Phi^p(s_j)_{contr2} = \omega_2 \cdot cost_{move} \tag{8}$$

• When a VM is migrated to a new HM, it causes an inefficiency to the other VMs already hosted by the destination HM. Such a situation is modeled by a proper cost function, depending on the number and type of the hosted VMs. Moreover, each HM is characterized by a capacity C, which has to be split among the hosted VMs. We can have a situation where the residual capacity is not sufficient for the migrated VM, and/or the migration may cause a considerable reduction of the capacity available for an eventual resizing of one of the hosted VMs. Such a case can be accounted for in the cost of migration by a function whose value grows as the residual capacity decreases (due to the migration):

$$\Phi^p(s_j)_{contr3} = (\omega_2 - \kappa) \cdot cost_{alloc} + \kappa \cdot \epsilon(\kappa) \tag{9}$$

where κ indicates the residual capacity of the jth HM at the time of the decision, and cost_{alloc} is a constant. Such a formulation yields the highest value when the weight of the migrating VM is high and the residual capacity is low. On the contrary, the cost is negative, i.e., we have an incentive, when the weight of the VM is low and the residual capacity is high (this is done to promote the selection of a scarcely used active HM). In the other two cases, the cost assumes a null value when the residual capacity is high and the weight of the VM is high, or a value ϵ when the residual capacity is low and the weight of the VM is low. ϵ can be properly selected in order to discourage a migration towards an HM with a limited residual capacity, despite the weight of the migrating VM being low; for this reason, ϵ is a function of the κ parameter.

Overall, the cost of the migration is formulated as follows:

$$\Phi^p(s_j) = (\omega_1 + \omega_2) \cdot cost_{migr} + \omega_2 \cdot cost_{move} + (\omega_2 - \kappa) \cdot cost_{alloc} + \kappa \cdot \epsilon(\kappa) \tag{10}$$
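A sketch of the migration cost of Eq. (10) follows; the cost constants and the shape of ϵ(κ) are illustrative assumptions, since the paper leaves them as design parameters:

```python
# Migration cost of Eq. (10); cost constants and eps(kappa) are placeholders.
COST_MIGR, COST_MOVE, COST_ALLOC = 0.2, 0.1, 0.3

def eps(kappa: float) -> float:
    """Penalty discouraging migration towards HMs with low residual capacity
    (assumed shape: grows as kappa shrinks)."""
    return 0.5 * (1.0 - kappa)

def migration_cost(omega1: float, omega2: float, kappa: float) -> float:
    """omega1: task importance, omega2: VM weight, kappa: residual capacity of
    the target HM; all defuzzified to [0, 1]."""
    return ((omega1 + omega2) * COST_MIGR          # Eq. (7): unavailability
            + omega2 * COST_MOVE                   # Eq. (8): physical move
            + (omega2 - kappa) * COST_ALLOC        # Eq. (9): impact on co-tenants
            + kappa * eps(kappa))

print(migration_cost(omega1=0.75, omega2=0.5, kappa=0.8))
```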
The goal of the game is achieved by identifying a point of equilibrium among the players, named Nash equilibrium in the case of non-cooperative games, and based on the concept of dominant strategy: given a certain strategy s̄ ∈ S, it is not profitable for a player to select a strategy different from the one in the current profile, assumed dominant, since a different strategy would not decrease (and might even augment) the incurred costs, so the player has no incentive to change strategy:
$$\exists s \in S : \forall p \in P,\ \forall x \in S^p,\quad \bar{\phi}^p(s^p, s^{-p}) \ge \bar{\phi}^p(x, s^{-p}) \ \Rightarrow\ s \text{ is a Nash equilibrium} \tag{11}$$
Specifically, if we consider that the first decision to be taken is whether to migrate or not, and then to identify the most suitable HM towards which the deployment must take place, we have that no migration is profitable if the following condition holds:
$$\sigma(s_\emptyset) \ge \sigma(s^*) - \delta_{+HM} - \Phi^p(s^*) \tag{12}$$
where s^* is the strategy of migrating towards the most convenient HM, i.e., the one that solves the following optimization problem:

$$s^* = \max_{i=0,\ldots,n} s_i \cdot [\sigma(s_i) - \delta_{+HM} - \Phi^p(s_i)] \tag{13}$$
Eq. (12) can be rewritten in order to obtain a condition for deciding whether to migrate or not:

$$\sigma(s^*) - \sigma(s_\emptyset) \ge \delta_{+HM} + \Phi^p(s^*) \tag{14}$$
which means that it is convenient to migrate only if the improvement obtained, in terms of better satisfaction of the customer's SLA requirements, is higher than the costs to be paid for this operation. Moreover, such a solution minimizes the number of jointly activated HMs, thanks to the cost of the additional HM inserted in Eq. (13).
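The resulting decision rule of Eqs. (12)–(14) can be sketched as follows; the satisfaction values and costs are illustrative inputs, not measurements from the paper:

```python
# Migration decision per Eqs. (12)-(14): migrate only if the payoff gain
# exceeds the migration cost plus the (possible) new-HM activation cost.
def best_migration(sigma_stay, candidates, delta_hm):
    """candidates: list of (hm_id, sigma_if_migrated, migration_cost, activates_new_hm).
    Returns the target HM id, or None when staying is the dominant strategy."""
    best_hm, best_gain = None, 0.0
    for hm_id, sigma_mig, cost, new_hm in candidates:
        gain = sigma_mig - sigma_stay - cost - (delta_hm if new_hm else 0.0)
        if gain > best_gain:                 # Eq. (14): improvement beats the costs
            best_hm, best_gain = hm_id, gain
    return best_hm

cands = [("hm3", 0.7, 0.15, False), ("hm7", 0.9, 0.2, True)]
print(best_migration(sigma_stay=0.5, candidates=cands, delta_hm=0.3))  # -> 'hm3'
```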
5. Experimental results

According to the proposed Coral Reefs-based ecosystem approach, each grid element (x, y) of the reef (modeled in our context as a vector A = {a_{x,y,i}}) represents an HM h_{x,y}. We assume that the space available (resource capacity c_{x,y}) within each element h_{x,y} is different, i.e., the HMs are heterogeneous. Moreover, the HM h_{x,y} can host a set of VM instances (micro-colony), such that Eq. (2) is verified. Thus, at each step of the simulation process, the CRO algorithm computes which VM instances require more resources (brooding), activating a decision process for choosing whether to re-size the VM (if there are still resources available on the HM) or to migrate one of the instances hosted on the involved HM. We assume an increase or a reduction of resources that is no greater than 30% of the resources already used by the involved VM. In order to make the simulated scenario more realistic, we assume that the increase of resources may also be negative.
Fig. 2. The behavior of the decision process.
Fig. 3. Number of migration and replication requests to be satisfied during each simulation step p_i.
In this case, the decision process implies a resize (reduction) of the resources used by the involved VM. Moreover, assuming that some kind of auto-scaling policy has been contracted in the SLA, the algorithm computes which instances are subject to a load increase or reduction (budding), which triggers either the turn-off of a replicated VM instance (scaling down) or the activation of a new VM instance (scaling up). Finally, the algorithm decides which VM instances have to be removed (by natural death or catastrophic events), and how many have to be added (broadcast). Then, by using Game Theory, we identify which VMs of the involved HMs must be migrated, the destination HMs on which the migrated or replicated instances have to be deployed, as well as the destination HMs on which the new VMs can be deployed. During each simulation step, the amount of free resources available on each HM is updated according to the decision process. In order to make the experimental results clearer, we assume that, at each step, only one new VM is added to the cloud, and no VMs are removed. Therefore, the number N_{M/R} of VMs to be reallocated is the sum of migration and replication requests, N_M + N_R + 1. Moreover, at step 0, we assume that five VMs are already hosted in the cloud, deployed on some HMs, and that the allocation of each new VM i on the host h_{x,y} requires an amount of resources r_i included in the range:
$$(c_m \cdot 0.1) < r_i < (c_m \cdot 0.3) \tag{15}$$
where c_m represents the average resource capacity c_{x,y} of the HMs available in the cloud ecosystem. The results reported in Fig. 2 show the behavior of the decision process. In particular, for each simulation step, the number of migrations (M) and replications (R) is reported, together with the number of involved instances (VMs) and of the physical hosts on which they run (HMs). Fig. 3 shows, at each step p_i, the total number of migration and replication requests (N_{M/R}) generated by the Coral-Reefs bio-inspired algorithm, which must be satisfied. Fig. 4 shows, for each simulation step p_i, the simulation time necessary for computing the allocation schema (i.e., to reallocate in the cloud the VM allocation requests computed by the Coral-Reefs algorithm).
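The experimental procedure can be summarized by the following driver loop (a sketch of our reading of the setup; the step count and the request generator are illustrative stubs):

```python
# Per-step simulation driver: CRO generates migration/replication requests,
# the game-theoretic allocator places them. Generator and allocator are stubs.
import random

def cro_step():
    """Stub: number of migration (M) and replication (R) requests at this step."""
    return random.randint(0, 3), random.randint(0, 2)

def allocate(n_requests: int) -> None:
    """Stub for the game-theoretic reallocation of n_requests VMs."""
    pass

num_vms = 5                      # five VMs hosted at step 0
for step in range(1, 21):
    m, r = cro_step()
    n_mr = m + r + 1             # one new VM is added at each step
    allocate(n_mr)
    num_vms += r + 1             # VM(p_i) = VM(p_{i-1}) + R + 1
    print(f"step {step}: M={m}, R={r}, reallocated={n_mr}, hosted VMs={num_vms}")
```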
Fig. 4. Time needed to reallocate the allocation requests computed at each step p_i.
Fig. 5. Number of active HMs and of VM instances hosted in the cloud ecosystem.
Finally, Fig. 5 shows graphically how the number of involved physical hosts (i.e., the number of active HMs that host at least one VM) and the number of VM instances hosted in the cloud ecosystem (i.e., VM(p_i) = VM(p_{i−1}) + R + 1) increase during the simulation process.

6. Conclusions

In this work, a new bio-inspired and game theoretic-based meta-heuristic scheme for managing elastic resource reallocation in cloud infrastructures has been presented. Specifically, the proposed solution leverages an evolutionary algorithm based on the observation of reef structure and coral reproduction, showing a very interesting behavior in simulating the continuous requests for new cloud resources that trigger the resize, replication and migration processes characterizing cloud data-center activities. It also exploits the competition dynamics among the strategic users (with their SLAs) and service providers (with their revenue maximization objectives) to converge towards socially optimal equilibrium solutions that are able to satisfy the apparently contradicting interests of the involved parties. In particular, a game theory-based approach is used to identify the best VM reallocation schema with respect to the conflicting performance objectives of the customers and the cloud provider,
by optimizing a global objective function that takes into account the individual interests of all the involved entities. The CRO-based model strongly characterizes the proposed solution by simulating interesting elasticity properties of the cloud paradigm. Due to its evolutionary nature, in which the collective emergent behavior of the ''reefs'' exhibits a great deal of global intelligence capable of complex tasks, the resulting schema can provide good scalability in the presence of a large number of virtual machines and host-related resources available in the cloud. We analyzed the performance of the proposed approach by implementing a simple simulation-based proof-of-concept scenario that considered both providers and users, together with their available resources, capacities and SLAs. The experimental results validated our hypothesis that a combined bio-inspired and game theoretic-based meta-heuristic approach not only achieves a satisfactory solution in terms of adaptiveness and elasticity, but can also lead to significant performance improvements in terms of convergence time, when the problem scales towards very large clouds with plenty of machines and VMs to be reallocated.

References

[1] W. Lin, B. Peng, C. Liang, B. Liu, Novel resource allocation model and algorithms for cloud computing, in: Proc. of the 4th Int. Conf. on Emerging Intelligent Data and Web Technologies, 2013, pp. 77–82.
[2] M. Ficco, C. Esposito, H. Chang, K.-K.R. Choo, Live migration in emerging cloud paradigms, IEEE Cloud Comput. (2016) 12–19.
[3] L. Badger, R. Patt-Corner, J. Voas, Draft cloud computing synopsis and recommendations, NIST Spec. Publ., vol. 146. Available at: http://csrc.nist.gov/publications/drafts/800-146/Draft-NIST-SP800-146.pdf.
[4] G. Galante, L.C.E. de Bona, A survey on cloud computing elasticity, in: Proc. of the IEEE/ACM 5th Int. Conf. on Utility and Cloud Computing, 2012, pp. 263–270.
[5] M. Garey, D. Johnson, Computers and Intractability: A Guide to the Theory of NP-completeness, W.H. Freeman and Co, 1979.
[6] S. Salcedo-Sanz, A. Pastor-Sanchez, D. Gallo-Marazuela, A. Portilla-Figueras, A novel coral reefs optimization algorithm for multi-objective problems, in: Proc. of the Intelligent Data Engineering and Automated Learning, in: LNCS, vol. 8206, Springer Berlin Heidelberg, 2013, pp. 326–333.
[7] D. Agrawal, A. El Abbadi, S. Das, A.J. Elmore, Database scalability, elasticity, and autonomy in the cloud, in: Proc. of the 16th Int. Conf. on Database Systems for Advanced Applications, Springer-Verlag, 2011, pp. 2–15.
[8] L.M. Vaquero, L. Rodero-Merino, R. Buyya, Dynamically scaling applications in the cloud, SIGCOMM Comput. Commun. Rev. 41 (2011) 45–52.
[9] R.N. Calheiros, C. Vecchiola, D. Karunamoorthy, R. Buyya, The Aneka platform and QoS-driven resource provisioning for elastic applications on hybrid clouds, Future Gener. Comput. Syst. 28 (6) (2011) 861–870.
[10] Z. Shen, S. Subbiah, X. Gu, J. Wilkes, CloudScale: elastic resource scaling for multi-tenant cloud systems, in: Proc. of the 2nd Symposium on Cloud Computing, (SOCC), ACM, 2011, pp. 1–14.
[11] S. Ricciardi, F. Palmieri, J. Torres-Vials, B. Di Martino, G. Santos-Boada, J. Sol-Pareta, Green data center infrastructures in the cloud computing era, in: Handbook of Green Information and Communication Systems, 2013, pp. 267–293.
[12] Z. Gong, X. Gu, J. Wilkes, PRESS: Predictive elastic resource scaling for cloud systems, in: Proc. of the 6th Int. Conf. on Network and Service Management, (CNSM), IEEE, 2010, pp. 9–16.
[13] W. Dawoud, I. Takouna, C. Meinel, Elastic VM for cloud resources provisioning optimization, in: Advances in Computing and Communications, Vol. 190, Springer Berlin Heidelberg, 2011, pp. 431–445.
[14] J.O. Fito, I.G. Presa, J.G. Fernandez, SLA-driven elastic cloud hosting provider, in: Proc. of the 18th Euromicro Conf. on Parallel, Distributed and Network-Based Processing, (PDP), IEEE, 2010, pp. 111–118.
[15] N. Roy, A. Dubey, A. Gokhale, Efficient autoscaling in the cloud using predictive models for workload forecasting, in: Proc. of the 4th Int. Conf. on Cloud Computing, (CLOUD), IEEE, 2011, pp. 500–507.
[16] N. Vasic, D. Novakovic, S. Miucin, D. Kostic, R. Bianchini, DejaVu: accelerating resource allocation in virtualized environments, in: Proc. of the 17th Int. Conf. on Architectural Support for Programming Languages and Operating Systems, (ASPLOS), ACM, 2012, pp. 423–436.
[17] T. Knauth, C. Fetzer, Scaling non-elastic applications using virtual machines, in: Proc. of the 4th Int. Conf. on Cloud Computing, CLOUD, 2011, pp. 468–475. [18] K. Lazri, S. Laniepce, J. Ben-Othman, When dynamic VM migration falls under the control of vm users, in: Proc. of the IEEE 5th Int. Conf. on Cloud Computing Technology and Science, CloudCom, 2013, pp. 395–402. [19] M. Mishra, A. Das, P. Kulkarni, A. Sahoo, Dynamic resource management using virtual machine migrations, IEEE Commun. Mag. 50 (9) (2012) 34–40. [20] S. Ricciardi, D. Careglio, G. Santos-Boada, J. Sol-Pareta, U. Fiore, F. Palmieri, Saving energy in data center infrastructures, in: Proceedings—1st International Conference on Data Compression, Communication, and Processing, CCP 2011, 2011, pp. 265–270. [21] E. Arzuaga, D.R. Kaeli, Quantifying load imbalance on virtualized enterprise servers, in: Proc. of the 1st Int. Conf. on Performance Engineering, (WOSP/SIPEW), ACM, 2010, pp. 235–242. [22] A. Gulati, A. Holler, M. Ji, G. Shanmuganathan, C. Waldspurger, X. Zhu, Vmware distributed resource management: Design, implementation, and lessons learned, 2012. [23] X. Zhang, Z.-Y. Shae, S. Zheng, H. Jamjoom, Virtual machine migration in an over-committed cloud, in: Proc. of the IEEE Network Operations and Management Symposium, NOMS, 2012, pp. 196–203. [24] N.M. Calcavecchia, O. Biran, E. Hadad, Y. Moatti, VM placement strategies for cloud scenarios, in: Proc. of the IEEE 5th Int. Conf. on Cloud Computing, CLOUD, 2012, pp. 852–859. [25] Andrei Sfrent, Florin Pop, Asymptotic scheduling for many task computing in big data platforms, Inform. Sci. 319 (20) (2015) 71–91. [26] Mihaela-Andreea Vasile, Florin Pop, Radu-Ioan Tutueanu, Valentin Cristea, Joanna Kolodziej, Resource-aware hybrid scheduling algorithm in heterogeneous distributed computing, Future Gener. Comput. Syst. 51 (2015) 61–71. [27] M. Ficco, B. Di Martino, R. Pietrantuono, S. Russo, Optimized task allocation on private cloud for hybrid simulation of large-scale critical systems, Future Gener. Comput. Syst. (2016) http://dx.doi.org/10.1016/j.future.2016. 01.022. available online at: http://www.sciencedirect.com/science/article/pii/ S0167739X16300061. [28] C. Esposito, A. Castiglione, F. Palmieri, M. Ficco, Trust management for distributed heterogeneous systems by using linguistic term sets and hierarchies, aggregation operators and mechanism design, Future Gener. Comput. Syst. (2015) available online at: http://www.sciencedirect.com/science/article/pii/S0167739X15003878. [29] V. Casola, A. Mazzeo, N. Mazzocca, M. Rak, A SLA evaluation methodology in service oriented architectures, Adv. Inf. Secur. 23 (2006) 119–130. [30] A. Andrieux, K. Czajkowski, A. Dan, K. Keahey, H. Ludwig, T. Nakata, J. Pruyne, J. Rofrano, S. Tuecke, M. Xu, Web services agreement specification (WSAgreement), in: Global Grid Forum, The Global Grid Forum (GGF), 2004. [31] R. Kubert, G. Katsaros, T. Wang, A RESTful implementation of the WSagreement specification, in: Proceedings of the Second International Workshop on RESTful Design, Ser. WS-REST’11, ACM, New York, NY, USA, 2011, pp. 67–72. [32] A. De Benedictis, M. Rak, M. Turtur, U. Villano, REST-based SLA management for cloud applications, in: Proceedings of REST-based SLA Management for Cloud Applications, WETICE 2015, Larnaka, Cyprus, pp. 93–98. [33] European Commission. 
[33] European Commission, Cloud Service Level Agreement Standardisation Guidelines, 2014, available online at https://ec.europa.eu/digital-agenda/en/news/cloud-service-level-agreement-standardisation-guidelines.
[34] International Organization for Standardization (ISO), ISO/IEC DIS 19086-1, June 2015, available online at http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=67545.
[35] V. Casola, A. De Benedictis, M. Rak, On the adoption of security SLAs in the cloud, in: Accountability and Security in the Cloud, in: LNCS, vol. 8937, 2014, pp. 45–62.
[36] M. Ficco, M. Rak, SLA-oriented security provisioning for cloud computing, in: Cloud Computing and Services Science, in: LNCS, vol. 367, Springer-Verlag, 2013, pp. 230–244.
[37] D. Bacciu, M.G. Buscemi, L. Mkrtchyan, Adaptive fuzzy-valued service selection, in: Proceedings of the 2010 ACM Symposium on Applied Computing, SAC, March 2010, pp. 2467–2471.
[38] T.J. Ross, Fuzzy Logic with Engineering Applications, third ed., Wiley, 2010.
[39] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Trans. Syst. Man Cybern. 26 (1) (1996) 29–41.
[40] D. Karaboga, B. Basturk, On the performance of the artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (2008) 687–697.
[41] S. Salcedo-Sanz, J.E. Sanchez-Garcia, J.A. Portilla-Figueras, S. Jimenez-Fernandez, A.M. Ahmadzadeh, A coral-reef optimization algorithm for the optimal service distribution problem in mobile radio access networks, Trans. Emerg. Telecommun. Technol. 25 (11) (2013) 1057–1069.
[42] M.J. Osborne, A. Rubinstein, A Course in Game Theory, The MIT Press, 1994.
[43] F. Palmieri, L. Buonanno, S. Venticinque, R. Aversa, B. Di Martino, A distributed scheduling framework based on selfish autonomous agents for federated cloud environments, Future Gener. Comput. Syst. 29 (6) (2013) 1461–1472.
Massimo Ficco received the degree in computer engineering from the University of Naples Federico II (Italy) in 2000 and the Ph.D. degree in information engineering from the Parthenope University of Naples in 2010. He is an assistant professor in the Department of ‘‘Ingegneria Industriale e dell’Informazione’’ at the Second University of Naples (SUN). From 2000 to 2010, he was a senior researcher at the Italian University Consortium for Computer Science (CINI). His current research interests include software engineering and architecture, cloud computing, security aspects of critical infrastructures, and mobile computing.
Christian Esposito received the graduate degree in computer engineering in 2006 and the Ph.D. degree in 2009, both from the University of Naples Federico II. Currently, he is a postdoctoral researcher at the Institute of High Performance Computing and Networking (ICAR) of the National Research Council (CNR). His main interests include mobile computing, benchmarking, publish/subscribe services, and reliability strategies for data dissemination in large-scale critical systems.
Francesco Palmieri received the M.S. and Ph.D. degrees in computer science from the University of Salerno. He was an Assistant Professor at the Second University of Naples and is currently an Associate Professor at the University of Salerno. His research interests include advanced networking protocols and architectures, and network security. He has been the director of the Networking Division of the University of Naples Federico II and contributed to the development of the Internet in Italy as a senior member of the Technical-Scientific Advisory Committee and of the CSIRT of the Italian NREN GARR. He serves as editor-in-chief of an international journal and sits on the editorial boards of several others.

Aniello Castiglione received the Ph.D. degree in computer science from the University of Salerno, Italy. Currently, he is an adjunct professor at the University of Salerno (Italy) and at the University of Naples ‘‘Federico II’’ (Italy). He received the Italian National Academic Qualification (ASN) as Associate Professor in Computer Science. He is Managing Editor of two international journals, acts as a reviewer for around 50 international journals, and has served as a Guest Editor for several journals and as a member of several editorial boards. He served as Program Chair and TPC member at around 90 international conferences, and has published more than 100 papers in international journals and conferences. One of his papers was selected as a ‘‘Featured Article’’ by the IEEE Cybersecurity initiative. He has been involved in forensic investigations, collaborating with several Law Enforcement Agencies as a consultant. He is a member of several associations, including ACM and IEEE. His current research interests include information forensics, digital forensics, security and privacy in the cloud, communication networks, and applied cryptography.