Simulation Modelling Practice and Theory 000 (2016) 1–15
Contents lists available at ScienceDirect
Simulation Modelling Practice and Theory
journal homepage: www.elsevier.com/locate/simpat
Non-deterministic security driven meta scheduler for distributed cloud organizations
Agnieszka Jakóbik a, Daniel Grzonka a,∗, Francesco Palmieri b
a Institute of Computer Science, Cracow University of Technology, Poland
b Università degli Studi di Salerno, Fisciano, Campania, Italy
Article info
Article history: Available online xxx
Keywords: Cloud computing; Cloud security; Independent batch scheduling; Genetic algorithms; Multi-agent systems
Abstract

Security is a very complex and challenging problem in Cloud organizations. Ensuring the security of operations within the cloud while also enforcing the users' own security requirements usually results in a complex tradeoff with the efficiency of the overall system. In this paper, we develop a novel architectural model enforcing cloud security, based on a multi-agent scheme and a security-aware non-deterministic Meta Scheduler driven by genetic heuristics. The model is explicitly designed to prevent Denial of Service and Timing Attacks on the cloud and has been demonstrated to be integrable within the well-known OpenStack platform. Additionally, we propose two different models for assuring users' security demands. The first is a scoring model that allows scheduling tasks only on the Virtual Machines offering a proper security level. The second model takes into account the time spent on the cryptographic operations required by each particular task. The scheduling system has been simulated in order to assess the effectiveness of the proposed security architecture, which results in increased system safety and resiliency against attacks, without significantly impacting the performance of the whole cloud environment. © 2016 Elsevier B.V. All rights reserved.
1. Introduction

In recent years, Computational Clouds have become an essential part of the modern information society. New architectural models, management strategies and techniques supporting these scalable, flexible, virtualized and geographically distributed systems are being intensively developed. Vendors offer very efficient, sophisticated tools for science, industry and business that are widely used by common users for their day-to-day activities. A Computational Cloud (CC) environment can be defined as a model of a large-scale distributed physical and virtual network with shared resources. The classic Cloud model consists of many different elements, usually interconnected through a high-performance transport network, that are strictly related to each other [1,2]. Data and tasks from Customers are uploaded to be managed, scheduled and processed by Cloud Providers. Users' data are often stored in different sites that may be geographically scattered throughout the world. Very often, a middlemen chain represented by Technical or Business Brokers is incorporated in the processing and resource management infrastructure and handled in an almost fully automated way. Therefore, different aspects of security in such complex systems become fundamental. Most of the responsibility for ensuring safety rests with the Cloud Providers that
∗ Corresponding author. E-mail address: [email protected] (D. Grzonka).
http://dx.doi.org/10.1016/j.simpat.2016.10.011 1569-190X/© 2016 Elsevier B.V. All rights reserved.
Please cite this article as: A. Jakóbik et al., Non-deterministic security driven meta scheduler for distributed cloud organizations, Simulation Modelling Practice and Theory (2016), http://dx.doi.org/10.1016/j.simpat.2016.10.011
have to provide the proper level of authentication, authorization, and confidentiality, as well as an HW/SW infrastructure that is reliable and resistant to attacks and security menaces. The basic security objectives that have to be achieved in modern Clouds are:
• preservation of confidentiality: information, data and services should be accessible only by users/entities who are explicitly authorized;
• integrity: ensuring the consistency, accuracy, and trustworthiness of all provided services and stored data over their entire life cycle;
• availability: always guaranteeing access to information and services, as well as their correct behaviour, also in the presence of hardware or software failures or security attacks. The inherent redundancy and the distribution of resources in multi-site or federated clouds provide an ideal safeguard against data losses or service interruptions.
In this scenario, Cloud security concerns the security of the Cloud infrastructure itself; Cloud computing security aims at ensuring trust in the computing services provided by the cloud; and, finally, Cloud for security involves the usage of Cloud technologies to develop and deliver security solutions for massive-scale recipients [3]. The presented paper considers the first two problems. Information security methods are used for the cryptography service designed and implemented as a working example of the theoretical model. Considering the above security objectives, a Cloud architecture can be divided into multiple components that have to be properly secured in order to ensure the security of the whole Cloud system [4]:
• Cloud consumers: providing security when accessing services (e.g. keeping private keys secret);
• Cloud providers: providing secure service orchestration, secure service deployment, secure resource abstraction and management, as well as physical layer security;
• Cloud brokers: providing secure service aggregation, secure wrapping services, etc.
Cloud infrastructure is exposed to many new security threats; examples include data theft, theft of computational time, and Distributed Denial-of-Service attacks. Moreover, a large number of threats come from the fact that such systems may have millions of users with different roles and distinct privileges regarding access to different resources. Service providers must also take into account local security policies, governance and geographical regulations. Consequently, users' security requirements specified in the Service Level Agreement (SLA) have to be taken into consideration during task scheduling and resource provisioning. Eighteen security control points are introduced by NIST in [5]. The model proposed in this paper may be located in the following: Access Control, Protection, Audit and Accountability Planning, Security Assessment and Authorization, Risk Assessment, Identification and Authentication, System and Communications Protection, Incident Response, and System and Information Integrity. Security in clouds is standardized by independent international institutions. The model presented in this paper meets the criteria of the NIST Cloud Computing Security Reference Architecture standard [4]:
• Rapid provisioning: automatic service deployment considering the requested service, resources and capabilities.
• Resource provisioning and changing: mapping authenticated and authorized data and tasks onto VMs, assuring proper quality of service with respect to computational time usage and security requirements.
• Monitoring and reporting: generating security reports and monitoring the state of the environment.
• Metering: providing metering for storage, processing, bandwidth and active user accounts.
Securely managing metering includes tools for internal control to ensure encrypted storage, secure processing, detection of any abnormal usage, and compliance with security policies.
• Service Level Agreement management: ensuring agreement between customers and service providers in the context of scope, quality, responsibility and security.
Guaranteeing the security of both data storage and processing operations, together with the integrity of the operating environment itself, according to increasingly challenging customer requirements, usually results in a tradeoff with the efficiency of the overall system. Starting from these considerations, in this work we developed a novel cloud security architectural model, based on an Agent-Supported Security-Aware Non-Deterministic Meta Scheduler. The proposed model may be seen as part of a secure resource management system. It supports secure task distribution inside the cloud infrastructure according to the security demands coming from cloud consumers. To meet the security requirements, an Independent Batch Scheduler leveraging a genetic heuristic approach is used. The scheduling process is supported by a multi-agent system that monitors the task flow characterizing the cloud processing activity. The model is explicitly designed to prevent two types of threats on the Cloud: Denial of Service and Timing Attacks. All the scheduling, monitoring and reporting activities are accomplished in non-deterministic time intervals, which prevents timing attacks. In addition, two different models for supporting users' security demands are proposed. The first one is a scoring model that allows scheduling tasks only on Virtual Machines characterized by a security level (and hence a score) equal to or higher than the demanded one. In this model, the additional processing time associated with security operations is averaged and added to the fee for the total runtime used by each customer.
The second model takes into account the time needed by the cryptographic operations associated with each specific task. These operations are modelled by enlarging the time required for processing the task and are considered during the scheduling process.
We have implemented the above architecture, as a proof of concept, by using the FastFlow framework. Additionally, as part of the model implementation, an encryption service has been designed and implemented. It provides data integrity monitoring as well as confidentiality and authenticity checking according to the ISO/IEC Standard. An empirical study conducted on the OpenStack cloud platform is shown to prove the effectiveness of the proposed model and its impact on the performance of the whole cloud environment. The paper is organized as follows: Section 2 describes the research context of the task scheduling problem, with an emphasis on security-driven schedulers. In Section 3 our models are introduced and the novelty of the proposed approaches is presented. Section 4 presents the evaluation of the proposed model and numerical results. Finally, Section 5 proposes directions for further development and conclusions.

2. Related work

Task scheduling is a fundamental component of Cloud architecture. From the security point of view, scheduling has to take into account different security requirements. For instance, storing 1000 image files at a server has very different requirements for:
• a free stock photo service, where only the author's rights and the integrity of the stored data have to be assured, or
• a medical image analysis service, where the highest security level is required due to the sensitivity of the data and international regulations.
Therefore, additional security objectives and user behaviour requirements have to be added to those associated with traditional scheduling strategies, which only consider makespan reduction, minimizing energy consumption or completion time, or optimizing load balancing [6,7]. As a consequence, the complexity of the overall problem increases significantly. The models presented in this section show the spectrum of approaches in the field of security-driven scheduling. Khan et al. [8] proposed dividing the security services in Clouds into three categories:
1. application services, located in the application layer;
2. secure process hosting services, implemented at the infrastructure and supervisor levels;
3. secure physical services, incorporated in the physical layer.
An example of a dedicated tool for assuring security is Security-as-a-Service (SecaaS) [9]. The SecaaS package includes: Identity and Access Management services, Data Loss Prevention services, Web Security services, E-Mail Security services, Intrusion Management, Business Continuity and Disaster Recovery tools, Encryption services, and Network Security services.
Afoulki et al. in [10] proposed a security-aware scheduler based on the Haizea [11] open-source virtual management architecture. The scheduler isolates adversary users. The aim is to minimize attacks between VMs running on the same hardware by means of migrations. The core of the system is a list of adversary users, specified by the users themselves, with whom they do not want to share hardware equipment. The scheduling algorithm is based on a greedy policy for spreading requests. Requests are rejected if the maximum capacity is reached. The main drawback is that it can reduce the risk of cross-VM attacks only from the identified sources. In [12] the authors defined an algorithm using meta-heuristic optimization techniques and particle swarm optimization. A dedicated coding strategy is proposed to minimize the total workflow cost while meeting the time deadlines of the tasks and risk constraints. An authentication service (3 algorithms), an integrity service (7 hash functions) and a confidentiality service (8 encryption methods) are used. The authors assumed that each task has its own types of security service requirements, with different security levels specified by users. The authentication service works during data transfers to working nodes. The integrity service works during task execution. The confidentiality service is used during output data generation.
Each VM is equipped with all security services, and users select security levels to meet their Quality of Service requirements. The main drawback of this approach is the assumption that once authorization is accomplished on a VM, its task is assumed to be trusted and pushed to the worker node. Zeng et al. [13] also considered security levels specified by the users for task segregation; clustering and prioritization are provided. Liu et al. [14] defined a Background Key Exchange algorithm for security-aware Cloud scheduling. In this algorithm each task is authorized by using a re-keying procedure. Each key is used only once in order to ensure the proper level of security. The main drawback of this model is the fact that during multi-step task processing participants do not have to re-authenticate themselves after successful authentication in the first step. Nadu et al. [15] proposed a model for secure service delivery without getting users involved in the process. It incorporates dedicated software agents. The security of this model is guaranteed by the fact that VMs are used in the scheduling process only when they have been carefully monitored and behave as expected. In [16] the authors proposed a framework using Ant Colony Optimization (ACO) to ensure an appropriate level of security in the scheduling process. Data is split into partitions respecting confidentiality, integrity and authenticity. The scheduler works by classifying data into different levels of security declared by the user (5 classes are available, corresponding to different encryption algorithms, hash functions and authentication schemes). The security level/class of stored data may change during processing. Most of the presented models are based on monitoring user requirements and dividing them into classes.
The schedulers assign the jobs to VMs or to the physical units according to their compliance with these classes. Input data are managed
Fig. 1. Architecture of the system.
according to their security sensitivity. Tasks are prioritized according to security demands. Also, dedicated monitoring of VMs has been proposed to detect the possibility of being under attack by checking for unusual behaviours.

3. Modelling the proposed approach

The presented work considers an independent task model, where tasks submitted by the Cloud users are processed in batch mode [7,17]. The proposed scheduler works on the basis of knowledge about the security services offered by the VMs. A Genetic Algorithm (GA) is used as a heuristic for determining the best (near-optimal) schedule. It optimizes the mapping so as to minimize the makespan as the fitness function. The GA has been selected due to the possibility of granularly controlling the scheduling quality by monitoring the fitness function value and early stopping. Furthermore, the model supports data and task integrity preservation. It is designed according to the international standards formulated by the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST). For Cloud monitoring we decided to use software agents [18]. The persistence, continuous running and autonomic decision capabilities characterizing agents are well suited for Cloud monitoring needs. Autonomy, in the form of goal-directed strategic behaviour and decision-making without human intervention, allows us to design a self-tuning monitoring facility. Social abilities, communication and coordination allow collaboration between multiple agents operating within the system. Reactivity allows the system to easily and flexibly adapt to dynamically changing Cloud environments [19]. The main novelty of the presented approach is a set of models and criteria governing the time when scheduling actions should be accomplished.
Every optimization problem formulated for finding the best schedule by minimizing one of the objective functions assumes that the schedule is calculated for each particular task or for a given task batch. No clear criteria have been proposed by researchers stating when we should stop gathering tasks and start the scheduling activity. The proposed non-deterministic model tries to fill this gap. The objective of the proposed scheduling approach is achieving the minimum makespan within the whole Cloud organization, while satisfying all the users' security demands expressed in the form of task requirements and Virtual Machine (VM) trust vectors. The proposed model may be incorporated in fully distributed Cloud environments composed of VMs running on multiple hosts located in several sites scattered throughout the world. An individual task is processed by only one VM. Tasks are gathered together in order to form a batch. Each single batch is scheduled separately from other batches. The performance of a given task on a single VM is not related to and not affected by the performance of any other tasks running on the same or other VMs. The tasks are sent to worker agents, providing batch processing facilities, one batch after another. The whole system consists of a set of agents processing the various tasks, as sketched in Fig. 1:
• a DoS-aware agent that collects tasks into batches (Collector Agent);
• a master agent that supports task scheduling as well as monitoring and governing the system's performance;
• worker agents that monitor and govern the batch processing functions and the allocation of tasks on the VMs;
• a task collector agent that gathers the results from the workers.
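The agent chain above can be sketched as a simple pipeline (a minimal illustration under our own simplifying assumptions: the GA-based scheduling step is replaced by a round-robin stand-in, and the hand-off between agents is modelled with a queue; none of this is the authors' actual implementation):

```python
from queue import Queue

def collector_agent(tasks, batch_size, out):
    """Collector (DoS-aware) agent: groups incoming tasks into batches."""
    batch = []
    for task in tasks:
        batch.append(task)
        if len(batch) == batch_size:
            out.put(batch)
            batch = []
    if batch:                      # flush the last, possibly partial, batch
        out.put(batch)

def master_agent(batches, workers):
    """Master agent: maps each batch onto workers, one batch after another.
    Round-robin here stands in for the GA-based scheduler of the text."""
    results = []
    while not batches.empty():
        for i, task in enumerate(batches.get()):
            results.append(workers[i % len(workers)](task))
    return results

# usage: two trivial 'workers' that tag each task with the VM that ran it
workers = [lambda t: ("vm1", t), lambda t: ("vm2", t)]
q = Queue()
collector_agent(range(5), batch_size=2, out=q)
print(master_agent(q, workers))
```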
3.1. DoS aware agent collecting batches

During a Denial-of-Service (DoS) attack, an attacker attempts to block legitimate users from accessing services by generating an excess amount of data or tasks to be handled by the Cloud [20] (see Fig. 2). Distributed Denial of Service (DDoS) attacks are coordinated, large-scale Denial of Service attacks using a large number of attacking machines, for example Botnet computers [21]. In both cases, statistics-based anomaly detection is an effective protection tool against DoS. Frequency thresholding, that is, bounding the number of requests in a specific period, is the most successful method [22]. The four most frequent categories of such attacks are:
Fig. 2. Denial-of-service (DoS) attack scenario.
• TCP Connection Attacks: creating a huge number of TCP connection attempts in order to saturate the maximum number of simultaneous connections supported at the system or firewall level;
• Volumetric Attacks: consuming the bandwidth available at the target network/service level by generating huge traffic volumes towards the victim;
• Fragmentation Attacks: floods of fragmented packets are sent to the target in order to reduce performance by forcing continuous reconstructions; and, finally,
• Application Layer Attacks: lowering the effectiveness of systems or services, as well as increasing their energy consumption, by generating one or multiple low-traffic-rate flows [23–25].
Unfortunately, most of the available solutions against DoS are designed for analyzing and filtering, at the network (or system) edge, the traffic coming from outside sources [26,27]. Instead, the model proposed in this paper considers inside sources. It reflects the situation in which the attacker is logged in as a legitimate Consumer or administrator and tries to upload unauthorized tasks (Fig. 3). To decrease the possibility of this kind of attack, some specific DoS prevention methods have been introduced in the collector agent. The Agent continuously monitors statistics about the batches loaded into the system, and an alert is generated in the presence of tasks that are significantly bigger/smaller than normal ones in terms of resource demand, or that simply do not fit the usual consumers' task profiles. This requires the acquisition of a baseline view of users' tasks under standard operating conditions, which constitutes the knowledge of the concept of "normality". Suspicious tasks are sent back to the consumer together with the alert information. If the consumer confirms the task execution, it proceeds. Otherwise, the task is considered fake and/or malicious and is removed from the system before starting the scheduling procedure. This procedure is performed before the task is passed on to be scheduled.
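The baseline "normality" check described above can be sketched as a simple statistical test on task workloads (an illustrative sketch only; the threshold k and the workload-only criterion are our assumptions, not the paper's exact detector):

```python
import statistics

def suspicious(task_workload, history, k=3.0):
    """Flag a task whose workload deviates more than k standard
    deviations from the baseline built from previously seen tasks."""
    if len(history) < 2:
        return False  # not enough data yet to define 'normality'
    mean = statistics.mean(history)
    s = statistics.stdev(history)
    return s > 0 and abs(task_workload - mean) > k * s

history = [100, 110, 95, 105, 98, 102]  # typical workloads (MI)
print(suspicious(104, history))     # within the normal band
print(suspicious(10_000, history))  # far outside: alert the consumer
```

A flagged task would then be returned to the consumer with the alert information, as described in the text.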
3.2. Task injection attack prevention

Several cloud scheduling systems may be affected by a vulnerability which allows malicious virtual machines to circumvent CPU usage checks in order to obtain more resources at the expense of others. In particular, the use of periodic sampling to check resource usage creates a loophole that can be maliciously exploited by an unauthorized task that passes all the previous checks unaffected and is scheduled and processed on the cloud runtime system [28], consuming plenty of resources, as sketched in Fig. 3. This attack, commonly known as task injection, can be simply countered by randomizing the scheduler service activity. The model presented in this paper is based on the idea that scheduling, monitoring and reporting are activated in non-deterministic time intervals, called ticks. Three models were selected for generating random sequences of ticks: the binary Blum-Blum-Shub pseudo-random number generator [29], the SHA-2 hash function working as a random oracle [30], and a uniform model [31]. The zeros and ones generated by these models drive the time ticks of the scheduler: the ones activate the process of scheduling, monitoring, and reporting (see Fig. 4). The methods are further described in [31].
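A tick generator in the spirit of the first model can be sketched as follows (the standard Blum-Blum-Shub recurrence with tiny illustrative primes; a real deployment would use cryptographically large p and q):

```python
def bbs_bits(p, q, seed, n):
    """Blum-Blum-Shub: x_{k+1} = x_k^2 mod M, output the parity bit.
    p and q must be primes congruent to 3 mod 4; the tiny values used
    below are for illustration only, not for cryptographic use."""
    M = p * q
    x = seed % M
    bits = []
    for _ in range(n):
        x = (x * x) % M
        bits.append(x & 1)
    return bits

# a '1' bit activates scheduling/monitoring/reporting at that tick
ticks = bbs_bits(p=11, q=19, seed=3, n=8)
print(ticks)
```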
Fig. 3. Task injection attack.
Fig. 4. Task injection prevention.
3.3. Security constraints incorporated into the scheduler

Following [32], the security demand vector characterizing the security needs of the system users is assumed in the form:
SD = [sd_1, ..., sd_n]   (1)
where sd_j defines the security demand parameter specified by the jth task in the batch. The trust level vector:
TL = [tl_1, ..., tl_m]   (2)
describes the security parameters of all VMs in the system. Both parameters assume values in the [0,1] range, with 0 meaning the lowest demanded security level for a task execution (and the most untrustworthy VM) and 1 the highest security level (and a fully trusted VM). A task may be scheduled on a particular worker only when that worker offers a security level equal to or higher than the demanded one.

Task integrity monitoring. All tasks are passed through the aforementioned chain of agents (see Fig. 1). At every step the integrity of each task is checked by calculating its corresponding hash using the SHA-2 compression function. This provides resistance to unauthorized modifications.

3.4. Batch attributes

Each batch of n tasks is characterized by several attributes that are independent from the past operations history:
• the body of the batch, consisting of the tasks t_1, ..., t_n;
• the security demand vector SD for all the tasks in the batch;
• the workload vector for all the tasks in the batch, W = [w_1, ..., w_n], expressed in Millions of Instructions (MI);
• the hashes h(t_1), ..., h(t_n), depending on the stage: hashes of the batch delivered to the DoS attack-aware agent, hashes of the batch delivered to the master agent, and hashes of the batch delivered to the workers;
• the outputs o(t_1), ..., o(t_n) generated by the tasks, to be stored in encrypted form;
together with some others that explicitly consider the past, needed to help the DoS attack-aware agent detect untrustworthy tasks:
• mean, the mean workload of all batches scheduled so far;
• s, the standard deviation of the workload of all batches scheduled so far.
Therefore, each batch B may be represented by a vector that is the concatenation of the above elements:

B = [t_1, ..., t_n, sd_1, ..., sd_n, w_1, ..., w_n, h(t_1), ..., h(t_n), o(t_1), ..., o(t_n), mean, s]

3.5. Scheduling objectives

We relied on a heuristic model for the task-execution scheduling activity [6], structured as an Independent Batch Scheduling (IBS) problem based on the Expected Time to Compute (ETC) matrix [33]. A genetic algorithm is used to search for a near-optimal solution, as described in Section 3.8. The scheduling objectives associated with the performance of the tasks to be managed are:
• The Makespan, that is, the maximum completion time of all the job instances in the schedule, and hence characterized by the completion time of the latest task:
C_max = min_{S ∈ Schedules} max_{j ∈ Tasks} C_j   (3)
where C_j is the time when task j is finalized, Tasks is the set of all tasks submitted to the system, and Schedules is the set of all possible schedules.
• The Flowtime, referring to the response time to user requests for task executions, measured as the minimum of the sum of the completion times of all the tasks:
F = min_{S ∈ Schedules} Σ_{j ∈ Tasks} C_j   (4)
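Given the completion times C_j produced by one candidate schedule, both objectives reduce to simple aggregations (a minimal sketch; in the proposed system, the GA evaluates these values for each candidate schedule it generates):

```python
def makespan(completion_times):
    """C_max for one schedule: completion time of the latest task."""
    return max(completion_times)

def flowtime(completion_times):
    """Sum of the completion times of all tasks in the schedule."""
    return sum(completion_times)

# completion times C_j of five tasks under one candidate schedule
C = [4.0, 7.5, 3.2, 7.5, 6.1]
print(makespan(C), flowtime(C))
```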
The Makespan and Flowtime are indicators that strongly characterize the performance of the scheduling system. Their mutual relation is far from straightforward, and they may appear at first glance to be contradictory objectives, since optimizing one of them does not necessarily have the same effect on the other, especially for scheduling plans that are near the optimum. Indeed, makespan can be considered an indicator of the overall cloud productivity, where small values indicate that the scheduling system is efficiently distributing the tasks on the available resources. On the other hand, minimizing the flowtime reduces the average response time of the cloud as perceived by the users. Our specific goal is maximizing the cloud productivity (i.e., its throughput) by achieving a good balance of resource usage while simultaneously offering the cloud users a satisfactory quality of service. To estimate the time needed by the worker nodes to process the assigned tasks, a computing capacity vector for the workers is introduced:
CC = [cc_1, ..., cc_m]   (5)
This vector describes the computational ability of each worker in Millions of Instructions Per Second (MIPS). In order to determine the schedules without considering the security workload, we rely on the Expected Time to Compute (ETC) matrix, estimated as follows:
ETC[j][i] = w_j / cc_i   (6)
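Eq. (6) can be computed directly from the workload and capacity vectors (a minimal sketch with made-up numbers):

```python
def etc_matrix(W, CC):
    """ETC[j][i] = w_j / cc_i: expected time to compute task j on VM i,
    given workloads W (in MI) and computing capacities CC (in MIPS)."""
    return [[w / cc for cc in CC] for w in W]

W = [2000.0, 6000.0]   # workloads of two tasks, in MI
CC = [1000.0, 2000.0]  # capacities of two VMs, in MIPS
print(etc_matrix(W, CC))
```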
3.6. Scoring workers

This method assumes that a specific task may be processed only by workers having a proper trust level tl, that is, equal to or higher than the security demand sd of that task. The computational cost of the additional security is not incorporated into the workers' time in a straightforward way. Instead, the price for that additional elapsed time is estimated by calculating the total cost of all assets and tools that make a particular worker able to operate with trust level tl. That price is added to the price all users pay for the services they have already used. This model may be useful when the computational supplement required by the security operations is very small in comparison to the task execution time.

3.7. Security bias matrix approach

This alternative method also assumes, like the previous one, that a task may be run only by workers with a trust level tl that is equal to or higher than the security demand sd of that task. But, differently from the previous one, this model is based on the fact that the additional security operations will significantly increase the time in which the worker processes the task. This is reflected by adding additional time to the Expected Time to Compute; in this case, the security operations time is taken into account when calculating the schedule. Instead of the Machine Failure Probability matrix presented in [34], whose elements are interpreted as the probabilities of machine failures during task execution due to high security restrictions, we propose the Security Bias matrix SB.
Table 1
Security demands SD for integrity, confidentiality and authenticity.

Service                     Level 1   Level 2     Level 3     Level 4
Integrity                   SHA-1     SHA-256     SHA-256     SHA-512
Confidentiality             –         RSA 1024    RSA 2048    RSA 2048
Authenticity: RSA DS, SHA   –         1024, 256   1024, 512   2048, 512
The bias b of security demand sd_j for a task-machine pair is defined as the estimated time necessary to assure the required security level for task j (described by workload w_j and security demand level sd_j) on machine i (described by computational capacity cc_i and trust level tl_i) during task execution. It is denoted by:
b(sd_j, w_j, tl_i, cc_i)

By gathering such biases into a matrix representation we obtain the Security Bias matrix (SB):

SB[j][i](SD, TL) = [b(sd_j, w_j, tl_i, cc_i)]    (7)
where SD and TL are respectively the security demand vector (see Eq. (1)) and the trust level vector (see Eq. (2)) for the workers in the system. By considering the biases, the ETC matrix can be modified into the Security Biased Expected Time to Compute (SBETC) matrix:
SBETC[j][i](SD, TL) = w_j / cc_i + b(sd_j, w_j, tl_i, cc_i)    (8)

SBETC(SD, TL) = SB(SD, TL) + ETC    (9)
In the proposed model, the cryptographic services have been chosen according to the FIPS standard [35] and the ISO/IEC 19790 standard [36] on security requirements for cryptographic modules. These standards specify four operating levels of general security requirements for cryptographic modules:

1. Level 1: at least one approved algorithm or security function shall be used.
2. Level 2: role-based authentication, in which a cryptographic module authenticates the authorization of an operator to assume a specific role and perform a corresponding set of services.
3. Level 3: identity-based authentication mechanisms, enhancing the security provided by the role-based mechanisms: the cryptographic module authenticates the identity of an operator and verifies that the operator is authorized to assume a specific role and perform a corresponding set of services. Moreover, the input or output of plaintext has to be processed inside the module using a trusted path separated from the other interfaces; plaintext may enter or leave the cryptographic module only in encrypted form.
4. Level 4: the highest level of security defined in these standards. Penetration of the cryptographic module enclosure from any direction has a very high probability of being detected, resulting in the immediate suppression of all operations.

Specific security algorithms have been chosen from the approved security functions certified by the National Institute of Standards and Technology [37] (see Table 1):

• cryptographic hash functions: SHA-1, SHA-256, SHA-512 [30],
• the asymmetric-key Rivest-Shamir-Adleman (RSA) algorithm with a key length of 1024 or 2048 bits [38],
• the Digital Signature Standard, incorporating both the SHA and RSA algorithms [39].

These schemes are implemented in the workers. The additional time necessary to support the security demands is charged to the users' accounts.
For example, if a user specifies security level 2 for a task, the integrity of the task is checked using the SHA-256 hash function. The result of the task computation is stored in encrypted form using the RSA algorithm with a 1024-bit key, and hence retrieving the data is possible only by using the RSA Digital Signature scheme with SHA-256 as the integrity checking function. Level zero means no additional cryptographic services. Each component of the trust level vector thus corresponds to one of the five trust levels 0-4 offered by a particular VM, for VMs 1 to m:
TL = [tl_1, ..., tl_m], tl_i ∈ {0/4, 1/4, 2/4, 3/4, 4/4} = {0, 1/4, 1/2, 3/4, 1}    (10)
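For illustration (the function and variable names below are ours), the discrete trust scale of Eq. (10) and the admissibility rule tl_i >= sd_j from Sections 3.6-3.7 can be sketched as:

```cpp
#include <cassert>
#include <vector>

// Map security Level k in {0, ..., 4} to the trust value k/4 of Eq. (10).
double level(int k) { return k / 4.0; }

// A task with security demand sd may only run on a worker whose
// trust level tl is equal to or higher than sd.
bool eligible(double tl, double sd) { return tl >= sd; }

// Indices of all workers that can legally execute a task with demand sd.
std::vector<int> eligible_workers(const std::vector<double>& tl, double sd) {
    std::vector<int> out;
    for (int i = 0; i < static_cast<int>(tl.size()); ++i)
        if (eligible(tl[i], sd)) out.push_back(i);
    return out;
}
```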
3.8. The genetic batch scheduler

Genetic Algorithms are essentially stochastic techniques for exploring the solution space of a complex problem according to a naturally inspired heuristic approach that models the genetic evolution process and relies on natural selection. In such a process, each species in nature tries to adapt to a continuously changing environment in order to maximize its own fitness. Genetic Algorithms can also be considered sophisticated self-learning strategies
when the knowledge acquired by each "generation" is embedded into the genetic heritage (i.e., the chromosomes) of its members. They operate according to a multi-stage process, modelling the search space of the optimization problem of interest by using populations of individuals that represent possible solutions. Each individual, i.e., a point in the solution space, is modelled by a sequence (s_1, ..., s_n) of n numeric values (or "genes") referred to as a "chromosome". A specific fitness function, representing the objective function of the problem of interest, is associated with the chromosomes. A set of chromosomes together with their fitness scores constitutes a "population", where the fitness scores model the quality of each chromosome's contribution to the problem solution. The populations emerging at each stage of the algorithm are referred to as "generations".

The chromosomes characterized by a good fitness score are selected as an elite that survives directly into the next generation, or are chosen for reproduction through cross-breeding with other highly fit chromosomes of the population. New populations of feasible (and hopefully better) solutions are thus generated by choosing the highest-quality chromosomes from the current generation and mating them to obtain a new series of chromosomes (the next generation). In other words, the cross-breeding operation (also known as crossover) starts from a couple of sequences s^1 and s^2, breaks them at a random point i, and exchanges the resulting parts (i.e., it swaps (s^1_1, ..., s^1_{i-1}) with (s^2_1, ..., s^2_{i-1}), or equivalently (s^1_i, ..., s^1_n) with (s^2_i, ..., s^2_n)) to obtain new chromosomes/solutions whose components are inherited from one parent or the other. Clearly, it is highly probable that combining individuals with a good genetic heritage results in equally good or even better ones.
Since crossover may potentially result in an information loss, where some useful element of a good solution may disappear, an additional operation known as "mutation", which randomly selects and changes a single chromosome component s_i, is sometimes added to the available options. The reproduction probability of a chromosome, either via elite selection or crossover, is directly proportional to the optimality of the associated solution, so that chromosomes associated with weakly optimal solutions have lower chances of surviving than those characterized by a higher fitness. The mutation operation, on the other hand, is useful for diverting the exploration of the solution space towards new regions, in order to avoid being trapped in local optima.

The Genetic Algorithm starts by randomly creating an initial population and evaluating the fitness of each of its chromosomes. It then proceeds in a multi-stage iterative way: the best chromosomes are selected to be carried into the next stage (the following generation), followed by the crossover and mutation operations, which replace the worst chromosomes with better ones (since the population at each stage has a finite number of chromosomes, introducing new chromosomes implies discarding the worst ones). This mimics the mechanism of "natural selection" described by Darwin's theory of evolution. The process ends when there are no more improvements in the solutions' fitness from one generation to the next, or when the number of iterations exceeds an upper bound specified as an algorithm parameter, so that it is highly probable that the algorithm has converged towards an optimal or near-optimal solution. Genetic Algorithms can successfully cope with a very large class of complex multi-objective optimization problems, such as scheduling [33,40].
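The evolutionary loop described above can be sketched generically as follows. This is an illustrative skeleton under our own naming, not the paper's exact implementation (which, for instance, restricts crossover to security groups and omits mutation):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <numeric>
#include <random>
#include <vector>

using Chromosome = std::vector<int>;

// Single-point crossover: head of parent a up to (but excluding) the cut
// point, tail of parent b from the cut point onwards.
Chromosome crossover(const Chromosome& a, const Chromosome& b, std::size_t cut) {
    Chromosome child(a.begin(), a.begin() + cut);
    child.insert(child.end(), b.begin() + cut, b.end());
    return child;
}

// One generation: sort by fitness (best first), keep the elite unchanged,
// and refill the remaining slots with offspring of randomly chosen elite parents.
std::vector<Chromosome> evolve(std::vector<Chromosome> pop,
                               const std::function<double(const Chromosome&)>& fitness,
                               std::size_t elite, std::mt19937& rng) {
    std::stable_sort(pop.begin(), pop.end(),
                     [&](const Chromosome& x, const Chromosome& y) {
                         return fitness(x) > fitness(y);
                     });
    std::uniform_int_distribution<std::size_t> parent(0, elite - 1);
    std::uniform_int_distribution<std::size_t> cut(1, pop[0].size() - 1);
    for (std::size_t k = elite; k < pop.size(); ++k)
        pop[k] = crossover(pop[parent(rng)], pop[parent(rng)], cut(rng));
    return pop;
}
```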
The chromosome representation of the problem's solution is fundamental for the algorithm's success, so building a suitable genetic model of the solution space is the crucial preliminary step in managing our batch scheduling system. Accordingly, our genetic representation of the batch scheduling problem is the following:

• an individual task assignment to a worker's runtime resources is a single gene in a chromosome,
• a chromosome, as a sequence of genes, represents the schedule for a single worker, and there are m chromosomes in the population,
• the initial population is created by distributing the tasks randomly (considering all the tasks in the batch),
• the whole population is kept; no members are erased,
• the population is changed as follows:
  1. the crossover operation is performed on a selected percentage of the population; the rest (the fittest chromosomes) is selected as an elite and remains unchanged, so that the population neither loses nor duplicates any jobs,
  2. crossover is performed by randomly selecting parents within the above percentage and using a single crossover/shuffling point, randomly chosen as the point at which the algorithm swaps the remaining alleles from one parent to the other, so that two parents produce two offspring,
  3. mutation is not performed,
• the fitness function ϕ used to evaluate the chromosomes/solutions is a weighted combination of the makespan Cmax and the flowtime F (see Eqs. (3) and (4)), aggregating both performance indicators/objectives according to a multi-objective logic, by assigning each of them a specific weight ωi reflecting the relative importance of the individual objective in the cumulative goal (scalarization principle):
ϕ = 1 / (ω_1 · Cmax + ω_2 · F)    (11)
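Under the common definitions of makespan (the largest completion time) and flowtime (the sum of completion times), Eq. (11) can be computed as in the following sketch; the function name and the representation of a schedule as a vector of completion times are our own assumptions:

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Eq. (11): fitness = 1 / (w1 * Cmax + w2 * F), where Cmax is the
// makespan (largest completion time) and F the flowtime (their sum),
// so that shorter schedules obtain a higher fitness score.
double fitness(const std::vector<double>& completion, double w1, double w2) {
    double cmax = *std::max_element(completion.begin(), completion.end());
    double flow = std::accumulate(completion.begin(), completion.end(), 0.0);
    return 1.0 / (w1 * cmax + w2 * flow);
}
```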
The aim of the algorithm is to find the population maximizing this value.

• the determination of the schedule is constrained by the SBETC matrix, considering the workloads and the capacity constraints together with the security requirements.

Some additional assumptions are needed to simplify the problem:

• Each machine is used for task assignment; there is no machine without jobs,
• The number of tasks assigned to each machine is constant,
• Each task must be assigned to exactly one machine,
• All tasks must be allocated,
• The number of tasks and machines in the whole process is static.
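The constraints above can be captured by a simple validity check on a candidate schedule. This is our own sketch, with schedule[i] holding the task indices assigned to worker i:

```cpp
#include <cassert>
#include <set>
#include <vector>

// Returns true iff: no machine is without jobs, every task appears at
// most once across all machines, and all n_tasks tasks are allocated.
bool feasible(const std::vector<std::vector<int>>& schedule, int n_tasks) {
    std::set<int> seen;
    for (const auto& worker : schedule) {
        if (worker.empty()) return false;              // no machine without jobs
        for (int t : worker)
            if (!seen.insert(t).second) return false;  // task assigned twice
    }
    return static_cast<int>(seen.size()) == n_tasks;   // all tasks allocated
}
```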
The solution of the problem, that is, the desired schedule, is the whole population of chromosomes, representing a schedule for the complete batch; from a single chromosome we retrieve the schedule for a single worker.

4. Experimental analysis

This section presents the results of the conducted experiments, in which we tested all the proposed secure cloud improvements. For the ETC matrix based experiments we created 20 instances of workers and a batch of 200 tasks. The characteristics of the tasks were chosen according to the normal distribution. A FastFlow task farm has been used as the task dispatcher module. FastFlow is a structured parallel programming environment written in C++ on top of the POSIX thread library. It provides programmers with predefined and customizable stream-parallel patterns, including task-farm and pipeline, and has been successfully employed in cloud performance evaluation [41]. The task-farm FastFlow pattern has been used for the replication of a stateless worker function, where each replica receives stream inputs from a dispatcher (Emitter) and sends outputs to the Collector. The model presented in Fig. 1 has been mapped onto the above farm pattern:

• the DoS attack-aware agent collecting batches is located in the dedicated listener,
• the master agent supporting scheduling as well as monitoring and governing the performance of the system is located in the FastFlow Emitter,
• the worker agent monitoring and governing the batch processing on an individual worker is located in the FastFlow Worker,
• the task collector agent gathering the results from the workers is located in the FastFlow Collector.

The Blum-Blum-Shub (BBS) pseudo-random number generator and the SHA-2 based pseudo-random number generator have been implemented using the OpenSSL C++ library on the OpenStack cloud platform [42].
The pseudorandom number generation for uniform distributions is provided by the C++ Random Number Engines and Random Number Distributions [43]. The Genetic Batch scheduling algorithm was also implemented in C++. All measurements were made on the computational and data infrastructure located in the Institute of Computer Science at Cracow University of Technology in Poland, and on the Cloud data storage center located in the Cloud Competency Center at the National College of Ireland (see [2,42]).

4.1. ETC and SBETC matrices generation

For testing purposes, we generated an ETC matrix of dimension 20×200 for 20 machines (workers) and 200 tasks. The characteristics of the test data are as follows (in seconds):

• the minimum mean ETC value over tasks min_j mean_i ETC[j][i] = 1.0350: for each task, the mean time necessary for that task is calculated over the given set of workers; the least time-demanding task needed on average about 1 s to be computed,
• the maximum mean ETC value over tasks max_j mean_i ETC[j][i] = 79.0575: similarly, the most time-demanding task needed on average about 79 s to be computed,
• the minimum mean ETC value over workers min_i mean_j ETC[j][i] = 24.5493: for each worker, the mean time per task is calculated over the given set of tasks; the fastest worker needed on average 24.5 s per task,
• the maximum mean ETC value over workers max_i mean_j ETC[j][i] = 49.0932: similarly, the slowest worker needed on average 49 s per task,
• max ETC = 105.4100, the time necessary for processing the most demanding task on the slowest worker,
• min ETC = 0.6900, the time necessary for processing the least demanding task on the fastest worker.

The SBETC matrix is calculated by adding the time necessary for all the cryptographic operations, depending on the workload of the tasks:
b(sd_j, w_j, tl_i, cc_i) = time(SHA(t_j), i) + time(RSA(o(t_j)), i) + time(DS(t_j), i)    (12)
where time(ε(t_j), i) denotes the time needed for operation ε on element t_j of the batch B on the i-th worker offering the proper security level. For the proposed ETC matrix, the related security biases b(sd_j, w_j, tl_i, cc_i) (see Eq. (7)) were determined for all the workers for basic workloads and then extrapolated to larger/smaller workloads.
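The minimum/maximum mean statistics listed above are simply row and column means of the ETC matrix; a minimal sketch, with our own naming (the per-worker and maximum variants follow the same pattern):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

using Matrix = std::vector<std::vector<double>>;  // indexed [task j][worker i]

// Mean ETC of task j over all workers (a row mean): mean_i ETC[j][i].
double task_mean(const Matrix& etc, std::size_t j) {
    double s = 0.0;
    for (double v : etc[j]) s += v;
    return s / etc[j].size();
}

// min_j mean_i ETC[j][i]: the average time of the least time-demanding task.
double min_task_mean(const Matrix& etc) {
    double best = std::numeric_limits<double>::infinity();
    for (std::size_t j = 0; j < etc.size(); ++j)
        best = std::min(best, task_mean(etc, j));
    return best;
}
```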
Table 2. The makespan (Eq. (3)) value [sec.] for different numbers of tasks: 10 tasks distributed 12 times an hour and 15 tasks distributed 12 times an hour.

Model     120 tasks/hour   % of ref. model   180 tasks/hour   % of ref. model
Ref.      14,185 (3.9 h)   100%              14,523 (4.3 h)   100%
Uniform   10,522 (2.9 h)   74%               13,260 (3.6 h)   91%
BBS       12,921 (3.5 h)   91%               13,959 (3.8 h)   96%
SHA       13,124 (3.6 h)   93%               14,018 (3.8 h)   97%
Table 3. The frequency of usage of workers A, B, C, D, E while performing 240 tasks, for the reference, uniform, BBS and SHA models.

Model     A    %      B    %      C    %      D    %      E    %
Ref.      65   100%   53   100%   45   100%   42   100%   35   100%
Uniform   55   85%    56   106%   48   107%   41   98%    40   114%
BBS       63   97%    51   96%    51   113%   38   90%    37   106%
SHA       61   94%    51   96%    50   111%   39   93%    39   111%
4.2. Workflow bottleneck examination for the reference, uniform, BBS and SHA models

While examining the models, we checked the impact of non-deterministic time intervals for the scheduling process on the performance of the environment. In our evaluation, the reference model is a typical deterministic scheduling approach, in which schedules are generated at equal time intervals. The second reference model (named uniform) is non-deterministic: the time intervals are generated according to the uniform distribution. These models were compared to the proposed non-deterministic BBS and SHA models.

Firstly, the makespan for all models was measured. For the same batch, we checked whether the time interval generation model has a significant impact on the performance of the system: 10 and 15 tasks were distributed 12 times an hour using all four models (see Table 2). Changing the scheduling activation times has no negative impact on the system's performance; on the contrary, the makespan for the batch was reduced compared to the deterministic model. For all non-deterministic models the system worked without any side effects. The potential for speeding up the system without changing the scheduler itself has thus been assessed. The benefits are noticeable for low and moderate system loads; when the system is heavily loaded, the makespans for all models are very close.

4.3. Utilization of non-homogeneous workers for the reference, uniform, BBS and SHA models

This simulation relates to different physical locations of the workers, which result in different times spent connecting to the workers when sending the data and gathering the results. Five types of workers are introduced. For simulation purposes, the workers are arranged as follows, with respect to the introduced referential time interval:

• very high computational load worker A, using 100% of the available computational time for calculation,
• high computational load worker B, using 90% of the computational time,
• moderate computational load worker C, using 75%,
• low computational load worker D, using 60%,
• very low computational load worker E, using 50%.
Comparing the workers' utilization, the BBS model does not differ much from the SHA model. Moreover, introducing non-deterministic (instead of equal) time intervals did not affect the utilization significantly. For non-homogeneous workers, the BBS and SHA models may be used interchangeably (Table 3).

4.4. Analysis of genetic scheduler performance

The analysis of the genetic scheduler performance includes:

• repeating the process 100 times for 1000 epochs and different initial populations, and examining the relation between the mean and the best makespan; this experiment helps to estimate the influence of the initial population on the performance of the scheduler,
• examining the number of epochs necessary for obtaining satisfactory results, by checking how the mean makespan changes in consecutive epochs,
• the influence of introducing both security models into the scheduling process.
Fig. 5. The makespan at the beginning of the population evolution, no security services (horizontal axis: epoch number, vertical axis: makespan value [sec.]).

Table 4. The mean makespan [sec.] for 1000 epochs of the Genetic Batch scheduler, 200 tasks, 20 workers, no security services.
Epoch       0     14    37    51    81    996
Makespan    540   421   401   407   401   380
Fig. 6. The makespan change over 100 epochs, no security services (horizontal axis: epoch number, vertical axis: makespan value [sec.]).
4.4.1. No additional security services

The number of epochs necessary to obtain satisfactory results is relatively low, see Fig. 5. More epochs resulted in finding better results, see Table 4, but stopping the algorithm has to be based on monitoring the mean makespan rather than on an arbitrary number of epochs, due to the oscillations, see Fig. 6. This leads to the conclusion that early stopping may be applied to the genetic algorithm: when it is beneficial to obtain a schedule with a much shorter makespan, the minimum value found so far should be compared with the current value. The number of epochs in the genetic process should not be the only stopping criterion.

4.4.2. Scoring workers and security bias matrix approach realization

In the scoring workers approach, the schedule is generated using the basic ETC matrix, see Eq. (6). The tasks are scheduled according to the trust level vector defined by Eq. (2), and a task may be computed only by a worker whose trust level is equal to or
Table 5. Statistics for the makespan [sec.] for 100 runs of the Genetic Batch scheduler (200 tasks, 20 machines, 1000 epochs): mean and variance.

No. of pairs crossed over   With security              Without security
1                           mean 471.187, var 0.0189   mean 389.607, var 266
2                           mean 475.048, var 1.9311   mean 393.92, var 243
Table 6. The minimum (best) makespan compared to the average makespan for different security levels.

                       0       1.1     1.2     1.3     1.4     1.5
Min (best) makespan    384.4   411.0   433.8   508.8   545.0   620.7
Avg makespan           465.9   512.9   572.8   643.3   705.4   740.4
Avg as % of best       122%    124%    132%    126%    139%    119%
higher than the task's security demand specified by Eq. (1). Having 20 workers, we decided to divide them into 4 classes (see Table 1):

1. 5 workers offering no additional security, i.e. Level 0,
2. 5 workers offering Level 1,
3. 5 workers offering Level 2,
4. 5 workers offering Level 3.
Level 4 is omitted in the simulation experiments due to its special requirements in case of machine failure, which make time estimation difficult. The genetic algorithm finding the schedule constrained by the above scoring enables crossover operations only among workers offering the same security level. The initial population is created according to the security demands of the tasks. The mean makespan obtained over 100 runs of the genetic scheduler is presented in Table 5. The model without security refers to the situation when all the workers offer the same security level 0. Adding 5 levels of security services means that crossover is possible only inside the security groups, which results in 5 separate populations. The mean makespan, when security services are offered, is higher due to the additional necessary operations such as hashing or ciphering. The variance obtained over the 100 runs of the genetic scheduler is much higher when the population is bigger.

Considering the security bias matrix approach, we also examined the influence of applying the Security Bias matrix on the scheduling process. Five levels of security services resulted in the biases being estimated in the following way:
b(sd_j, w_j, tl_i, cc_i) = ETC(sd_j, w_j, tl_i, cc_i) · γ    (13)
where γ ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5}. Each consecutive level of security resulted in a longer task computation time, but the ratio of the minimum (best) makespan to the average makespan is comparable for all scenarios (see Table 6).

5. Conclusions and future work

This paper addressed two security challenges in computational clouds: the prevention of Denial-of-Service and Timing attacks. We modelled and implemented a security-aware non-deterministic task Meta Scheduler integrated with an agent-based monitoring system. Additionally, we proposed two models for assuring users' security demands: the scoring workers model and the Security Bias model. Our case study, together with the associated experimental evaluation, has demonstrated the effectiveness of the proposed models and their potential for increasing system safety without substantial modifications to the scheduler itself. All the proposed non-deterministic models for time intervals are numerically effective without causing bottlenecks. The implemented secure genetic scheduler is able to assure the quality of service in the context of security. Our study shows the impact of the security overhead on the process of task scheduling. The implemented models effectively solve the problem of assuring users' security demands. The models were examined with respect to resource consumption, the makespan of the task batch execution, the average time of single batch processing and the occurrence of bottlenecks. No side effects were observed.

Many parts of the presented model have development potential. The scope of advanced decisions for the multi-agent system can be extended: currently, the negotiation process between the worker agents and the master agent is very simple. Based on the state of the worker agents, the self-tuning frequency of ticks can be upgraded.
However, the proposed model is very flexible and may be used with different scheduler types. This approach ensured proper security of the system: the monitoring and reporting moments are impossible to predict, since they are based on a 'safe' random number generator (BBS), a generator working as a random oracle (SHA), or the uniform distribution. The security elements incorporated during scheduling, namely DoS attack and task injection prevention, integrity protection against task modification, confidentiality preservation and authenticity assurance, lower the probability of an unauthorized task being inside the system and support the secure flow of tasks and data. The monitoring system helps to detect abuse and unjustified use of resources.

The next stage of the research will be the development of the aforementioned elements in a multicloud environment consisting of the OpenStack platform and the AWS Cloud. A number of additional tests and extensions are planned for the near future: considering non-homogeneous batches of tasks, and introducing an Artificial Neural Network to support the master agent's decisions. The future plans also involve the development of new monitoring and reporting methods, as well as the implementation of new schedulers, especially ones working inside the OpenStack and Amazon AWS environments.

Acknowledgement

The authors would like to acknowledge the contribution of the ICT COST Action IC1406 High-Performance Modelling and Simulation for Big Data Applications (cHiPSet).

References

[1] P.M. Mell, T. Grance, The NIST Definition of Cloud Computing. SP 800-145, Technical Report, 2011, doi:10.6028/NIST.SP.800-145. [2] D. Grzonka, The analysis of openstack cloud computing platform: features and performance, J. Telecommun. Inf. Technol. 3 (2015) 52–57. [3] G. Zhao, C. Rong, M.G. Jaatun, F.E. Sandnes, Reference deployment models for eliminating user concerns on cloud security, J. Supercomput. 61 (2) (2012) 337–352. [4] NIST Cloud Computing Security Reference Architecture. SP 500–299, Technical Report, 2013.
[5] Security and Privacy Controls for Federal Information Systems and Organizations. SP 800–53 Rev. 4, Technical Report, 2013. 10.6028/NIST.SP.800-53r4. [6] F. Magoulès, J. Pan, F. Teng, Cloud Computing: Data-intensive Computing and Scheduling, CRC press, 2012. [7] J. Kolodziej, F. Xhafa, Meeting security and user behavior requirements in grid scheduling, Simul. Modell. Pract. Theory 19 (1) (2011) 213–226, doi:10. 1016/j.simpat.2010.06.007. [8] A.N. Khan, M.L. Mat Kiah, S.U. Khan, S.A. Madani, Towards secure mobile cloud computing: a survey, Future Gener. Comput. Syst. 29 (5) (2013) 1278– 1299, doi:10.1016/j.future.2012.08.003. [9] A. Furfaro, A. Garro, A. Tundis, Towards security as a service (secaas): On the modeling of security services for cloud computing, in: 2014 International Carnahan Conference on Security Technology (ICCST), 2014, pp. 1–6, doi:10.1109/CCST.2014.6986995. [10] Z. Afoulki, A. Bousquet, A security-aware scheduler for virtual machines on iaas clouds, Adv. Comput. Sci. (2013) 48–52. [11] B. Sotomayor, K. Keahey, I. Foster, T. Freeman, Enabling cost-effective resource leases with Virtual Machines, Hot Topics Session in ACM/IEEE Int’l Symp. High-Performance Distributed Computing 2007 HPDC 07, 2007. [12] Z. Li, J. Ge, H. Yang, L. Huang, H. Hu, H. Hu, B. Luo, A security and cost aware scheduling algorithm for heterogeneous tasks of scientific workflow in clouds, Future Gener. Comput. Syst. (2016), doi:10.1016/j.future.2015.12.014. [13] L. Zeng, B. Veeravalli, X. Li, SABA: A security-aware and budget-aware workflow scheduling strategy in clouds, J. Parallel Distrib. Comput. 75 (2015) 141–151, doi:10.1016/j.jpdc.2014.09.002. [14] C. Liu, X. Zhang, C. Yang, J. Chen, CCBKE session key negotiation for fast and secure scheduling of scientific applications in cloud computing, Future Gener. Comput. Syst. 29 (5) (2013) 1300–1308. Special section: Hybrid Cloud Computing. doi: 10.1016/j.future.2012.07.001. [15] R.B. Chandar, S. 
Manojkumar, Towards reliable and secure resource scheduling in clouds, in: Computer Communication and Informatics (ICCCI), 2014 International Conference on, 2014, pp. 1–5, doi:10.1109/ICCCI.2014.6921757. [16] M.R. Islam, M. Habiba, Data intensive dynamic scheduling model and algorithm for cloud computing security, J Comput. 9 (8) (2014) 1796–1808. [17] J. Kolodziej, F. Xhafa, Integration of task abortion and security requirements in ga-based meta-heuristics for independent batch grid scheduling, Comput. Math. Appl. 63 (2) (2012) 350–364, doi:10.1016/j.camwa.2011.07.038. [18] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009. [19] M. Wooldridge, N.R. Jennings, Intelligent agents: theory and practice, Knowl. Eng. Rev. 10 (02) (1995) 115–152. [20] M. McDowell, Security Tip (ST04-015): Understanding Denial-of-Service Attacks, 2009. [21] S.T. Zargar, J. Joshi, D. Tipper, A survey of defense mechanisms against distributed denial of service (ddos) flooding attacks, IEEE Commun. Surv. Tut. 15 (4) (2013) 2046–2069, doi:10.1109/SURV.2013.031413.00127. [22] S.Y. Zhu, R. Hill, M. Trovati, Guide to Security Assurance for Cloud Computing, Springer, 2015. [23] T. Mather, S. Kumaraswamy, S. Latif, Cloud Security and Privacy an Enterprise Perspective on Risks and Compliance, O’Reilly Media, Inc., 2009. [24] F. Palmieri, S. Ricciardi, U. Fiore, M. Ficco, A. Castiglione, Energy-oriented denial of service attacks: an emerging menace for large cloud infrastructures, J. Supercomput. 71 (5) (2015) 1620–1641. [25] M. Ficco, F. Palmieri, Introducing fraudulent energy consumption in cloud infrastructures: A New generation of denial-of-Service attacks, IEEE Syst. J. (99) (2015) 1–11, doi:10.1109/JSYST.2015.2414822. [26] C.L. Schuba, I.V. Krsul, M.G. Kuhn, E.H. Spafford, A. Sundaram, D. Zamboni, Analysis of a denial of service attack on TCP, in: Security and Privacy, 1997. Proceedings., 1997 IEEE Symposium on, IEEE, 1997, pp. 208–223. [27] P. Ferguson, D. 
Senie, Network ingress filtering: Defeating denial of service attacks which employ IP source address spoofing, Technical Report, 1997. [28] F. Zhou, M. Goel, P. Desnoyers, R. Sundaram, Scheduler vulnerabilities and coordinated attacks in cloud computing, J. Comput. Secur. 21 (4) (2013) 533–559. [29] D.R. Stinson, Cryptography: Theory and Practice, CRC press, 2005. [30] Secure hash standard (SHS), Technical Report, 2015. 10.6028/NIST.FIPS.180-4. [31] A. Jakobik, D. Grzonka, J. Kolodziej, H. Gonzalez-Velez, Towards Secure Non-Deterministic Meta-Scheduling for Clouds, in: 30th European Conference on Modelling and Simulation, ECMS 2016, Regensburg, Germany, May 31 - June 03, 2016. Proceedings, 2016, pp. 596–602. [32] J. Kolodziej, F. Xhafa, M. Bogdanski, Secure and task abortion aware ga-based hybrid metaheuristics for grid scheduling, in: Parallel Problem Solving from Nature - PPSN XI, 11th International Conference, Kraków, Poland, September 11–15, 2010, Proceedings, Part I, 2010, pp. 526–535, doi:10.1007/978-3-642-15844-5_53. [33] J. Kołodziej, Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems, 419, Springer, 2012. [34] D. Grzonka, J. Kołodziej, J. Tao, S.U. Khan, Artificial neural network support to monitoring of the evolutionary driven security aware scheduling in computational distributed environments, Future Gener. Comput. Syst. 51 (2015) 72–86, doi:10.1016/j.future.2014.10.031. [35] Security Requirements for Cryptographic Modules. FIPS PUB 140–2, Technical Report, 2001. [36] ISO/IEC 19790 Security requirements for cryptographic modules (2012).
[37] E. Barker, Guideline for Using Cryptographic Standards in the Federal Government. SP 800-175, Technical Report, 2016. [38] Recommendation for Key Management. NIST Special Publication 800–57, Part 1, Rev. 4, Technical Report, 2016. [39] Digital Signature Standard (DSS). FIPS PUB 186–4, Technical Report, 2013. [40] K. Gkoutioudi, H.D. Karatza, A simulation study of multi-criteria scheduling in grid based on genetic algorithms, in: 2012 IEEE 10th International Symposium on Parallel and Distributed Processing with Applications, 2012, pp. 317–324, doi:10.1109/ISPA.2012.48. [41] S. Campa, M. Danelutto, M. Goli, H. González-Vélez, A.M. Popescu, M. Torquati, Parallel patterns for heterogeneous CPU/GPU architectures: structured parallelism from cluster to cloud, Future Gener. Comput. Syst. 37 (2014) 354–366, doi:10.1016/j.future.2013.12.038. [42] A. Jakóbik, A cloud-aided group RSA scheme in Java 8 environment and OpenStack software, J. Telecommun. Inf. Technol. (2) (2016) 53. [43] Pseudo-random number generation. URL: http://en.cppreference.com/w/cpp/numeric/random.