Modeling and evaluation of highly complex computer systems architectures


Journal of Computational Science 22 (2017) 126–130


Editorial

Abstract

Modern computer-based systems are characterized by several complexity dimensions: a non-exhaustive list includes scale, architecture, distribution, variability, flexibility, dynamics, workloads, time constraints, dependability, availability, security, and performance. The design, implementation, operation, maintenance, and evolution of such systems require informed decisions, which must be grounded in techniques and tools that provide advance knowledge about the behavior of every subsystem (hardware, software, and their interactions), of the whole system, and of the relationship between the system and the external world, considering workloads, communication, and the sensing of physical interactions. Performance prediction, and behavior prediction in general, may exploit simulation-based approaches or analytical techniques to evaluate in advance the effects of design choices, variability under different workloads, or emerging behavior, and can provide valuable support in all phases of a system's lifecycle by means of proper modeling approaches. In this Special Issue we present contributions that offer a glance at modeling and evaluation of complex computer-based systems, chosen to provide a view of different domains and different approaches, with a main focus on simulation techniques and related applications. © 2017 Published by Elsevier B.V.

1. Introduction

The increasing availability of computing resources enables the design and implementation of ever more sophisticated services that rely on large-scale, high-efficiency computational and communication architectures. This is a foundational resource for computational science, which benefits from theoretical results and algorithmic innovation, but also needs an abundance of raw computing power to face the challenges of increasingly complex problems. State-of-the-art technological solutions, such as federated multi-site cloud and grid systems, heterogeneous systems, and large-scale systems of any kind, are characterized by the fact that scale and distribution are fundamental parameters to be considered in their design and evaluation. High architectural complexity arises from the extent of the software and hardware subsystems, the relations between the two, the interactions with layers of middleware or abstractions, the management of the HW/SW architecture, the presence of heterogeneous subsystems, the interface with the environment, and the nature of the requirements, which may be interdependent or strict about time, dependability, safety, or security. Designing and assessing architecturally complex computer systems is a classic but still open challenge, as the identification of new solutions pushes the goals for the near future beyond current limits. Modeling, analysis, and evaluation tools and methodologies are the key that should be leveraged to turn complexity into opportunity. Classical analytical and simulation-based modeling and evaluation approaches are the base on which suitable abstractions to manage

the complexity of the analysis should be built, learning from experience and exploiting cross-fertilization between different application fields and different approaches. This Special Issue primarily encompasses practical and methodological approaches that advance research in all aspects of modeling and simulation for architecturally complex systems. Contributions range from advanced technologies to applications and innovative solutions that concurrently master architectural, application, and system specifications.

2. Modeling highly complex computer systems

Modeling is a powerful and flexible conceptual tool that can be applied to support the whole lifecycle of highly complex computer systems (HCCS). Models are used to define the architectural and software parts of a system, to describe its constraints, to explore its behaviors, to gain knowledge about an unknown system, to design for non-functional specifications such as performance, dependability, availability, timeliness, or conformity, to document a system, to generate hardware and software, to assess a system after an incident, and to prove properties. A complete list of cases cannot be given, due to the extreme variety of goals, approaches, methods, conceptual and practical tools, and applications; consequently, we provide some relevant examples, chosen from the application fields targeted by the papers selected for this Special Issue, and discuss some of the main issues that can be managed by a savvy application of different modeling techniques and that we consider of special interest for the readers of this Journal.


2.1. Models in the lifecycle of HCCS

The lifecycle of HCCS has some peculiarities. HCCS may be one-of-a-kind systems that are not produced in large numbers but designed around a given need; they may be composed of standard subsystems and evolved under special requirements; or they may be critical because human life depends on their correct operation. To offer some examples, the first case includes mission support systems for space exploration, the second very large data centers, and the third high-speed railway or aerospace systems or, more familiar, modern automotive automated driving systems. The main problem is to ensure correct behavior and full exploitation of these systems with a certain degree of trust that their evolution will follow what designers and administrators prescribe, because monitoring and understanding the overall behavior may be extremely complex, or may simply happen too late to solve problems in real time. The consequences of misbehavior may affect human life, may cause extensive damage to other interacting subsystems, or may simply result in significant economic loss or missed profits. This description covers classical critical systems, but also a number of other artifacts, such as modern large-scale computing systems (e.g., cloud computing facilities), autonomic systems, geographical communication networks, and hybrid or cyber-physical systems. Roughly speaking, for the purpose of this paper we may divide the lifecycle of a system into phases: design, integration, verification and validation, operation, and maintenance.
For most of these systems the design phase may be long and articulated, and for some of them the evolutionary aspects of maintenance may be stressed to the point that maintenance can be considered an extension of the design phase, because these systems are often subject to highly variable workloads, major updates or extensions, or frequent reconfigurations. In each phase, models are of paramount importance and are considered a resource. Models may be oriented to representing a system or to actually supporting the verification of properties. In the first case, the representation may serve documentation or presentation purposes, oriented to human analysis, or drive the automatic generation of hardware or software, as in Model Driven Engineering [1], in which the Unified Modeling Language (UML) [2] or the Systems Modeling Language (SysML) [3] is used to define, by means of different modeling diagrams, the software architecture and the structure of a system, and to automatically generate, by means of transformations, the actual software and components of the system. In the second case, which is the focus of this Special Issue, models are meant to be processed in order to produce results according to proper metrics, or to demonstrate theses about hypotheses formulated about the system. Processing may happen by means of analytical techniques, e.g. the automatic derivation of equations describing some aspects of the evolution of the system, or by simulation, which may mimic the actual behavior of the modeled system or may exploit general simulation algorithms capable of providing information about it. In the following, we only consider processable models. In the design phase, modeling is generally adopted as a means to explore solutions before developing prototypes: for HCCS, prototypes may be very expensive or simply not viable.
Consider, for example, systems that must strictly adhere to safety standards and require a proof of conformance in order to be viable [4], such as railway or aerospace systems [5–7], in which models may help reduce the use of expensive prototypes and lower the cost of the overall design phase. Or consider very large computing facilities, where a small-scale prototype will not reveal all the problems that may arise at large scale, such as communication problems, energy consumption effects [8,9], workload interactions [10,11],


scheduling effects at various scales [12–14], and all the adaptations to the evolution of the workload over the lifespan of the system [15]; in this case models may help in choosing between alternative approaches whose effects show only at large scale, after big investments, or may signal in advance emerging problems such as saturation, competition for resources, or side effects. In the integration phase, the problem lies in possible mismatches or, again, interactions between components that may lead to unexpected problems. For example, in high-speed transportation systems communication delays may lead, under certain conditions, to the unnecessary execution of safety procedures because of the false detection of a major problem; proper exploitation of performance models may reveal this condition and enable the right design choices [16]. It is also possible to use models to verify systems by formal methods [17,18], e.g. to check the correctness of communication protocols and avoid misbehavior or locking in systems with shared buses. In the verification and validation phase, measures and traces are available to compare the execution of a system with the designed behavior, to account for variations in configurations or anomalies, and to verify the coherence of performance measures such as throughput, availability, dependability, and speed; models may be updated or integrated with additional elements in order to maintain a constantly valid reference that supports validation, verification, and anomaly detection, revealing incoming failures or wear of components that are not directly observable. This is again the case of large computer-controlled transportation systems, in which the complexity level is very high and not manageable without proper reference models, and, in general, of systems that must be formally verified and validated throughout their lifetime.
In the operation phase, models may be used for system management and administration, supporting decisions with real-time monitoring information, or may be automatically produced by means of data mining [19], process mining [20,21], or artificial intelligence [22,23] techniques, providing non-trivial information hidden in system traces. In the maintenance phase, especially in the case of evolutionary maintenance, the same approaches seen for design and operation apply, with the advantage of the availability of the system itself and its traces as a reference. Models may be used to ground reengineering, major updates, or redesign.

2.2. Making and using models

Readers of this Journal are already familiar with many modeling approaches and solution techniques. Besides classical mathematical models, timeless or timed, deterministic or stochastic, a number of specialized higher-level modeling approaches are used for HCCS: for performance modeling, for example, Petri Nets [24,25], Fault Trees [26,27], Queueing Networks [28], multiformalism/multiparadigm/multisolution approaches [29,30], more scalable analytical techniques [31], less structured, naive, or ad hoc approaches [32–34], and tool-related modeling languages. Modeling approaches adapt to the application field, to the goals, and to the professional background of the modeler. In most cases, models have to be processed in order to produce useful information. The topic is far too wide to treat in detail here: in general, analytical solutions (and related approximate solutions) produce results by applying a computing process to a mathematical model derived from the original model, while simulation models [35] exploit proper tools that use the model to mimic the evolution of events in the real system and reproduce its general behavior. Analytical or simulation-based solutions are used for performance prediction or dependability evaluation, and in


general for the verification of non-functional specifications, to cope with problems arising from concurrency, resource allocation, interacting requirements, architectural complexity, and time-criticality, typical of application fields at the state of the art of current HCCS research, such as high-performance computer networks [36–38], cloud and edge systems [39], cyber-physical systems [40,41], security [42–44], web-oriented software [45], and critical systems [46,47]. The papers chosen for this Special Issue offer a glance at different application fields and modeling approaches, in order to provide an up-to-date view of the most common problems and inspire the readers of this Journal to explore the wide field of modeling and simulation of HCCS, with the purpose of fostering cross-fertilization and innovative applications.
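To make the contrast between analytical and simulation-based solution concrete, consider the textbook single-server M/M/1 queue, whose mean population has the closed form N = ρ/(1 − ρ) with ρ = λ/μ. The following sketch (illustrative parameter values, not tied to any system discussed above) computes that analytical figure and recovers it with a minimal discrete-event simulation:

```python
import random

# Analytical M/M/1 result: mean number in system N = rho / (1 - rho).
lam, mu = 0.8, 1.0               # arrival and service rates (illustrative)
rho = lam / mu
n_analytical = rho / (1 - rho)   # = 4.0 for rho = 0.8

def simulate_mm1(lam, mu, num_customers=200_000, seed=7):
    """Discrete-event estimate of the mean population of an M/M/1 queue."""
    rng = random.Random(seed)
    arrival = 0.0        # arrival time of the current customer
    depart = 0.0         # departure time of the previous customer
    total_delay = 0.0    # accumulated time-in-system over all customers
    for _ in range(num_customers):
        arrival += rng.expovariate(lam)          # Poisson arrivals
        start = max(arrival, depart)             # FIFO: wait if server busy
        depart = start + rng.expovariate(mu)     # exponential service
        total_delay += depart - arrival          # time in system
    # Little's law: N = (effective arrival rate) * (mean time in system)
    return (num_customers / depart) * (total_delay / num_customers)

n_simulated = simulate_mm1(lam, mu)
print(f"analytical N = {n_analytical:.2f}, simulated N = {n_simulated:.2f}")
```

The analytical route is exact and instantaneous; the simulation route converges to the same value only statistically, but generalizes directly to systems for which no closed form exists, which is the trade-off discussed above.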

3. A brief review of the accepted articles of this Special Issue

The first paper of this Special Issue concerns architectural issues in high-performance computing. In "High performance system based on cloud and beyond: Jungle computing" the authors analyze current and future perspectives of large computing architectures, dealing with the extension of common cloud computing infrastructures by joining together all available computing facilities into a single, flexible resource, aiming at the best possible exploitation techniques to balance use patterns and reduce costs and energy consumption. The authors exploit simulation, by means of an ad hoc approach, in support of the development of a Jungle system, with tests in a real scenario, and show that a significant improvement in resource usage can be achieved.

The second paper deals with another critical aspect of the design of large computing infrastructures: networks. Networking is at present the main potential bottleneck in large computing infrastructures, so modeling and simulation may greatly benefit from the availability of emulation tools that provide developers and administrators with the means to build effective testbeds and networks in efficient design and development cycles. In "Control frameworks in network emulation testbeds: A survey" the authors provide a comprehensive state of the art of existing architectures, issues, open problems, and solutions in the promising research area of network emulation testbeds, discussing the available large-scale and geographically distributed network testbeds.

The third paper deals with networking problems from the perspective of network operators, who have to deal with massive traffic estimation for resource allocation and network development.
In "Traffic matrices estimation with software-defined NFV: Challenges and opportunities" the authors present the problem of estimating traffic matrices, the abstraction modeling overall network traffic in large-scale networking, by inferring them from link-level measured traffic, and provide solutions to support the current shift from physical appliances to virtualized services, which significantly impacts traffic dynamics, in order to keep resource exploitation efficient.

The fourth paper is about the domain of the Internet of Things (IoT), characterized by large numbers of sensors variously interconnected into networks capable of collecting large volumes of heterogeneous data. In "Adaptive data rate control in low power wide area networks for long range IoT services" the authors use simulation to support the design of long-range wide-area-network-based IoT networks, and propose a congestion classifier that adopts logistic regression and modified adaptive data rate control to achieve efficiency in data transmission.

The fifth paper introduces another very relevant domain, security, in the context of critical infrastructures. In "Simulation platform for cyber-security and vulnerability analysis on Critical Infrastructures" the authors face the challenge of assessing security and identifying vulnerabilities in complex architectures devoted

to implementing the mechanisms that let critical infrastructures operate and provide services. The authors propose a hybrid and distributed simulation platform with the aim of providing a cost-effective tool for developers and administrators of large-scale critical infrastructure systems, integrating different simulators to support penetration tests, vulnerability analysis, and monitoring of resources.

The sixth paper considers a different aspect of security, related to the generation of threat signatures for the efficient detection of malicious content in network traffic. In "Design and evaluation of a system for network threat signatures generation" the authors present a method to pre-generate, according to a given preexisting model, signatures of zero-day attacks by means of an algorithm that accelerates the production and update of a signature database, allowing the timely identification of effective threats, and validate the approach against real and synthetic network traces by means of a simulator.

The seventh paper focuses on the last of the application domains of this Special Issue, web-based applications. In "Modelling a non-stationary bots' arrival process at an e-commerce web site" the authors offer a model for web traffic analysis based on real traces, with the purpose of providing a consistent model of the workload that automatic applications, namely web bots, generate on e-commerce web sites. The authors apply regression analysis to obtain a mathematical model of the features of bot-generated traffic and find a heavy tail in the resulting traffic distributions.

The eighth paper deals with low-level architectural complexity and proposes a simulation approach at the hardware level.
In "An investigative analysis of RFiop architecture: RF-memory path to address on-package I/O pad and memory controller scalability" the authors propose a simulation-based framework for the evaluation of RFiop, a radio frequency (RF) I/O pad-scalable package-based memory organization implemented in an architecture in which the traditional memory path is replaced with an RF-based one, with special attention to power issues.

To close the Special Issue, we chose two papers focusing on more computation-oriented approaches. The first introduces a novel optimization technique that may be useful as a complement to simulation, to search for solutions and help reduce the parameter set of complex problems. In "Artificial Acari optimization as a new strategy for global optimization of multimodal functions" the authors introduce the technique and illustrate its advantages and effectiveness, comparing it with other comparable approaches by means of benchmarking. The last paper of this Special Issue, "A parallel PDE-based numerical algorithm for computing the Optical Flow in hybrid systems", proposes a fine-to-coarse parallelization strategy exploiting a parallel hybrid computing architecture for a case study dealing with the Optical Flow numerical problem. The authors approach the problem with an analytical model based on partial differential equations, solved by a parallel multilevel implementation that combines code running on graphics processing units with a standard cluster.
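Heavy tails of the kind reported in the bot-traffic study are typically diagnosed by estimating a tail index from the largest observations. As a hedged, generic illustration (a standard Hill estimator on synthetic Pareto data, not the authors' actual method or data), such a check can be sketched as:

```python
import math
import random

# Synthetic heavy-tailed samples: Pareto with tail index alpha = 1.5
# (infinite variance), drawn by inverse transform: X = xm * (1-U)^(-1/alpha).
rng = random.Random(42)
alpha_true, xm = 1.5, 1.0
samples = [xm * (1.0 - rng.random()) ** (-1.0 / alpha_true)
           for _ in range(100_000)]

def hill_estimator(data, k):
    """Hill estimate of the tail index from the k largest order statistics."""
    tail = sorted(data)[-(k + 1):]   # the k+1 largest values, ascending
    x_k = tail[0]                    # threshold: the (k+1)-th largest value
    return k / sum(math.log(x / x_k) for x in tail[1:])

alpha_hat = hill_estimator(samples, k=2000)
print(f"true alpha = {alpha_true}, Hill estimate = {alpha_hat:.2f}")
```

A tail index below 2 indicates infinite variance, the regime in which mean-based capacity planning for a web server workload becomes unreliable; on real traces the estimate is usually plotted against k to find a stable plateau.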

4. Conclusions

In this Special Issue we have tried to offer the readers of this Journal an introduction to the use of computing as a support for modeling and simulation of HCCS. This research field has a strong background and a solid methodological base, but HCCS push the limits of existing approaches and foster a continuous evolution of the field. Our wish is that this Special Issue will contribute to stimulating the interest of researchers from different fields and encourage cross-fertilization.


Acknowledgements

This publication is based upon work from COST Action IC1406 High-Performance Modelling and Simulation for Big Data Applications (cHiPSet), supported by COST (European Cooperation in Science and Technology). We are very grateful to all the authors and reviewers who helped us in the making of this Special Issue. Our special thanks go to the Editor-in-Chief, Prof. Peter Sloot, for this opportunity and for his patience.

References

[1] A. Rodrigues Da Silva, Model-driven engineering: a survey supported by the unified conceptual model, Comput. Lang. Syst. Struct. 43 (2015) 139–155, http://dx.doi.org/10.1016/j.cl.2015.06.001.
[2] J. Rumbaugh, I. Jacobson, G. Booch, Unified Modeling Language Reference Manual, 2nd ed., Pearson Higher Education, 2004.
[3] OMG, OMG Systems Modeling Language (OMG SysML), Version 1.3, 2012, http://www.omg.org/spec/SysML/1.3/.
[4] R. Panesar-Walawege, M. Sabetzadeh, L. Briand, Supporting the verification of compliance to safety standards via model-driven engineering: approach, tool-support and empirical validation, Inform. Softw. Technol. 55 (5) (2013) 836–864, http://dx.doi.org/10.1016/j.infsof.2012.11.009.
[5] N. Othman, E. Legara, V. Selvam, C. Monterola, A data-driven agent-based model of congestion and scaling dynamics of rapid transit systems, J. Comput. Sci. 10 (2015) 338–350, http://dx.doi.org/10.1016/j.jocs.2015.03.006.
[6] S. Litescu, V. Viswanathan, M. Lees, A. Knoll, H. Aydt, Information impact on transportation systems, J. Comput. Sci. 9 (2015) 88–93, http://dx.doi.org/10.1016/j.jocs.2015.04.019.
[7] V. Viswanathan, D. Zehe, J. Ivanchev, D. Pelzer, A. Knoll, H. Aydt, Simulation-assisted exploration of charging infrastructure requirements for electric vehicles in urban environments, J. Comput. Sci. 12 (2016) 1–10, http://dx.doi.org/10.1016/j.jocs.2015.10.012.
[8] S. Abd, S. Al-Haddad, F. Hashim, A. Abdullah, S. Yussof, An effective approach for managing power consumption in cloud computing infrastructure, J. Comput. Sci. 21 (2017) 349–360, http://dx.doi.org/10.1016/j.jocs.2016.11.007.
[9] A. Fanfakh, J.-C. Charr, R. Couturier, A. Giersch, Optimizing the energy consumption of message passing applications with iterations executed over grids, J. Comput. Sci. 17 (2016) 562–575, http://dx.doi.org/10.1016/j.jocs.2016.07.012.
[10] A. Jobava, A. Yazidi, B. Oommen, K. Begnum, On achieving intelligent traffic-aware consolidation of virtual machines in a data center using Learning Automata, J. Comput. Sci. (2017), http://dx.doi.org/10.1016/j.jocs.2017.08.005.
[11] E. Barbierato, M. Gribaudo, M. Iacono, A performance modeling language for Big Data architectures, in: W. Rekdalsbakken, R.T. Bye, H. Zhang (Eds.), ECMS, European Council for Modeling and Simulation, 2013, pp. 511–517.
[12] M.-A. Vasile, F. Pop, R.-I. Tutueanu, V. Cristea, J. Kolodziej, Resource-aware hybrid scheduling algorithm in heterogeneous distributed computing, Future Gen. Comput. Syst. (2014), http://dx.doi.org/10.1016/j.future.2014.11.019.
[13] J. Kolodziej, F. Xhafa, L. Kolanko, Hierarchic Genetic Scheduler of Independent Jobs in Computational Grid Environment, 2009, pp. 108–114.
[14] X. Zeng, S. Garg, Z. Wen, P. Strazdins, A. Zomaya, R. Ranjan, Cost efficient scheduling of MapReduce applications on public clouds, J. Comput. Sci. (2017), http://dx.doi.org/10.1016/j.jocs.2017.07.017.
[15] J. Shi, J. Luo, F. Dong, J. Jin, J. Shen, Fast multi-resource allocation with patterns in large scale cloud data center, J. Comput. Sci. (2017), http://dx.doi.org/10.1016/j.jocs.2017.05.005.
[16] F. Flammini, S. Marrone, M. Iacono, N. Mazzocca, V. Vittorini, A multiformalism modular approach to ERTMS/ETCS failure modelling, Int. J. Reliab. Qual. Saf. Eng. 21 (1) (2014) 1450001-1–1450001-29, http://dx.doi.org/10.1142/S0218539314500016.
[17] E.M. Clarke, J.M. Wing, Formal methods: state of the art and future directions, ACM Comput. Surv. 28 (4) (1996) 626–643, http://dx.doi.org/10.1145/242223.242257.
[18] C. Fetzer, C. Weidenbach, P. Wischnewski, Compliance, functional safety and fault detection by formal methods, in: Leveraging Applications of Formal Methods, Verification and Validation: Discussion, Dissemination, Applications: 7th International Symposium, ISoLA 2016, Imperial, Corfu, Greece, October 10–14, 2016, Proceedings, Part II, Springer International Publishing, Cham, 2016, pp. 626–632.
[19] S.-H. Liao, P.-H. Chu, P.-Y. Hsiao, Data mining techniques and applications – a decade review from 2000 to 2011, Expert Syst. Appl. 39 (12) (2012) 11303–11311, http://dx.doi.org/10.1016/j.eswa.2012.02.063.
[20] W. Van der Aalst, Process Mining: Data Science in Action, 2016, http://dx.doi.org/10.1007/978-3-662-49851-4.
[21] W. Van der Aalst, B. Van Dongen, J. Herbst, L. Maruster, G. Schimm, A. Weijters, Workflow mining: a survey of issues and approaches, Data Knowl. Eng. 47 (2) (2003) 237–267, http://dx.doi.org/10.1016/S0169-023X(03)00066-1.
[22] A. Mozaffari, M. Azimi, M. Gorji-Bandpy, Ensemble mutable smart bee algorithm and a robust neural identifier for optimal design of a large scale power system, J. Comput. Sci. 5 (2) (2014) 206–223, http://dx.doi.org/10.1016/j.jocs.2013.10.007.
[23] C. Esposito, M. Ficco, F. Palmieri, A. Castiglione, Smart cloud storage service selection based on fuzzy logic, theory of evidence and game theory, IEEE Trans. Comput. PP (99) (2015) 1, http://dx.doi.org/10.1109/TC.2015.2389952.
[24] T. Murata, Petri nets: properties, analysis and applications, Proc. IEEE 77 (4) (1989) 541–580, http://dx.doi.org/10.1109/5.24143.
[25] D. Kartson, G. Balbo, S. Donatelli, G. Franceschinis, G. Conte, Modelling with Generalized Stochastic Petri Nets, John Wiley & Sons, Inc., New York, NY, USA, 1994.
[26] W. Lee, D. Grosh, F. Tillman, C. Lie, Fault tree analysis, methods, and applications – a review, IEEE Trans. Reliab. R-34 (3) (1985) 194–203, http://dx.doi.org/10.1109/TR.1985.5222114.
[27] E. Ruijters, M. Stoelinga, Fault tree analysis: a survey of the state-of-the-art in modeling, analysis and tools, Comput. Sci. Rev. 15 (2015) 29–62, http://dx.doi.org/10.1016/j.cosrev.2015.03.001.
[28] S. Balsamo, V.D.N. Personè, P. Inverardi, A review on queueing network models with finite capacity queues for software architectures performance prediction, Perform. Eval. 51 (2-4) (2003) 269–288.
[29] M. Gribaudo, M. Iacono, An introduction to multiformalism modeling, in: M. Gribaudo, M. Iacono (Eds.), Theory and Application of Multi-Formalism Modeling, IGI Global, Hershey, 2014, pp. 1–16.
[30] E. Barbierato, M. Gribaudo, M. Iacono, Exploiting multiformalism models for testing and performance evaluation in SIMTHESys, in: Proceedings of the 5th International ICST Conference on Performance Evaluation Methodologies and Tools – VALUETOOLS 2011, 2011.
[31] E. Barbierato, M. Gribaudo, M. Iacono, Modeling and evaluating the effects of Big Data storage resource allocation in global scale cloud architectures, Int. J. Data Warehousing Mining 12 (2) (2016) 1–20, http://dx.doi.org/10.4018/IJDWM.2016040101.
[32] E. Barbierato, M. Gribaudo, M. Iacono, Modeling Apache Hive based applications in Big Data architectures, in: Proceedings of the 7th International Conference on Performance Evaluation Methodologies and Tools, ValueTools'13, ICST, Brussels, Belgium, 2013, pp. 30–38, http://dx.doi.org/10.4108/icst.valuetools.2013.254398.
[33] L. Rogovchenko-Buffoni, A. Tundis, M. Hossain, M. Nyberg, P. Fritzson, An integrated toolchain for model based functional safety analysis, J. Comput. Sci. 5 (3) (2014) 408–414, http://dx.doi.org/10.1016/j.jocs.2013.08.009.
[34] E. Barbierato, M. Gribaudo, M. Iacono, S. Marrone, Performability modeling of exceptions-aware systems in multiformalism tools, in: ASMTA, 2011, pp. 257–272.
[35] S. Robinson, Simulation: The Practice of Model Development and Use, John Wiley & Sons, Inc., New York, NY, USA, 2004.
[36] M. Gribaudo, M. Iacono, D. Manini, Three Layers Network Influence on Cloud Data Center Performances, 2016, pp. 621–627.
[37] F. Celik, Devs-m: a discrete event simulation framework for MANETs, J. Comput. Sci. 13 (2016) 26–36, http://dx.doi.org/10.1016/j.jocs.2015.11.012.
[38] J. Ben-Othman, K. Bessaoud, A. Bui, L. Pilard, Self-stabilizing algorithm for efficient topology control in wireless sensor networks, J. Comput. Sci. 4 (4) (2013) 199–208, http://dx.doi.org/10.1016/j.jocs.2012.01.003.
[39] S. Yi, C. Li, Q. Li, A Survey of Fog Computing: Concepts, Applications and Issues, 2015, pp. 37–42, http://dx.doi.org/10.1145/2757384.2757397.
[40] S. Khaitan, J. McCalley, Design techniques and applications of cyberphysical systems: a survey, IEEE Syst. J. 9 (2) (2015) 350–365, http://dx.doi.org/10.1109/JSYST.2014.2322503.
[41] R. Seiger, C. Keller, F. Niebling, T. Schlegel, Modelling complex and flexible processes for smart cyber-physical environments, J. Comput. Sci. 10 (2015) 137–148, http://dx.doi.org/10.1016/j.jocs.2014.07.001.
[42] Suryambika, A. Bajpai, S. Singh, A Survey on Security Analysis in Cloud Computing, Springer India, New Delhi, 2016, pp. 249–262.
[43] R. Roman, J. Lopez, M. Mambo, Mobile edge computing, Fog et al.: a survey and analysis of security threats and challenges, 2017, arXiv abs/1602.00484.
[44] S. Hosseini, M. Azgomi, A. Rahmani, Malware propagation modeling considering software diversity and immunization, J. Comput. Sci. 13 (2016) 49–67, http://dx.doi.org/10.1016/j.jocs.2016.01.002.
[45] S. Linck, E. Mory, J. Bourgeois, E. Dedu, F. Spies, Adaptive multimedia streaming using a simulation test bed, J. Comput. Sci. 5 (4) (2014) 616–623, http://dx.doi.org/10.1016/j.jocs.2014.02.004.
[46] J. Ostroff, Formal methods for the specification and design of real-time safety critical systems, J. Syst. Softw. 18 (1) (1992) 33–60, http://dx.doi.org/10.1016/0164-1212(92)90045-L.
[47] A. Kornecki, J. Zalewski, Certification of software for real-time safety-critical systems: state of the art, Innov. Syst. Softw. Eng. 5 (2) (2009) 149–161, http://dx.doi.org/10.1007/s11334-009-0088-1.

Mauro Iacono
Dipartimento di Matematica e Fisica, Università degli Studi della Campania “Luigi Vanvitelli”, viale Lincoln 5, 81100 Caserta, Italy


Marco Gribaudo
Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, via Ponzio 34/5, 20133 Milano, Italy

Joanna Kołodziej
Institute of Computer Science, Cracow University of Technology, ul. Warszawska 24, 31-155 Cracow, Poland

Florin Pop
Department of Computer Science and Engineering, University Politehnica of Bucharest, Splaiul Independentei 313, 060042 Bucharest, Romania

E-mail addresses: [email protected] (M. Iacono), [email protected] (M. Gribaudo), [email protected] (J. Kołodziej), fl[email protected] (F. Pop).