Editors’ introduction


Performance Evaluation 67 (2010) 583–584


Guest editorial


This special issue originated with the Seventh ACM Workshop on Software and Performance (WOSP), held in Princeton, New Jersey, in June 2008. The topic of the workshop was methods and tools for addressing performance concerns in software design, configuration and execution. Thirteen papers were submitted in response to an open call, and the eight papers presented here (of which six descend from workshop papers) were selected. Research on software and performance can be categorized in various ways, such as:

• insight based on measurement or on predictive modeling,
• early analysis using predictive models, or later analysis based on measurement,
• concern with software architecture and design, versus configuration and tuning.

These eight papers will be discussed under three headings: modeling techniques (three papers), system design and configuration issues (four papers), and novel performance measures (one paper).

Predictive modeling of software performance has seen intense activity in recent years, and derivation of the performance model structure and parameters from the software itself, or from design models (e.g. in UML), has been a major theme in WOSP. Wang and Herkersdorf describe a source-level simulator (in which the simulation logic uses source code derived from a design model) for embedded systems. The execution demand parameters for code segments are derived from execution of instrumented low-level code, in a way that captures the effect of compiler optimizations. When cache effects are included in the simulation, excellent accuracies (errors of a few percent at most) are achieved.

Happe et al. use a simulation model to describe message-intensive distributed applications, with a focus on how the message-related aspects are incorporated into the model. They use “performance completions”, which are submodels for the messaging functions, inserted in the model where messages are passed. The goal is a separation of concerns, and reuse of the completion submodels wherever similar messaging is used. They parameterize the completions (e.g. for message sizes) and treat them as components in their Palladio workbench.

Zhao and Thomas have quite different concerns: finding efficient solution techniques for an analytic state-based model which is subject to state explosion. They use a case study of key-based communication security to demonstrate the efficiency and accuracy of approximations to the state-based solution, using first queues and then a fluid approximation. They achieve errors of just a few percent, and use the approximations to study trade-offs in designs for distributing the keys.

Guidance for decisions on software architecture, design and run-time configuration is the payoff for software performance insights, and there are four papers related to this question. An important opportunity arises in component-based systems, where performance measurements, models or parameters for components may be re-used in predicting the performance of composed systems. Koziolek gives a wide-ranging survey of techniques that can be used to capture the information and make the predictions.

Moreno and Smith contribute to the usability of predictive performance modeling of software by describing (1) an improved model-interchange format for extended queueing models, and (2) a transformation from a software component-composition specification (in CCL) to a performance model. Using a model-interchange format allows the use of alternative model solvers. They provide a case study giving a model for a robot controller.

Menasce, Casalicchio, and Dubey consider the configuration of a Service-Oriented Architecture by selecting the services to be included, so as to minimize the mean response time (notice that this too is a kind of component-based system). The response time distribution of each service is assumed known from measurement, and a business process model defines the service requirements. A fast heuristic algorithm is described which comes within a few percent (in QoS) of a full optimization.
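As a loose illustration of this kind of service selection (not the authors' actual algorithm), the following Python sketch greedily assigns one candidate service per activity of a business process, trading mean response time against a cost budget; the activities, figures, and greedy repair rule are all hypothetical.

# Hedged sketch of heuristic service selection for an SOA business process:
# pick one candidate service per activity so that the summed mean response
# time is minimized within a cost budget. All names and numbers are invented.

# candidates[activity] = list of (service, mean_response_time_s, cost)
candidates = {
    "authenticate": [("authA", 0.12, 5.0), ("authB", 0.20, 2.0)],
    "query":        [("qryA", 0.30, 4.0), ("qryB", 0.45, 1.5)],
    "report":       [("repA", 0.25, 3.0), ("repB", 0.40, 1.0)],
}

def greedy_select(candidates, budget):
    # Start from the fastest service for every activity; while over budget,
    # swap in the cheaper alternative that adds the least response time per
    # unit of cost saved.
    choice = {a: min(opts, key=lambda s: s[1]) for a, opts in candidates.items()}
    while sum(s[2] for s in choice.values()) > budget:
        best = None
        for a, opts in candidates.items():
            cur = choice[a]
            for s in opts:
                if s[2] < cur[2]:
                    ratio = (s[1] - cur[1]) / (cur[2] - s[2])
                    if best is None or ratio < best[0]:
                        best = (ratio, a, s)
        if best is None:
            raise ValueError("no selection fits the budget")
        _, a, s = best
        choice[a] = s
    return choice

choice = greedy_select(candidates, budget=9.0)
print(choice, sum(s[1] for s in choice.values()))  # selection, total mean response time

A greedy repair of this kind can stray from the optimum in adversarial cases, which is why comparing a fast heuristic against a full optimization, as the paper does, is the natural validation.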
Xu considers design optimization. She applies rule-based automation of known principles and patterns for performance improvement to a model of an application (assumed to be derived from a UML design model). Each change to the model has a corresponding change in the design, sometimes requiring further designer effort (for instance, where a critical block of code is identified as having a high benefit from reducing its runtime). In examples, her PB framework delivers reductions in predicted response times of between 50% and over 90%.
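In the same spirit, here is a minimal sketch of rule-based improvement; it is a simplification, not Xu's PB framework. Candidate transformation rules are applied to a toy performance model and the change with the best predicted response time is kept. The model, the two rules, and the fixed search depth are all hypothetical.

# Toy model: a serial chain of operations with CPU demands (ms per call).
demands = {"parse": 40.0, "validate": 10.0, "render": 30.0}

def response_time(d):
    return sum(d.values())  # toy predictor: serial execution, no contention

def shrink_bottleneck(d):
    # Rule 1: flag the costliest operation for code-level optimization,
    # modeled here as halving its demand (requires designer effort).
    b = max(d, key=d.get)
    out = dict(d)
    out[b] *= 0.5
    return out, "shrink demand of '%s'" % b

def merge_cheapest_pair(d):
    # Rule 2: merge the two cheapest operations to remove per-call
    # overhead, modeled as a fixed 5 ms saving.
    a, b = sorted(d, key=d.get)[:2]
    out = {k: v for k, v in d.items() if k not in (a, b)}
    out[a + "+" + b] = d[a] + d[b] - 5.0
    return out, "merge '%s' and '%s'" % (a, b)

model = demands
for _ in range(3):  # bounded search depth for this sketch
    base = response_time(model)
    new, applied = min((rule(model) for rule in (shrink_bottleneck, merge_cheapest_pair)),
                       key=lambda c: response_time(c[0]))
    if response_time(new) >= base:
        break
    print("%s: %.1f -> %.1f ms" % (applied, base, response_time(new)))
    model = new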


Reinecke and Wolter evaluate measurements on the execution of web services from the point of view of adaptivity, with a definition of adaptivity based on a payoff function for the system. They define a notion of beneficial and non-beneficial adaptation steps, and an adaptivity measure that uses the relative frequency of beneficial steps. They apply this to three strategies for setting a restart time for web service requests, to see how well each adapts to bursts of packet losses.

Elaine Weyuker
AT&T Labs – Research, Florham Park, NJ, USA

Murray Woodside
Carleton University, Ottawa, Canada