
Performance Evaluation 22 (1995) 59-74

A tool to enhance model exploitation

Jane Hillston

Department of Computer Science, University of Edinburgh, Mayfield Road, Edinburgh, UK EH9 3JZ

Abstract

Recent work at the University of Edinburgh has aimed to enhance the use of performance models and to make such models more accessible to non-expert users such as designers. This work, part of the Integrated Modelling Support Environment (IMSE) project supported by the CEC, resulted in the development of a tool for creating and running experiments within a performance modelling context. This paper describes the main features of this tool, the Experimenter, and the support environment within which it was developed. We also explain how these features provide a new level of support for the performance model user.

Keywords: Model interaction; Experimentation; Graphical interface; Reuse of models; Modelling support environment

1. Introduction

The increasing complexity of computer systems has had a significant effect upon the field of performance evaluation. The importance of performance analysis increases as the behaviour of systems becomes less straightforward and intuitive. Several people have recognised this trend and identified the need to involve performance evaluation in the design process [1,2]. This has led to some effort towards making the existing modelling techniques accessible to non-expert modellers, for example designers.

Meanwhile, complex systems require complex models to capture their behaviour. This complexity may make models difficult to understand when seen in detail; indeed it poses a serious challenge to the capabilities of model construction and solution techniques. Thus, recently there has been much interest in the use of structuring paradigms within performance models, such as modules and hierarchies, and the approach of decomposition and aggregation.

Although recognition of the importance of performance analysis is now widespread, the usefulness of performance evaluation techniques is limited by the support available in the form of tools and environments. The last decade has seen considerable work on the construction of systems to support some aspects of performance analysis, and there has been some success. However, the emphasis has been on support for model construction and solution, with little or no consideration given to the details of the subsequent use of the model to investigate the underlying system.


The work described here specifically aimed to enhance the use of models once they have been constructed. It has also been found that the support provided makes models more accessible to non-expert users who may not themselves have been involved in the construction process. Thus, designers or managers, who may be interested in the results from a performance model without wanting to know the details of the underlying techniques, now have a context within which to work with the model, in terms which they understand. This support also appears to form a natural framework for the support of a more modular approach to modelling; in particular it has been readily integrated with the use of decomposition and aggregation techniques. This work was carried out during the development of the Integrated Modelling Support Environment (IMSE) 1, and resulted in a tool, the Experimenter, which is at the core of the environment. However, the design of the Experimenter is general and it is possible to envisage such a tool working with other model construction tools and environments.

The rest of this paper is structured as follows. Section 2 discusses the support currently and potentially available for model construction and model exploitation. Section 3 describes the major features of the IMSE system, while Section 4 explains how this gives rise to a new approach to the exploitation of models; Section 5 outlines how this is implemented in the Experimenter. In Section 6 some advantages of this new approach are highlighted and in the final section we draw some conclusions and point out possible directions for further work.

1 The Integrated Modelling Support Environment project was ESPRIT 2 project 2143, funded by the CEC. IMSE reports are available from BNR (Europe) Ltd., London Road, Harlow, Essex, UK.

2. Model construction and model exploitation

The tools which have appeared in the past decade represented a substantial step forward in the support given to performance analysis practitioners. Without such tools an analyst is limited to developing models using paradigms and solution techniques he can implement himself. In many of the more recent tools graphical interfaces have also been added, offering powerful visualisation facilities which can lead to a considerable speed-up in model construction. Examples of such systems are PIT, RESQ and GreatSPN [3-5]. Also, the adoption of software engineering style structuring paradigms within model construction has led to a modular, hierarchical approach to modelling being advocated [6] and included in some tools [7].

IMSE [8] was developed as part of a growing trend towards developing environments to support the whole performance evaluation process [9]. The TANGRAM system, developed at UCLA, has similar objectives [10]. These toolkits aim to provide a context within which modelling tools can be incorporated and used in conjunction with other facilities designed to assist the performance analyst. Wherever possible IMSE has been designed to support users who do not have in-depth knowledge of the tools and techniques employed. The increasing sophistication of modelling tools, with the inclusion of features such as automatic report generation, multiple solvers and integrated methodologies, has generally allowed the analyst to concentrate on those aspects of the model which describe the system rather than details of the model's internal operation.


The Experimenter aims for a further separation of model construction and model exploitation. During model construction the analyst is model-oriented, concentrating on building a model which is an accurate and appropriate representation of the system. On the other hand, during model exploitation, work should be more system-oriented, using the model as a tool with which to investigate the behaviour of the system. Within IMSE model construction is carried out within a model construction tool, whilst model exploitation is performed in the setting of the Experimenter. This separation is designed to facilitate the use of models by people other than those who built them. During experimentation each model is regarded as a black box, and is presented in terms of the system it represents, the input parameters which may be influenced by the user and the output which may be observed. Input parameters may easily be varied without detailed knowledge of the internal structure of the model.

3. IMSE

The IMSE project aimed to exploit current workstation technology and build upon recent advances in the support of modelling activities, in order to produce an integrated set of tools to help performance analysts in their work. Within IMSE there are tools available for system description, workload characterisation, experimentation upon models and report generation, supporting the use of modelling tools based on three previously established tools (PIT, GreatSPN and QNAP2 [11]). The tools are made available to the user through a graphical interface, known as the WorkBench, and by mouse and menu-driven operations on iconic objects. Via the three embedded modelling tools IMSE supports three alternative dynamic modelling paradigms and has been designed to be open to new tools and techniques [12]. The tools presently included support generalised stochastic Petri networks (PNT), queueing networks (QNET) and process interaction view simulation (PIT). Models are constructed with graphical editing tools, all based on a common data manipulation and graphical support system. Analytical, numerical and simulation solution techniques are supported.

Model execution is invoked via the paradigm-independent Experimenter, so model solution tools are generally transparent. Thus the Experimenter is central within the environment and gives greater flexibility to the treatment of models and the analysis of systems than is commonly available. In particular it supports interaction between models developed using different modelling paradigms. This is currently limited by modelling tool capabilities but shows the potential for a modular approach to model execution as well as model construction. Much of the work of the Experimenter in supporting multiple modelling paradigms and solution methods is new. However, some earlier work has been carried out in this field in the support of specific tools [7,13,14,10].

Tools for static modelling and workload analysis are also available. Static models provide a hierarchical description of the system in terms of subsystems and offered and used services. This structural description, constructed using the Sp tool [15], is then used for calculating workloads and performance specifications. The workload analysis tool (WAT) [16] analyses external data to provide input values for dynamic or static models. These analyses may also be performed on the output from models for validation purposes.


The framework of the Experimenter allows results from these tools to be used as input values for dynamic models automatically.

Central to the environment is an object-oriented approach which is implemented by the Object Management System (OMS) [17]. The OMS stores all the "objects" within the system and the links between them. Objects are derived from a common entity relationship model of the whole system and may, for example, be models, results, reports or collections of input parameter levels. Some objects correspond to physical files; others represent "conceptual entities". Such structures provide a means of recording, maintaining and possibly reusing the work carried out within the environment. The OMS provides functions allowing tools, and the user, to create and delete objects, group objects into directories, and to establish relational links between them.

To illustrate the use of the OMS and the approach to model development and reuse which it embodies we will describe the objects and links associated with an experiment. In the following, objects will be denoted by italics. Each experiment is linked to an experimental plan, the description of the experiment generated by the user. It is linked to one or more model objects, the link indicating the use of the model within the experiment. The input, output and control interfaces of the model are detailed in input, output and control specification objects linked to the model object. An experiment may require several executions of a model: a run object will be created to represent each execution and will be linked to both the experiment and the model. Several runs may be linked to each model, one for each required model execution. Linked to each run will be input, output and control templates providing parameters for this model execution. Also linked to the run object will be the output object, containing the model output. The outcome of the experiment will be contained in the results object, linked to the experiment. Each of these objects will be described in more detail in Sections 4 and 5.

An important feature of the IMSE is the set of environmental tools which enable the user to make full use of the models created. As well as the Experimenter, there is also an Animator tool, driven by simulation traces, allowing the user to visualise an execution, and a Reporter tool for the creation and collation of results and reports. In this paper, however, we focus on the facilities provided by the Experimenter. The schematic architecture of IMSE is shown in Fig. 1.
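To make the experiment-related objects and links described above more concrete, the following is a minimal sketch in Python, given purely for illustration; IMSE itself was not implemented this way, and the class and field names are our own rather than OMS terminology.

```python
# Illustrative sketch of the experiment-related objects held in the OMS
# and the links between them (experiment -> models -> runs -> templates/output).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Model:
    name: str
    input_spec: List[str]       # names of the free input parameters
    output_spec: List[str]      # names of the observable output parameters
    control_spec: List[str]     # names of the free control parameters

@dataclass
class Run:                      # one execution of a model
    model: Model
    input_template: Dict[str, object]
    control_template: Dict[str, object]
    output_template: Dict[str, bool]            # which outputs are selected
    output: Dict[str, object] = field(default_factory=dict)

@dataclass
class Experiment:
    plan: str                   # the experimental plan generated by the user
    models: List[Model]
    runs: List[Run] = field(default_factory=list)
    results: Dict[str, object] = field(default_factory=dict)
```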

4. The Experimenter approach to model exploitation

The Experimenter offers model exploitation facilities for models developed in a certain way, as supported by the IMSE. These facilities can extend the use of models and it is important that model construction is such that this potential can be exploited. The salient features of model construction, and the subsequent use of models, are outlined below in terms of the IMSE environment. However, we stress that the features of IMSE supporting experimentation are general and could be provided for any modelling tool.

During model construction, whichever model construction tool is used, the features of the model which represent the external influences on the system, or aspects of the system likely to change, are marked as "free" variables, using a simple mouse operation.


Fig. 1. The schematic architecture of the IMSE.

The type and role of these variables is defined by their occurrence within the model, and they may be given a default value to be used if no other is assigned later. Marking them as free places them in the input interface to the model. Each input parameter is given a name, which will be used to identify it in the experimental plan editor, and may also be described by a short piece of text. The model is also fully instrumented, with output parameters for all possible requirements. Where appropriate to the solution technique, control parameters will be instantiated, or left free in a similar way to input variables.

When development of the model is complete, and it has been suitably tested and validated, the model is stabilised or frozen. Further development is now only possible by creating a new version of the model, which is separately identifiable. This fixing of the model structure is crucial to maintain the integrity of results within the OMS. Since previously generated outputs and results are stored as objects, with appropriate links, users can make use of them again.

All models in IMSE are built graphically and the executable version of the model is generated from the graph and its associated forms. This involves a graph traversal/model generation program which is specific to each modelling tool. During graph traversal each tool also generates input, output and control interfaces for the model. These interfaces, known as the model specifications, are stored in the object store, linked to the model. In Fig. 2 the model shown has three input parameters, four possible output parameters and two control parameters. In order for the model to execute it must be supplied with a corresponding set of templates which will provide values to be assigned to the input and control parameters, and mark as selected the required output parameters (Fig. 3). These templates are generated by the Experimenter. A single value is supplied for each input parameter according to a function f_i specified for that parameter in the experimental plan. Output parameters are the responses which are derived from the model. In the experimental plan the user specifies which of these possible observations are to be made. The model solution tool uses this selection to select or filter the outputs from the model; only selected outputs will be placed in the output object linked to this run. Control parameters define how the model is to be executed or solved.


Fig. 2. A model within the IMSE object store, linked to its input, output and control specifications.

Changing the value assigned to control parameters does not change the model or the system represented but may affect the value of the output parameters.
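To illustrate the relationship between specifications and templates described above, the sketch below shows how input, output and control templates for the nth execution might be instantiated. This is an assumption for exposition only: the function and parameter names (make_templates, arrival_rate, run_length and so on) are invented here and are not part of IMSE.

```python
# Hypothetical sketch: building the templates for one model execution
# from the model specifications (cf. Figs. 2 and 3).
def make_templates(input_spec, output_spec, control_spec,
                   value_functions, selected_outputs, control_values, n):
    # one value per input parameter, supplied by the function f_i chosen for it
    input_template = {p: value_functions[p](n) for p in input_spec}
    # only selected outputs are marked on; the solution tool filters on this
    output_template = {p: (p in selected_outputs) for p in output_spec}
    # control parameters govern how the model is solved, not what it represents
    control_template = {p: control_values[p] for p in control_spec}
    return input_template, output_template, control_template

# e.g. templates for the 3rd execution of a model with three inputs,
# four possible outputs and two controls (all names hypothetical)
templates = make_templates(
    ["arrival_rate", "servers", "buffer_size"],
    ["throughput", "utilisation", "mean_queue", "response_time"],
    ["run_length", "trace"],
    {"arrival_rate": lambda n: 0.5 + 0.1 * n,
     "servers": lambda n: 2,
     "buffer_size": lambda n: 10},
    {"throughput", "response_time"},
    {"run_length": 10000, "trace": False},
    n=3,
)
```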

5. How the templates are generated

The Experimenter comprises two major modules: an experimental plan editor, which allows the user to specify the experiment to be executed, and an experimental plan executor, which interprets this plan, generating appropriate templates, invoking the model solution tool and collecting model outputs to form experimental results.

Fig. 3. Templates passed back to the model solution tool for the nth execution.


5.1. The experimental plan editor

An experiment is a series of related model solutions, using one or more models. The experimentation within IMSE is based on clearly defined experimental plans, similar to those in [18], leading to an approach which is systematic and objective-driven. An experimental plan allows the grouping of model executions into coherent groups with a clear objective. As described earlier, the OMS records these model executions, and their results, linked within the object store, and the Experimenter provides facilities to combine the results of the successive model executions to produce coherent results for the experiment.

The notion of an experimental plan is not commonly supported by modelling systems. Support may be provided for expressing multiple runs (replications, regeneration points, etc.) and subsequent analysis of results, but the analyst can easily become entangled in a complicated series of executions. Within the IMSE object store an experimental plan has similar status to a model. The existence of "experiments" within the system provides documentation of the use of models in a way that most modelling tools cannot support. Within an experiment there may be several experimental frames 2, each containing a single model. These frames are small experiments which are somehow linked by a common objective, the results of which may be used in combination to address that objective.

Like the other visible IMSE tools the experimental plan editor has a graphical interface. Although nodes and arcs can convey only organisational, or schematic, information about an experimental plan, this information was found difficult to express in an earlier prototype with a textual interface [19]. The structured forms associated with the nodes of the experimental plan graph proved to be ideal for displaying the specifications of a model and allowing users to enter values, or value formulae, for parameters. In addition the graphical approach provided the simplest interface to the Experimenter and the greatest coherence with the other IMSE tools. The plan is represented as a series of nodes, each of which has associated attributes representing the details of the plan. In this representation each frame is represented as a collection of nodes, depicting the model together with its input and output specifications, and a control specification if it has one 3.

Constructing an experimental plan starts when a model node is selected and placed on the background canvas. Clicking on this node opens a form prompting for the model to be used within this frame; the model's name and object store location must be given. From this information the plan editor is able to retrieve the model specifications, which define the interface to the model. These are then displayed in their respective nodes within the experimental plan, linked to the model node. The display of the experimental plan editor for a simple two-frame experiment is shown in Fig. 4. Opening the input node reveals a form with one entry for each parameter in the input specification of the model. Each entry displays the parameter's name, as it was given when the variable was marked as free during model construction, its type and any explanatory text which was associated with the parameter. There is also a cycling menu allowing the user to choose how values are to be assigned to this parameter during the experiment.

2 Frames are not supported as separate objects within the OMS.
3 Analytically solved models do not generally have any free control parameters.

Fig. 4. Editing an experimental plan.

The choices available depend on the type of the parameter concerned. Within a frame each parameter is either fixed, taking on only a single value (a set of one), or varying. Varying parameters are assigned a value generating function (Fig. 5).

Fig. 5. An experiment.


If a default value was supplied during model construction, this is used as a default fixed value in the experimental plan; otherwise a default value for the type is used. The variation of a parameter may be specified by:

set: a collection of values may be assigned to the parameter in turn. A fixed parameter is specified as varying over a set with one value.

range: parameters taking on a series of regularly increasing values may be specified over a range; starting, stopping and step values are specified.

distribution: when the parameter represents a random variable the distribution to be sampled may be specified within the experimental plan, together with appropriate parameters.

auxiliary experiment: an experiment may be used to supply input values. In this way a series of model executions may be carried out to create a succession of values to be used in the current experimental plan. When an auxiliary experiment is to be used as a source of values it is denoted by a subnode, attached to the input node of the frame.

extracted value: a particular value may be extracted from a previously generated output object. Many of the IMSE modelling tools produce compound values in their outputs; this mechanism allows a user who understands how these compounds are structured to extract a single value for use as an input value.

dependency: the values of a parameter may be specified in terms of the values taken on by other input parameters or in terms of the output values previously generated by the model. Search strategies may be specified in this way.

Although by default every possible combination will be used when there are several varying parameters, there are facilities for constraining the search space. Constraints are expressed after the definition of input parameters, using logical and arithmetic expressions to exclude unwanted combinations of parameters (Fig. 6); a sketch of these value generating mechanisms is given below. This gives a great deal of flexibility to the experiments which can be carried out. The use of constraints, dependencies, auxiliary experiments and extracted values will be explained in more detail in the following section.

Fig. 6. An experiment with a constraint.
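The sketch below gives an assumed illustration of the set, range and distribution value generating functions and of an exclusion constraint; the parameter names (arrival_rate, servers, service_rate) are hypothetical and not drawn from IMSE.

```python
# Illustrative value generating functions, mirroring the choices in the plan editor.
import random

def from_set(values):
    # "set": each listed value is assigned to the parameter in turn
    return list(values)

def from_range(start, stop, step):
    # "range": regularly increasing values from start to stop inclusive
    values, v = [], start
    while v <= stop:
        values.append(v)
        v += step
    return values

def from_distribution(sampler, how_many):
    # "distribution": the parameter is a random variable; draw samples from it
    return [sampler() for _ in range(how_many)]

# A constraint excludes unwanted combinations; True means the vector is discarded.
def overload(levels):
    return levels["arrival_rate"] > levels["servers"] * levels["service_rate"]

server_counts = from_range(1, 4, 1)                              # 1, 2, 3, 4
arrival_rates = from_distribution(lambda: random.uniform(0.5, 5.0), 10)
service_rates = from_set([1.0])
```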


Opening an output node displays a form with one entry for each parameter in the output specification of the model. Beside the name of each output parameter there is a toggle on/off switch. By default all output parameters are marked as on, meaning that they will be collected during the model execution. Clicking on the toggle, the user may switch off unrequired output parameters.

If the model has control parameters a control subnode will be created and attached to the model node. In the associated form each of the control parameters is listed with a prompt at which the user must supply an appropriate value. In the case of trace controls an on/off toggle is given which the user can use to indicate whether tracing is required. Other typical control parameters set the maximum time for a simulation model execution or the required degree of accuracy for a numerical solution.

To complete the plan the user must include at least one analysis node. This node will be connected, via the output node, to each frame within its scope. In an analysis node the user specifies how the experimental results are to be generated from the model outputs. This specification has two parts: the first determines which values are to be collected to form experimental results; the second determines any post-processing which is to be carried out and the display formats which are to be generated. The analysis node can only use values from frames to which it is connected.

The form associated with an analysis node is divided into two parts. The top part shows a list of the data which are in scope; that is, all the input parameters and chosen output parameters from the frames to which this node is attached. Each time another frame is linked to the node this table is updated. Each entry in the table is numbered and labelled by its frame and parameter name. The lower part of the form is an extensible list of result formats which may be formed from the collected data listed above. The user selects a result format, such as Histogram, and then, using the numbers from the table, selects the parameters which are to appear in the histogram. Any number of result formats may be specified in this way. They will all be stored in the results object linked to the experiment in the OMS, and may be displayed by the Reporter. Thus values from different frames within the scope of the node may be combined within a single result format.

5.2. The experimental plan executor

The experimental plan executor does not itself carry out the model executions specified within the plan; this is done by the appropriate modelling tool. However, the Experimenter retains control of these executions and collects and collates the results. Thus a distinction has been introduced between the control of model execution, directed by the Experimenter, and the realisation of model execution, carried out by the individual modelling tools. This could be compared to a UNIX makefile, which specifies which programs are to be compiled to produce a library, while the compilation is actually done by the appropriate compiler.

The executor carries out the experimental plan a frame at a time. Each model execution is invoked by sending the name and OMS location of the model to be solved, together with the instantiated input, output and control templates, to the correct modelling tool. The templates must match the specifications of the model which define the input, output and control interfaces of the model. The plan executor uses the specification and the experimental plan to instantiate the appropriate templates to achieve the executions required. The output and control templates are constant throughout a frame, so these are generated once per frame. If there are no control parameters an empty control template is generated.


Otherwise a template is generated with one entry for each control parameter which was in the control specification, and this entry is given the value which was assigned to that parameter in the control subnode of the plan. Similarly the output template has one entry for each output parameter of the specification, each appropriately marked as being on or off.

In order to instantiate the input template the executor must generate a vector of input values corresponding to the input parameters in the specification. This is done according to the input generating functions given in the experimental plan, as described below:

set: each of the values in the set is used in turn.

range: a set of values is generated using the starting, step and stopping values specified in the plan. These values are then used in turn.

auxiliary experiment: any auxiliary experiments needed within the plan are carried out before any model executions are started. The results are collected to form a set, the values of which are then used in turn. The auxiliary experiment itself is carried out as normal.

extracted values: using the object location and value definition provided by the user within the experimental plan, the executor carries out a look-up procedure to extract the appropriate value from the output object. This value is then assigned to the input parameter in all input templates generated for the frame.

dependency: the two types of dependency are treated differently within the plan executor. If one parameter is dependent on the value of another input parameter, this parameter does not affect the number of executions necessary in the frame, but appears in each input template with a value derived from the value given to the other parameter, as defined by the dependency. On the other hand, if the parameter is dependent on an output parameter, this dependency must be exhausted for every other combination of input values, i.e. for each value assignment vector generated by the rest of the input functions the dependency will generate a set of successive model executions. These two cases are shown in Fig. 7.

The generating functions behave like successively nested for loops. Thus by default the search space of the experiment may be considered to be the cross-product of each parameter's set of values. However, if one or more constraints have been defined in the experimental plan, the executor checks each generated vector of values against the constraint(s) before using it to generate an input template. If, using this vector of values, the constraint evaluates to true, the

Fig. 7. Use of dependencies within experiments: (a) output dependency; (b) input dependency.


Fig. 8. Use of an auxiliary experiment.

vector is discarded and the executor will go on to generate the next vector according to the generating functions.

Both value extraction and auxiliary experiments allow the Experimenter to make use of the output of one model to form an input value for another model. In an auxiliary experiment the supplying model must be placed in a simple experiment on its own, the only result of which is the input value required. This does, however, allow some post-processing of the output from the model, and also allows a series of values to be used, reflecting a series of executions of the supplying model (Fig. 8). The auxiliary experiment is carried out automatically within the execution of the experimental plan using the values it generates. Value extraction, on the other hand, is used to take a single value directly from the output object associated with a single model execution. The model execution must have already been carried out, as part of a previously executed experimental plan.

The experimental plan executor is also responsible for collecting the required values from the model output objects, and creating the experiment results. Thus, as each frame is executed the values specified by the associated analysis nodes are collected and treated as specified within those nodes. These results are then placed in a results object, which is linked to the current experiment object, in the object store. Results may subsequently be displayed by the Reporter.
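The frame loop described above can be summarised in a short sketch: the cross-product of the value sets is enumerated like nested for loops, constrained vectors are discarded, and each surviving vector is sent as an input template to the modelling tool. This is a rough illustration only; the solve_model callable stands in for the paradigm-specific solver and is an assumption of the sketch, not an IMSE interface.

```python
# Minimal sketch of the executor's behaviour within one frame.
import itertools

def execute_frame(param_values, constraints, solve_model,
                  output_template, control_template):
    # param_values: input parameter name -> list of values from its generating function
    names = list(param_values)
    outputs = []
    # nested-for-loop behaviour: the search space is the cross-product of all value sets
    for combo in itertools.product(*(param_values[name] for name in names)):
        input_template = dict(zip(names, combo))
        # a vector is discarded if any constraint evaluates to true for it
        if any(constraint(input_template) for constraint in constraints):
            continue
        # one model execution per surviving vector; the modelling tool does the solving
        outputs.append(solve_model(input_template, output_template, control_template))
    return outputs
```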

6. Advantages of the Experimenter approach

(1) Better organisation. Keeping track of multiple model solutions often poses a problem, particularly when a complex search strategy is being used. The analyst was previously given little or no support when trying to formulate such a series of executions, or in carrying them out. The Experimenter addresses this in two ways: (1) within the experimental plan the analyst can clearly specify the executions required; and

(2) the existence of an experiment within the object store, linking all the models used and their results with the associated input parameters, provides a much better level of "housekeeping" than was previously available.

(2) Abstraction of detail. As the internal mechanisms of a model are not visible within the Experimenter, no knowledge, or even awareness, of them is necessary. Thus the Experimenter enables a user to exploit a model without being an expert in simulation, statistics, queueing theory, or related techniques, and the model may be used in a clear and meaningful way by someone other than the model builder. This has implications for the incorporation of performance analysis techniques into the design process.

(3) System-oriented interface. The parameters which are left free within the model do not appear within the Experimenter interface with the same name as they had within the modelling interface. Thus, within the Experimenter, parameters have a name pertaining to the system, whilst a name appropriate to the parameter's role is maintained at the modelling interface. This system-oriented name is provided by the modeller at the time of model construction but is then maintained by the system. It is used within the Experimenter and the Reporter, allowing model executions and their results to be phrased in terms of the system to which they relate. The modeller is also able to attach a short explanatory text to each parameter, explaining what it is meant to represent, the intended units or other information. These facilities, although simple to provide, are valuable in supporting the use of models by those who were not involved in their construction.

(4) Selective instrumentation. The information required from a model will change during the lifetime of the model. For example, during validation it is likely that more output parameters will be needed than during the subsequent use of a model as a design aid. Similarly, the parameters which are important will be different again if the model is used to assist tuning. Incorporating additional instrumentation into an operational model can be problematic; even reducing the amount of data produced will entail altering the code of the model. However, fully instrumenting the model to cover all eventualities may leave the user overwhelmed by data. Clearly, there is a compromise to be reached to avoid this and yet make as much information as possible accessible from the model. The filtering facilities provided by the Experimenter mean that a model may be written with full instrumentation, anticipating all the information which may be required. However, for any particular execution of the model the user need only select those outputs which are of interest. Additional outputs will not be calculated, or will be discarded, so the user will not be confronted with them.

(5) Result manipulation. Even after selective instrumentation the volume of data produced during an experiment may make manual analysis daunting, or even impossible. The Experimenter includes mechanisms for abstracting results from the data produced by the model executions. We make a distinction between model output and experimental results. The Experimenter allows the data produced by a model to be filtered; it may also carry out post-processing or analysis on the model output in order to produce the experimental results.

(6) Reusability of models. A modelling system will often produce standard reports which may not address the questions which need to be answered, may provide too much information, or may provide the right information but in the wrong format. The analysis facilities within


the Experimenter allow the user to specify how the collected data is to be presented. This means that the appearance, format and even the content of the results produced may be adjusted to an experiment's objectives without changing the model. The output nodes of the plan determine which data are put into the associated model outputs. Similarly, the analysis nodes of the plan determine the contents of the experiment results. By default these are the concatenation of the associated model outputs. However, there are many other possibilities: further filtering may take place; statistical analyses, such as average, minimum or standard deviation, may be applied to particular results or groups of results; more complex result structures, such as tables or regression curves, may be constructed from the available model outputs (those not filtered out by the output node) and model inputs. These facilities are not limited to the results of a single frame. A single analysis node can be linked to several frames and therefore has access to the data associated with each of them. Thus a model need not be written to address a single purpose but instead is written in greater generality, to be used and reused in many different experiments.

(7) Flexible execution strategies. Model solution may take a long time. However, when the result of one solution may influence the parameters to be used in the next, with existing tools human interpretation and intervention may be necessary before the next solution can proceed. This can be time consuming. The Experimenter allows the user to express such dependencies within the experimental plan. Within the Experimenter a dependency is the use of a variable expression to provide values for an input parameter; the variables take the values associated with other input or output parameters of the same frame. There are two different types of dependency expressible within the input node of an experimental plan, as explained earlier. These features are particularly useful for expressing search strategies, and the analyst may be able to use some insight to reduce the number of model executions necessary. Using dependencies, even a complex experiment may be left to run in batch mode and yet be sensitive within its search strategy to the results being produced, as sketched below.
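As an illustration of such a strategy, the hypothetical sketch below increases a server count run by run until the observed mean response time meets a target, with no manual intervention between executions; the parameter and output names are invented for the example, and solve_model again stands in for the modelling tool.

```python
# Assumed example of an output dependency used as a batch-mode search strategy.
def search_servers(solve_model, target_response_time, max_servers=32):
    servers = 1
    while servers <= max_servers:
        # one model execution; the next input value depends on this run's output
        output = solve_model({"servers": servers})
        if output["mean_response_time"] <= target_response_time:
            return servers, output
        servers += 1
    return None, None       # no acceptable configuration found within the bound
```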

7. Conclusions

The initial motivation behind the Experimenter [20] was to provide a tool to support systematic experimentation with performance models. It was felt that this phase of a modelling study had inadequate support, as all automation effort had gone into providing powerful model construction tools. It was hoped that providing support for experimentation would help the user to investigate a model fully, and in a goal-directed manner.

The introduction of a separate experimentation facility into the IMSE environment resulted in a clear distinction being made between the construction of a model and its execution. The new interface to the model provided by the Experimenter gave greater flexibility to a user of the model who had not been involved in its construction than might otherwise be possible. It has been successfully used in this capacity during a modelling study of a large real time application at Thomson CSF [21].

The Experimenter also provides a natural infrastructure within which other tasks within the performance analysis activity can interact with the modelling task. For example, the use of the workload analysis tool in conjunction with the auxiliary experiment mechanism allows actual data to be used to generate input parameter values for a dynamic model, in a clear and documented way.


The use of the object store and the IMSE protocols which ensure data integrity, together with the Experimenter, offer new potential for the use of performance techniques within a team.

Many new possibilities are opened up by the Experimenter and the associated approach to model development and exploitation. The tool developed for IMSE and described in this paper only begins to explore this potential. Some directions which appear useful to pursue are suggested below:

- The experimental plan executor treats each frame within a plan separately and executes them sequentially. The time involved in executing an experiment, often a bottleneck in performance modelling studies, could be reduced if a distributed approach were used to execute frames concurrently. Indeed the experimental plan contains enough information for the executor to be able to determine which executions within a frame could be executed concurrently without jeopardising results.

- The Experimenter does not currently include a great deal of support for the important area of experimental design, instead relying on the user to be able to supply an appropriate design. The plan editor does, however, provide a framework into which greater support for experimental designs and search strategies could be introduced.

- The use of auxiliary experiments, extraction of values and the construction and use of aggregates, all of which are currently supported in the Experimenter, appear to be the first steps towards multi-paradigm modelling and interacting models. The Experimenter seems well-suited to a role of configuration management, allowing complex models to be built up from smaller ones which interact.

Acknowledgements

The author would like to thank everyone who was involved in the IMSE project, and in particular her colleagues at Edinburgh, Rob Pooley and Neil Stevenson, who contributed a great deal to the development of the Experimenter.

References

[1] R.J. Pooley, Hierarchical simulation and system description, Proc. 3rd European Simulation Congress (1989) 94-100.
[2] B.P. Zeigler, Multifacetted Modelling and Discrete Event Simulation (Academic Press, New York, 1984).
[3] E.O. Barber, Process interaction tool user guide, IMSE (Esprit II Project 2143), BNR Europe Limited, London Road, Harlow, Essex, UK, R5.1-6 edition, April 1991.
[4] C.H. Sauer, RESQ2, Proc. Int. Conf. on Modelling Techniques and Tools for Performance Analysis (1984) 16-18.
[5] G. Chiola, A graphical Petri net tool for performance analysis, Proc. 3rd Int. Workshop on Modelling Techniques and Performance Evaluation (1987) 297-308.
[6] H. Beilner, Structured modelling - heterogeneous modelling, Proc. European Simulation Multiconference (1989).
[7] H. Beilner, J. Maeter and N. Weissenberg, Towards a performance modelling environment: news on HIT, Proc. 4th Int. Conf. on Modelling Techniques and Tools for Computer Performance (Plenum Press, New York, 1988).
[8] R.J. Pooley, The integrated modelling support environment: a new generation of performance modelling tools, Proc. 5th Int. Conf. on Modelling Techniques and Tools for Computer Performance Evaluation (1991) 1-15.
[9] R.J. Pooley and J. Hillston, The performance evaluation process, 7th UK Performance Engineering Workshop (Springer, Berlin, 1991) 1-15.
[10] L. Golubchik, G.D. Rozenblat, W.C. Cheng and R.R. Muntz, The TANGRAM modeling environment, Proc. 5th Int. Conf. on Modelling Techniques and Tools for Computer Performance Evaluation (1991) 421-435.
[11] M. Veran and D. Potier, QNAP2: A portable environment for queueing systems models, in: D. Potier (Ed.), Modelling Techniques and Tools for Performance Analysis (1985).
[12] G. Brataas, A.L. Opdahl, V. Vetland and A. Solvberg, Information systems: final evaluation of the IMSE, IMSE Project Deliverable D6.6-2 (University of Trondheim), BNR Europe Ltd., London Road, Harlow, Essex, UK, December 1991.
[13] T.I. Ören, GEST: a modelling and simulation language based on system theoretic concepts, in: T.I. Ören, B.P. Zeigler and M.S. Elzas (Eds.), Simulation and Model-Based Methodologies: An Integrative View (Springer, Berlin, 1984) 281-335.
[14] D.A. Umpress, U.W. Pooch and M. Tanik, Fast prototyping of a goal-oriented simulation environment, Comput. J. 23(6) (1989) 541-548.
[15] N. Xenios, The structure and performance specification tool user guide, IMSE Project Internal Report R4.1-2, BNR Europe Ltd., London Road, Harlow, Essex, UK, October 1991.
[16] M. Calzarossa, L. Massari and G. Serazzi, The WAT design document, IMSE Project Deliverable D4.2-2 (University of Pavia), BNR Europe Ltd., London Road, Harlow, Essex, UK, August 1990.
[17] G. Titterington and A. Taylor, The OMS design, IMSE Project Deliverable D1.2-2, BNR Europe Ltd., London Road, Harlow, Essex, UK, May 1990.
[18] B.P. Zeigler, Theory of Modelling and Simulation (Krieger, 1976).
[19] N. Stevenson, J. Hillston and R. Pooley, A tool for conducting modelling studies, Proc. 4th European Simulation Multiconference (1990).
[20] R.J. Pooley, An experimentation tool for an integrated modelling support environment - its role and design, Software Engrg. J. (May 1989) 163-170.
[21] C. Ricard, WP6.1: Final evaluation report: a real time application within IMSE, IMSE Project Deliverable D6.1-1 (Thomson CSF), BNR Europe Limited, London Road, Harlow, Essex, UK, December 1991.

Jane Hillston received a B.A. degree in Mathematics from the University of York, England in 1985, and an M.Sc. degree in Mathematics from Lehigh University, U.S.A. in 1986. After a brief spell in industry she worked as a research associate for Kingston Business School for one year. During 1989-1991 she was employed as a research associate in the Department of Computer Science at the University of Edinburgh, working on the ESPRIT II Integrated Modelling Support Environment Project. She recently completed her Ph.D. degree in Computer Science. Her current research interests include process algebras, stochastic Petri nets, approaches to large scale performance modelling, and aggregation and decomposition techniques.