DESIGNING EXPERT SYSTEMS FOR MODELING COMPLEX COMPUTER SYSTEMS
E. E. Upchurch, K. S. Raman, K. Ranai
A performance engineering methodology suitable for automation and integration with automated system design methods is described. Such a composite system offers a promise of automated system generation with performance optimized for given hardware architectural environments and/or selection of the best hardware architecture for given performance objectives. The performance engineering component of the automated system designer would consist of a graphical user interface with an incorporated knowledge base of algorithms; analytical and discrete event simulation constructs; a library of generic and application-specific submodels; computer system architecture; and hardware and software knowledge.
INTRODUCTION

In the design of computer systems, especially information systems, the focus has traditionally been on the functionality of the systems. Performance issues and problems are handled as afterthoughts, as reactions to crises in the operation stage of systems. Such an approach often proves expensive. Some of the reasons articulated to rationalize this approach are: (1) performance engineering is highly technical and difficult; (2) performance engineering is not an on-going activity and, therefore, investment in it is not justifiable; (3) performance engineering is an expensive activity and it could be cheaper to buy more hardware; and (4) large and expensive performance studies have resulted in only marginal improvements. One could add to this list. These reasons might have been justified in the past, but recent developments in performance modeling tools and AI and expert system technologies seem to offer capabilities to
carry out performance engineering studies concurrently with the system functional design studies. These technologies and tools are being developed at the microcomputer level and show a promise of being cost effective.

ENHANCED MODEL OF SYSTEM DEVELOPMENT

The classical life cycle model of system development consists of the definition stage, the design and development stage, and the installation and operation stage. This model has been widely used in the information systems context [2] but can be adapted equally well to other system situations. The main focus of this model is system functionality; it does not address performance issues. An enhanced life cycle model of system development that integrates the functional and performance aspects of system design (Figure 1) was recently proposed [6]. In this enhanced model, a statement of performance requirements and objectives is included as a part of system definition. Thereafter, the functional design activities shown on the left side of Figure 1 and the performance engineering activities on the right side proceed concurrently. The key phases in the performance engineering life cycle
are defining the performance objectives to be achieved by the proposed system, characterizing the workload, understanding the static characteristics of the system and each resource in the system, and analyzing and understanding the dynamics of the interaction between the workload and the system resources.

ANALYZING AND UNDERSTANDING SYSTEM DYNAMICS

Analyzing and understanding the dynamic interaction between the workload and the system requires specialized knowledge and expertise in characterizing workloads and system resources and in the techniques and tools required in the analysis. Mathematical models of system interactions and dynamics can be built by using queueing theory, product form analysis, or operational analysis for simple systems. These mathematical techniques require simplifying assumptions, are limited in scope, and become intractable when applied to large and complex systems. Computer simulation models of system dynamics can be built using simulation languages. Historically, procedural languages such as GPSS™ and SIMSCRIPT™ have been used to model computer systems. This approach to modeling has often proved to be time-consuming and expensive. Recently, special-purpose simulation languages with graphics front-end facilities have become available. The Performance Analysis Workstation (PAW)™ from AT&T [5], the Performance Analyst's Workbench System (PAWS)™ and GPSM™ from Information Research Associates [1, 3], and RESQ™ from IBM [4] are some good examples. Graphical representation is a natural form in which to express a performance model. It provides a consistent interface for all phases of the modeling and simulation exercise, from the creation and modification or enhancement of the model to the graphic presentation of results. While these new graphics-oriented languages significantly improve the user interface, some of them continue to be deficient in areas such as analysis and presentation of modeling results. Further, these languages do not offer the facilities to build a library of higher-level constructs from successful past experiences.
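To illustrate the kind of analytic queueing calculation referred to above, the following minimal sketch (not taken from the paper) evaluates an open M/M/1 model of a single resource using the standard formulas; the function name and the numeric parameters are purely illustrative assumptions.

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Return utilization, mean population, and mean response time
    for an M/M/1 queue with Poisson arrivals and exponential service."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable system: arrival rate must be below service rate")
    utilization = arrival_rate / service_rate           # rho = lambda / mu
    mean_number = utilization / (1.0 - utilization)     # N = rho / (1 - rho)
    mean_response = mean_number / arrival_rate          # Little's law: R = N / lambda
    return {"utilization": utilization,
            "mean_number_in_system": mean_number,
            "mean_response_time": mean_response}

# Hypothetical example: a processor serving 40 transactions/s
# with a mean service time of 20 ms (service rate 50/s).
print(mm1_metrics(arrival_rate=40.0, service_rate=50.0))
```

Such closed-form results are exactly what becomes intractable for large, complex systems, which is why the simulation languages discussed above are needed.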
PROPOSED AUTOMATED PERFORMANCE ENGINEERING SYSTEM

The proposed automated performance engineering system would use graphics user interface technology, which has greatly facilitated model building, and AI and expert system technology to capture knowledge relating to modeling tools, measurement tools, hardware and system knowledge, software knowledge, the modeling expertise of an experienced model builder, and a library of submodels and high-level constructs from successful past experiences. The system, whose conceptual architecture is shown in Figure 2, would consist of a database, a knowledge base, an inference mechanism, a problem-solving system, an explanation system, and
a facility to continually update the database, knowledge base, and inference mechanism with new rules.

The database of the proposed system would consist of:
* data on hardware devices, their characteristics, and prices,
* performance monitoring data, and
* expected system workload data.

The knowledge base of the proposed system is envisaged to consist of:
* performance objectives and goals,
* system functional specifications and goals,
* operating system profiles and characteristics, and
* performance engineering tool profiles and guides.

The inference mechanism would replicate and bottle up the modeling, problem-solving, and analysis expertise of an expert performance analyst. It is to be noted that the domain of the above knowledge is very wide, and a single technique of coding and storing it in the knowledge base may not serve every category of knowledge equally well. This may give rise to some new issues and challenges in the design of the knowledge base. A further point to note about the knowledge base is that it may become necessary to narrow down the knowledge on hardware, operating systems, performance monitoring tools, and performance engineering tools to a few specific products. This would mean that a high-level expert or management decision would have already been made about the hardware, operating system, and performance monitoring and engineering tools.

Although creation and maintenance of the database may seem straightforward, some difficulties are likely to arise in selecting a suitable database management system that successfully interfaces with the proposed knowledge base and inference mechanisms. Some expert system building tools claim to facilitate such an interface, but these claims have to be validated in the context of the proposed performance engineering system.

Most of the challenges in the design and development of the proposed performance engineering system lie in the area of capturing the expertise of an expert performance analyst. How does an expert performance analyst conceptualize a performance problem in an existing system or derive the 'optimum' hardware configuration to achieve certain performance goals? In performance analysis of an existing system, characterizing the workload has always been difficult. The performance monitoring and job accounting systems generate reams of data that require expert interpretation before a reasonably representative workload model can be developed. This expertise will have to be captured and put into the expert system. In building the model, the expert analyst normally takes a view of the degree of detail to which the hardware components and the operating system features are to be represented. An expert analyst knows how the assumptions and approximations made in modeling the workload
and the system affect the validity of the model results.

In designing and configuring new systems, the starting reference points are the performance goals stated in terms of response time, utilization, and system throughput. The expert analyst knows the dynamic interaction of these performance goals and objectives and the economic constraints, and is in a position to quickly advise whether the objectives can be achieved within the constraints. Once these goals and boundaries have been established, the expert analyst proceeds to build a detailed model of the configuration, explore scenarios, and arrive at an optimum hardware-software recommendation. All this knowledge, the rules, and the lines of reasoning will have to be built into the proposed automated performance engineering system.

Yet another area of expertise in performance studies is the analysis and interpretation of modeling results: presenting the results in a usable form, with error bounds, confidence levels, and statistical significance. This expertise is especially important in the design of new systems, for which model validation is not possible.

Let us now take a look at the process of building the model. The graphics-oriented modeling languages are a great help, but they lack intelligence. The proposed system envisions a modeling language that advises on different scheduling algorithms, the reasonable bounds of parameters to be used, and so on. For example, the proposed language would not allow building a model with an exponentially distributed service time specified for a disk drive. Another feature envisaged in the proposed system is a library of submodels that are used frequently in modeling exercises. As new models are successfully built, the system would automatically update the submodel library with any new submodels.
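As a rough illustration of the kind of rule-based advice and submodel bookkeeping described above, the following sketch encodes the disk service time rule as one of several advisory checks. This is not the authors' implementation: the device names, rule structure, bounds, and the submodel library interface are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ServiceCenter:
    name: str
    device_type: str           # e.g. "cpu", "disk", "terminal"
    service_distribution: str  # e.g. "exponential", "deterministic", "general"
    mean_service_time: float   # seconds

# Each rule is a (predicate, advice) pair applied to a proposed model element.
RULES = [
    (lambda c: not (c.device_type == "disk" and c.service_distribution == "exponential"),
     "Disk service times are not well modeled as exponential; use a general or empirical distribution."),
    (lambda c: c.mean_service_time > 0.0,
     "Mean service time must be positive."),
]

def advise(center: ServiceCenter) -> list[str]:
    """Return an advisory message for every rule the proposed element violates."""
    return [advice for predicate, advice in RULES if not predicate(center)]

# A library of previously validated submodels, grown as new models succeed.
SUBMODEL_LIBRARY: dict[str, list[ServiceCenter]] = {}

def register_submodel(name: str, centers: list[ServiceCenter]) -> None:
    """Add a submodel to the library only if none of its elements violate a rule."""
    if any(advise(c) for c in centers):
        raise ValueError(f"Submodel '{name}' violates modeling rules and was not added.")
    SUBMODEL_LIBRARY[name] = centers

# The system would flag this specification rather than silently accept it.
print(advise(ServiceCenter("DISK1", "disk", "exponential", 0.030)))
```

In a full system the rule base would of course be far richer and continually updated through the knowledge acquisition facility, but the pattern of predicate-plus-advice rules conveys the intent.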
CONCLUSIONS

Performance analysis of computer systems is highly specialized and requires a high level of expertise, and performance analysts with the required expertise are scarce. Expert system and AI technologies seem to offer the potential to capture the expertise of expert performance analysts and make it widely available.

REFERENCES

[1] Browne, J., et al., "Graphical Programming for Simulation of Computer Systems," Proceedings of the Annual IEEE Simulation Conference, 1985, pp. 109-128.
[2] Davis, G. B., and Olson, M. H., Management Information Systems: Conceptual Foundations, Structure and Development, Second Edition, McGraw-Hill, New York, 1985.
[3] IRA, PAWS 3.0 - Performance Analyst's Workbench System User's Manual, Information Research Associates, Austin, Texas, 1985.
[4] Kurose, J. F., et al., "A Graphics-Oriented Modeler's Workstation Environment for the Research Queueing Package (RESQ)," Proceedings of the Annual IEEE Simulation Conference, 1986, pp. 719-728.
[5] Melamed, B., "Performance Analysis Workstation: An Interactive Animated Simulation Package for Queueing Networks," Proceedings of the Annual IEEE Simulation Conference, 1986, pp. 729-740.
[6] Raman, K. S., "Capacity Planning for Information Systems," Proceedings of the Seminar on Current Trends in MIS Research, Department of Information Systems and Computer Science, National University of Singapore, 1987, pp. 64-90.
Figure 1. Enhanced Life Cycle Model: the functional design life cycle (proposal definition, feasibility study, requirements analysis, conceptual design, physical system design, database design, software development, procedure development, conversion, operation and maintenance, post-audit) shown in parallel with the performance engineering life cycle (performance objectives, nature of system, workload characterisation, system characterisation, modeling and simulation, analytic modeling, simulation modeling, validation, scenario analysis, performance objectives review, capacity decisions).
Figure 2. Architecture of the Performance Engineering System: database (performance data, workload data); knowledge base (functional specs, OS profiles and characteristics, tool guide and profile); knowledge acquisition; explanation/tutorial; and problem-solving components.