Implementation issues for operations research software


Comput. & Ops. Res. Vol. 13, No. 2/3, pp. 347-358, 1986. Printed in Great Britain. 0305-0548/86 $3.00 + .00. Pergamon Journals Ltd.



IMPLEMENTATION ISSUES FOR OPERATIONS RESEARCH SOFTWARE

JUDITH S. LIEBMAN†

Operations Research Laboratory, Department of Mechanical and Industrial Engineering, University of Illinois, 1206 W. Green, Urbana, IL 61801, U.S.A.

Scope and Purpose-In order to implement operations research algorithms on computers effectively, we must first understand the many issues underlying this process. It is important for software developers to understand both the general implementation issues and those issues specifically related to operations research techniques. In this study, implementation issues relevant to educational, research, and applications-oriented software are presented and discussed. It is hoped that this discussion will help developers to implement operations research techniques in ways that are natural and easy to use.

Abstract-There are many issues which underlie the effective implementation of operations research techniques on a computer. This study discusses general implementation issues, including portability, user friendliness, and marketing, as well as issues specifically related to optimization. In the latter category, issues such as model input and solution output, sensitivity analysis, interactive modeling, and automated data collection are examined.

INTRODUCTION

Ideally, we should implement operations research algorithms on computers so that using operations research in our personal or professional lives is as natural as balancing our checkbooks. But, in order to do so, we must understand the many issues underlying effective implementation. This paper discusses both general issues of implementation and issues related specifically to optimization. The general discussion covers portability, user friendliness, marketing, maintenance, modification, and evaluation. The discussion of optimization issues includes stand-alone versus subprogram packages, model input and solution output, sensitivity analysis, interactive modeling, the use of artificial intelligence, automated data collection, interfacing with other commercial or public domain software, and the development of educational software.

GENERAL ISSUES

There are some issues in implementing software that are not specifically related to optimization but are more general.

Portability

It is important in operations research to be able to move easily between microcomputers, minicomputers, and mainframe computers. In both research and applications, we may wish to solve smaller problems on personal computers and use the same program to solve larger problems on more powerful computers. Hardware portability among microcomputers is most easily achieved by being IBM PC compatible. How best to achieve software portability is less clear. ANSI standard Fortran 77 appears to be the most standardized language now running on the full range of computers: personal, mini, and mainframe.

† Judith S. Liebman is currently Professor of Operations Research in the Department of Mechanical and Industrial Engineering at the University of Illinois at Urbana-Champaign. She received her B.A. in Physics at the University of Colorado at Boulder in 1958 and her Ph.D. in Operations Research at Johns Hopkins University in 1971. She is an author of many technical articles, mostly in the areas of engineering optimization, transportation systems, and health systems. Her experience in computers has spanned 27 years, beginning with her work as an engineer and programmer for Convair Astronautics in 1958. Over the past 14 years she has developed and shared many interactive computer codes implementing optimization methods, primarily to support educational and research activities.


Even with standard or close-to-standard Fortran 77, there are inevitably points at which the code becomes machine dependent. Typically, this occurs during input and output: in opening and closing files, sending output to the display terminal, and receiving input from the keyboard. When faced with these differences, a useful tactic is to isolate machine-dependent features (such as file opening statements) into small subroutines. Another approach, when available, is to use preprocessing to identify machine-dependent lines and to translate them automatically into the appropriate forms.

The initial issue faced by prior users of mainframe computers is downloading previously developed programs from mainframe computers to microcomputers. Minor difficulties include the language differences mentioned above and perhaps the need to reduce the program's memory requirements by reducing the maximum size of the problems which can be solved. A major difficulty can arise in computational time requirements. Personal computers today are several orders of magnitude slower than mainframe computers; clearly there are mainframe programs that could take hours to run on microcomputers, just as there are supercomputer programs that could take hours to run on mainframe computers. If there is ready access to a mainframe computer, it may not be sensible to tie up a personal computer for hours of computation. The issue is not one of dollar cost, but of opportunity cost: using microcomputers for long computations makes sense only if the answers can be waited for without inconvenience and there is no other use for the computer during that time, or if mainframe access is unavailable or inconvenient.

A later issue faced by programmers is that program development on microcomputers with hard disks is much faster and easier than program development on mainframe computers.
Hence the trend will become uploading new programs to mainframe computers in order to solve larger problems rather than downloading existing programs from mainframe computers to microcomputers.
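The tactic of isolating machine-dependent features behind small routines can be sketched in modern terms (Python is used here purely for illustration; the routine names are hypothetical):

```python
import sys

def line_terminator():
    """Isolate one machine-dependent detail (the line terminator used in
    report files) in a single small routine, so that porting the program
    means editing only here, as the isolation tactic above suggests."""
    return "\r\n" if sys.platform.startswith("win") else "\n"

def write_report_line(handle, text):
    # Portable code calls this helper and never tests the platform itself.
    handle.write(text + line_terminator())
```

The rest of the program never mentions the platform; only the small helper does.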

User-friendliness

The advent of microcomputers has taught us that computer software should be friendly. But developing easy-to-use programs is not simple, and true software friendliness has many dimensions [1, 2].

Style: menus vs. commands. Menus impose structure on the program code. This structure has often been missing when the code has merely been modified from a mainframe batch version or developed with the operations research method, rather than the end user, in mind. Menus also simplify using a program for the first time. However, after mastering a program, using a menu can become onerous. One possible solution is to allow programs to be used in either menu or command mode. Lotus 1-2-3 offers another approach: an abbreviated menu remains at the top of the screen, and choices may be made either by moving the cursor and pressing return or simply by responding with the first letter of the command. In contrast, the dBase III approach is primarily command oriented. Partly because of this basic command approach, many persons find dBase III harder to learn than Lotus 1-2-3. In operations research software, menus can offer choices of basic problem setup activities (enter new problem, rerun last problem, edit last problem, etc.), choices related to the operations research method (such as method selection, changing method parameters, etc.), and selection of output levels and devices. Figure 1 is an illustrative menu requesting parameter changes.

Windows. Windowing is a concept unfortunately not yet used very much, because of the lack of easy graphic access to the display screen from the languages in which operations research methods are usually coded. Conceptually, the uses of windows are limited only by our imagination.
Uses which come to mind include: (1) help information on commands, menus, and method parameters and options; (2) problem status during model solution, such as variable, constraint, and objective values (e.g., TK!Solver); and (3) a picture of the problem status, such as screen displays showing actual material flow and equipment status during a production facility simulation.

    CURRENT PARAMETER VALUES
    1) CONVERGENCE WITHIN QUADRATIC SEARCH = .0100
    2) MAXIMUM NUMBER OF FITS = 10
    3) CONVERGENCE WITHIN ROUNDS = .0100
    4) MAXIMUM NUMBER OF ROUNDS = 10
    5) INITIAL STEP SIZES = 5% OF RANGE
    6) MINIMUM STEP SIZES = .5% OF RANGE
    (CR) IF NO CHANGES, ELSE SELECT NUMBER OF OPTION FOR CHANGE

Fig. 1. Menu for changing parameter values.
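The dual menu/command style described above can be sketched as follows (a minimal illustration in Python; the option names follow the examples in the text, and the function name is hypothetical):

```python
def choose(options, reply):
    """Accept either a menu number ("2") or the first letter of a
    command ("E"), in the spirit of the Lotus 1-2-3 style discussed
    above.  `reply` stands in for keyboard input."""
    reply = reply.strip().upper()
    if not reply:
        return None
    if reply.isdigit():                       # menu mode
        index = int(reply) - 1
        return options[index] if 0 <= index < len(options) else None
    for option in options:                    # command mode
        if option.upper().startswith(reply):
            return option
    return None

menu = ["ENTER NEW PROBLEM", "RERUN LAST PROBLEM", "EDIT LAST PROBLEM"]
```

A novice can pick by number from a displayed menu; an experienced user types the first letter without looking at the menu at all.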

On-line help. Having on-line help available is far superior to having just a hard copy manual. Again, professionally produced microcomputer software packages have led the way in providing on-line help at the touch of a key. The use of windows greatly facilitates the provision of help. For example, dBase III has excellent help available, but not in window form. So, although it is easy to pull up an explanation for any dBase III command, the help message disappears upon returning to the program. Unfortunately, the syntax of dBase III commands is sufficiently complicated that it is sometimes hard to recall the details of interest after the help information vanishes from the screen.

It is important to have on-line help information for all possible commands and menu options. For commands, users should be able to say "help xxxx", where xxxx represents the command name. Also, any time the program asks the user a question, ideally the user should be able to respond with a request for help in answering the question. Lastly, there ought to be a summary (overview) of the program available in electronic form so that it can be searched on key words, a much easier process than thumbing through a manual looking for how to do a particular task.

Marketing

Other implementation issues include those of pricing and copy protection. Copy protection currently ranges from none to systems in which a master disk must be in drive A to run the program. As more microcomputers acquire hard disks, there may be increasing use of protection systems such as that used by dBase III, which can be installed on and uninstalled from a hard disk. In these systems, copies made (rather than uninstalled) from the hard disk or floppies will not run. For those of us in universities who wish to provide students with access to top-of-the-line commercial software, this type of system enables us to prevent casual student copying. Commercial developers of operations research software who wish their software to be used on a university campus, but who are prepared to take legal action against widespread unauthorized copying, must provide universities with versions that run on hard disks and can be protected from copying.

Developers of commercial operations research software must also be prepared to provide extensive customer support. This ranges from hardware advice (how to install the software on various computer-monitor-printer combinations) to advice on using the operations research method and interpreting the results. The provision of good manuals in addition to on-line help should reduce, but will not eliminate, the number of phone calls from customers in trouble. Because there will be users of operations research programs who have not had courses in operations research, the manuals should explain, as much as possible, the assumptions and principles underlying the technique implemented by the program.

Maintenance, modification, and evaluation

The first line of defense against program errors, besides having good programmers, is to arrange for extensive testing of programs by a variety of users before the programs are released (so-called Beta testing). But it is inevitable that there will be undiscovered errors even after Beta testing. Commercial developers must establish arrangements to handle calls from users reporting errors and to develop and release quick fixes. Accurate registration lists of authorized owners must be maintained to enable error corrections to reach these owners. Maintenance requirements for public domain programs are less clear. Here, program authors have the option to do nothing, although personal pride of authorship often generates some internal sense of responsibility. In either case, modular programs are more easily modified.

To remain competitive, commercial codes must undergo frequent upgrades. Reasons for upgrades include interfacing with newer releases of operating systems or with additional operating systems, working on additional makes of personal computers, interfacing with additional types of peripheral hardware (printers, plotters, graphics boards, etc.), additions to a program's user-support functions (editing, report generation, etc.), and the adoption of a new operations research algorithm. Again, modular programs simplify upgrading.

The importance of prerelease program evaluation is usually underestimated. Obviously programs should be checked to make sure, as much as possible, that the operations research algorithms are programmed correctly. But it is also necessary to evaluate programs in the context of actual usage. One way to get user feedback is to observe individual users throughout the process of learning to use a program. By watching this process, we can see how users absorb material from the manual, what type of information they look for but cannot find (or cannot find readily), what errors they make when interacting with the program, how they seek help from the program or manual, etc. The attitude of the observer should be "how can the program be redesigned to eliminate this user difficulty" and not "how can the user be educated not to make the wrong response."

OPTIMIZATION IMPLEMENTATION ISSUES

There are important implementation issues specific to optimization. These range over a variety of topics, including how to make model input easy and flexible, encourage sensitivity analysis, aid research documentation, facilitate output analysis, support interactive modeling, and incorporate artificial intelligence.

Stand-alone versus user-programmed implementations

There appear to be two main strategies for optimization programs. In one approach, the optimization program is a stand-alone package to which the model is input as data. This strategy has the advantage of requiring no programming skills of the user. It facilitates the quick solution of small and medium-sized problems, but lacks the capability of handling complex optimization problems that cannot easily be expressed as data. GINO [3], for example, can easily handle nonlinear optimization models that involve a wide variety of algebraic functions, but it cannot solve models that require the solution of differential equations.

The second main approach requires a user sufficiently knowledgeable to undertake some programming. This approach is flexible enough to handle a broad range of optimization problems, not just ones that can be expressed in algebraic form. In one variant of this approach, the optimization package is a main program and set of optimization subroutines which calls a user-written subroutine defining the problem. In another variant, the optimization package is a set of subroutines which may be called by a user-written program to perform one or more steps of an optimization procedure. A third variant is also possible, in which the computer is programmed to write the required computer code, given some brief input from the user. The use of microcomputer object code linkers (such as the Microsoft 8086 Object Linker, Version 3.2) facilitates linking packaged code and user code. Furthermore, not only can microcomputers be programmed to write the Fortran code that represents the model and to write and execute the batch file that compiles the code, they can also write and execute the batch file that links the compiled model code to the optimization program.

Optimization model input as data

The most useful form of input for many optimization problems is the algebraic form used to write the model on paper. For nonlinear optimization problems, input in algebraic form can be interpreted at run time to carry out the model computations, as in GINO [3]. For small models, the increased time resulting from the interpretive approach is not important. But for larger models it is useful to have the model in compiled form. One solution is to have the computer write and compile the Fortran code after the user has input the model interactively.

There are significant advantages to having the model input in algebraic form. First, users can be given immediate feedback if there are syntax errors in the model description. Second, the development of gradient functions can be automated. There already exist programs which analyze algebraic functions and write Fortran code to compute analytic derivatives for these functions. Including gradient functions in problem specifications enables gradient-based optimization codes to run much more efficiently because numerical differencing can be eliminated. Third, it is possible to use gradient information to determine whether the model is entirely linear, to extract the linear coefficients, and then to solve the problem by linear rather than nonlinear programming. A fourth advantage of using the algebraic model form as input is that the problem size can be reduced by removing redundant constraints and by categorizing variables, as is done in GINO [3]. Exogenous variables need only be computed once, before the optimization begins. Extraneous variables need only be computed once, after the optimization ends. Endogenous variables are internal model variables whose values are computed by the model at every iteration but which need not be seen as variables by the optimization algorithm. The only variables which need to be seen by the algorithm are the remaining, explicit, decision variables. Constraints which are redundant, or which are used only to compute the values of endogenous, exogenous, or extraneous variables, also need not be sent to the optimization algorithm.
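The interpretive approach can be illustrated with a toy evaluator (Python stands in here for the Fortran of the era; the statement syntax is illustrative, not GINO's actual language):

```python
def evaluate_model(lines, decisions):
    """Interpret algebraic model statements of the form NAME = expression
    at run time, as a stand-alone model-as-data package might.  Comment
    lines begin with '!' and are ignored.  A sketch only: eval() on a
    restricted namespace stands in for a real expression parser."""
    env = dict(decisions)
    for line in lines:
        if not line.strip() or line.lstrip().startswith("!"):
            continue
        name, expr = line.split("=", 1)
        env[name.strip()] = eval(expr, {"__builtins__": {}}, env)
    return env

# An illustrative model fragment, echoing the server-cost example in Fig. 2(a):
model = [
    "! cost of servers at $17/hour",
    "SCOST = 17 * N",
    "! total cost (constant term is illustrative)",
    "COST = SCOST + 35",
]
```

Syntax errors surface immediately, statement by statement, which is exactly the feedback advantage noted above.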
Often the structure of a problem suggests special input options. For example, network problems can be created from the terminal by allowing the user to create nodes and arcs using names instead of node and arc numbers. The sequence of commands CREATE CHICAGO, CREATE URBANA, CREATE LINK FROM CHICAGO TO URBANA, etc. is closer to the way users conceptualize network problems than is CREATE NODE 1, CREATE NODE 2, CREATE ARC (1,2). NETPAC [4] is an example of a mainframe package which allows input via names instead of numbers. In addition, since we think of networks visually, we may wish to interface our network algorithms with some of the computer-aided design software emerging for microcomputers to take advantage of flexible graphics input. Lastly, although it is possible to use a mouse or digitizing tablet to input networks which already exist on paper as drawn diagrams (e.g., road maps), a better solution would be to develop an optical reader for maps, diagrams, or networks.

Input which has been interactively generated should be stored in ASCII files for later editing or reuse. To help in reviewing and editing outside of the optimization software, program-generated data files should be well structured and labeled, as in the example in Fig. 2(a) from GINO and in Fig. 2(b) from TRANS (an interactive microcomputer program for solving transportation problems, developed at the University of Illinois Operations Research Laboratory). The GINO example illustrates the usefulness of allowing comments to be placed anywhere in the data file. The TRANS example illustrates how a data file generated by a computer program can be formatted for clarity. Users should be protected against accidentally overwriting existing files (e.g., by using the INQUIRE statement in Microsoft Fortran to determine whether a requested file name already exists). It is also important to have problem editing capabilities within the optimization code itself.
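The name-based network input described above amounts to a tiny command interpreter that assigns node numbers behind the scenes. A minimal sketch (Python for illustration; the command syntax follows the CREATE examples in the text, not the actual NETPAC language):

```python
def build_network(commands):
    """Translate name-based CREATE commands into numbered nodes and
    arcs, so the user never sees node or arc numbers."""
    nodes, arcs = {}, []
    for cmd in commands:
        words = cmd.upper().split()
        if words[:2] == ["CREATE", "LINK"]:   # CREATE LINK FROM a TO b
            tail, head = words[3], words[5]
            arcs.append((nodes[tail], nodes[head]))
        elif words[0] == "CREATE":            # CREATE name
            nodes[words[1]] = len(nodes) + 1  # assign the next node number
    return nodes, arcs
```

The algorithm still works on numbered nodes and arcs internally; only the user interface speaks in names.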
Often users need to make temporary run-time changes in a model and may even wish to save the revised data in addition to, or in place of, the original data. Many operations research methods have parameters that must be set to select options and to define numerical quantities to be used as tolerances, convergence tests, etc. The computer code should set default values for these options and numerical test quantities. These default settings should be set within BLOCK DATA statements to facilitate changing them in the source code when moving from one machine implementation to another. Run-time resetting of these options and defaults should be made easy.
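The idea of grouping defaults in one place (the role BLOCK DATA plays in Fortran) while keeping run-time resetting cheap can be sketched as follows (Python for illustration; the parameter names echo the menu in Fig. 1 but are otherwise hypothetical):

```python
# All default settings gathered in one place, the analog of the
# BLOCK DATA grouping suggested above.
DEFAULTS = {
    "MAX # FIT": 10,
    "MAX # ROUND": 10,
    "CONVERGENCE": 0.0100,
}

def settings(overrides=None):
    """Return the method parameters for this run, applying any run-time
    overrides on top of the defaults without disturbing the defaults
    themselves."""
    current = dict(DEFAULTS)
    current.update(overrides or {})
    return current
```

Porting the program to another machine means editing only the one table; a user tightening a tolerance for a single run touches nothing in the source.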


    MODEL
    ! Model of a queueing system with N servers, each costing $17/hour.
    ! Arrivals occur at rate 70 per hour in a Poisson stream.
    ! Arrivals finding all servers busy are lost.
    ! A lost customer costs $35.
    ! The average time to process a customer is 5 minutes.
    ! Find N to minimize the cost of servers plus the cost of lost customers.
    MIN = SCOST + LCOST ;
    ! Cost of servers:
    SCOST = 17 * N ;
    ! Cost of lost customers:
    LCOST = 35 * 70 * FLOST ;
    ! The fraction of customers lost:
    FLOST = PEL( 70 * 5 / 60 , N ) ;
    END

(GINO ignores material following ! on the same line.)

Fig. 2(a). Sample data file for GINO.

Use of names and dimensions

Allowing the use of names for variables, constraints, and objectives makes operations research models easier to use as decision support tools. LINDO [5] is an example of a program with this capability. In addition, if units (e.g., dollars, pounds, cubic feet) are allowed, as in TK!Solver, then the functions specified for the model can be checked for consistency. A good way to check a model formulation is dimensional analysis [6]. There are essentially two rules used in dimensional analysis. If [A] denotes the dimension of a quantity A, then (1) A + B = C implies [A] = [B] = [C], and (2) AB = C implies [A][B] = [C]. These rules may be used to check for algebraic errors, which are present if the equations, inequalities, and objective functions of the model are not dimensionally homogeneous. They may also be used to convert quantities from one system of units to another. The computer should be programmed to carry out these checks for us.

Solution output issues

It is important to provide users with the capability of routing the output to the printer and to disk files as well as the screen. The width and page sizes associated with these output alternatives should be flexible. Offering several levels of detail in the solution output is also important. For example, GRG2 [3, 71 offers seven levels of output, ranging from nothing but the value of the variables and objective function at optimality to an enormous amount of output tracking almost every calculation. In most situations, at least two levels of detail (summary and an intermediate level) should be provided. Also, different levels of output should be available on the different output devices, since users might wish to have only summary output on the screen and have detailed output put into a disk file for later perusal if required.
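Routing different levels of output detail to different devices can be sketched with a small dispatcher (Python for illustration; the level names and sink structure are hypothetical):

```python
SUMMARY, DETAIL = 1, 2

def report(message, level, sinks):
    """Send a line of solution output to every sink whose requested
    level of detail is at least `level`: for example, summary output
    to the screen and full detail to a disk file, as suggested above."""
    for sink, wanted in sinks:
        if wanted >= level:
            sink.write(message + "\n")
```

With the screen registered at SUMMARY and a disk file at DETAIL, iteration-by-iteration traces land only in the file, while the final answer appears in both places.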

    A SIMPLE TRANSPORTATION PROBLEM
    2 SUPPLIES (ROWS)
    3 DEMANDS (COLUMNS)
    SUPPLIES
        5   10
    DEMANDS
        8    4    3
    COSTS
        1    2    3
        4    5    6

Fig. 2(b). Sample data file generated by TRANS.


The importance of graphics and model animation, when appropriate, cannot be overemphasized. For example, a window on the screen drawing a decreasing objective function value during optimization is far more effective than a sequence of numerical values. Network and graph-based problems are obvious candidates for graphics display, as are facility location, plant layout, and routing problems. For many scheduling problems, a picture of the schedule enables a visual evaluation of aspects not incorporated directly into the scheduling algorithm, such as facility use, personnel utilization, schedule gaps, resource conflicts, etc.

Support for sensitivity analysis

More users of operations research methods will use sensitivity analysis if we make such analysis easy. Our software manuals ought to stress the importance and usefulness of sensitivity analysis. Our solution output reports should entice the user into asking "what-if" type questions. For example, the output of linear programming codes should label the reduced costs of variables and shadow prices of constraints as indicating the potential increase (or decrease) in the objective. Particularly large values should be flagged to draw attention. Summary reports can list, in decreasing order of magnitude, the potential improvements in the objective per unit increase (or decrease) in the right-hand sides of constraints.

As much sensitivity analysis as possible should be generated by the program. For example, GINO [3] provides sensitivity analysis for all coefficients as long as they are specified as exogenous variables. Consider the following input model for GINO:

    MAX = CX * X + CY * Y ;
    AX * X + AY * Y < B ;
    X > 0 ; Y > 0 ;
    CX = 2 ; CY = 3 ;
    AX = 5 ; AY = 4 ;
    B = 25 ;

The advantage of using the above formulation, rather than

    MAX = 2 * X + 3 * Y ;
    5 * X + 4 * Y < 25 ;
    X > 0 ; Y > 0 ;

is that for the first formulation GINO provides dual variable information on CX, CY, AX, AY, and B. Thus, by visualizing a linear programming problem as a nonlinear problem, we automatically obtain important sensitivity information for all coefficients in the model.

Interactive modeling

Pollack [8] discussed the role of interactive models in operations research at a time before microcomputers had become common. His conclusion was that interactive modeling is likely to yield high payoffs, a conclusion even more valid today in the era of microcomputers. The areas he mentioned as most promising included problems with multiple criteria and problems for which optimal solution algorithms are not available or practical.

Incorporating intelligence

As we develop a better understanding of how algorithm parameter settings affect algorithmic performance, we should provide our programs with some intelligence, enabling the programs to set such parameters as tolerances and convergence criteria automatically and to change them when appropriate as the problem solution progresses. At some point we may begin to use expert systems as consultants, asking such systems for recommendations on model development, method selection, tolerance settings, convergence specifications, etc.

Automated data collection

When developing new methods for optimization, or modifying existing ones, we often try out various settings of the method parameters on many different problems. Sometimes these problems have the same structure and we are trying to "fine tune" the method to work well with that structure. At other times we are trying to set up the method to solve a wide variety of problems. In nonlinear optimization, for example, there are several commonly used sets of test problems [9, 10]. The result of multiple computer runs with many different parameter settings on different problems is usually an accumulation of computer output in piles on desks and on the floor. At some point, when it is deemed time to write up the test results, a laborious search must be made through the piles of output to summarize what happened and develop conclusions. Not only is this approach arduous, it also runs the risk that important output will be misplaced. Sometimes running summaries are maintained, but inevitably important information is missing and must be retrieved from a pile or recreated from scratch.

Another way to collect and maintain these data is to have the computer do it for us. Programs such as dBase III are well suited to saving and analyzing such data, and many database manager programs accept ASCII files as long as they have a single record per line. All that is needed, then, is a procedure whereby these lines are automatically generated during optimization runs. A relatively easy way to do this is to have a data file specify the record format and to use simple subroutine calls to send the data from the program into the record file. The subroutines that put the data into the record file can be part of a subroutine library available at run time. These routines receive the calls from the optimization program, look into the data format file to determine how to store the data, insert the data into the record, and add the record to the ASCII file when the record is completed.
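Such a format-driven record writer can be sketched in a few lines (Python for brevity; the `send`/`wrtfil` naming follows the Fortran utilities described in the text, but this is not their actual code):

```python
class RecordWriter:
    """A format table drives where each named field lands in a
    fixed-width ASCII record, so the database layout can be changed
    without reprogramming, as suggested above."""

    def __init__(self, layout):
        # layout: field name -> (starting column, 1-based; width)
        self.layout = layout
        self.width = max(s + w - 1 for s, w in layout.values())
        self.record = [" "] * self.width

    def send(self, name, value):
        """Place one named value into the current record."""
        start, width = self.layout[name]
        text = str(value).rjust(width)[:width]
        self.record[start - 1:start - 1 + width] = list(text)

    def wrtfil(self, handle):
        """Append the completed record to the ASCII file and reset."""
        handle.write("".join(self.record) + "\n")
        self.record = [" "] * self.width
```

One line per record is exactly the form a database manager can import directly.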
Figures 3(a) and 3(b) show an example format table and the corresponding statements from a Fortran subroutine that sends data to an ASCII file in the requested format. The ASCII file can later be read into dBase III. Subroutines SENDC, SEND, and WRTFIL are part of a library of utility subroutines available at run time. Subroutine SEND sends numeric information to the database record, and SENDC sends character information. The need for both subroutines stems from the Fortran requirement that a parameter in a subroutine call cannot be a character variable at one time and a numeric variable at another. The calls to subroutines SEND and SENDC need only send the name of the parameter and its value. Subroutines SEND and SENDC then refer to the format specification file (Fig. 3a) to enter the data into the record correctly. The call to subroutine WRTFIL is a request to add the record (filled out by the previous SEND and SENDC calls) to the ASCII database file. The advantage of this approach is that the database format can be easily modified without extensive programming.

Sometimes problems are so large and difficult to solve that it is useful to store partial or intermediate solutions which may later be used as starting points. Large-scale linear programming codes such as MPSX and APEX have long had this option.

Interfacing with other software

It is important to interface with other software such as word processing, database managers, and spreadsheets. Operations research models under development generally undergo a significant amount of changing. If editing capabilities are not built into the operations research program itself, then being able to interface with a word processor is essential. Furthermore, the output generated from operations research models is often put into reports, overhead slides, etc. Being able to generate ASCII files for both the model and the output facilitates the transfer of information. Since spreadsheet users working in the “what-if” mode are likely to venture into asking “what’s best,” being able to interface

Implementation DBASE

issues for operations

355

software

FORMAT (3)

(4)

(5)

1 3 10 17 19 22 31 40 42 44 56 68 80 92 104 116 128 140 143 145 147

2 7 7 2 3 9 9 2 2 12 12 12 12 12 12 12 12 3 2 2 30

I2 F7.4 F7.4 12 13 A9 A9 A30 A32 F12.5 F12.5 F12.5 F12.4 F12.5 F12.5 F12.5 F12.4 I3 I2 12 A30

(2)

(1) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21

research

PROB# ITER CNVRG ROUND CNVRG MAX # FIT MAX # ROUND TIME DATE METHOD DIAG CRITERIA XSTART(1) XSTART(2) XSTART(3) FSTART XBEST(1) XBEST(2) XBEST(3) FBEST # OBJ FUNC EVAL # AXIAL FIT EVAL # DIAG FIT EVAL COMMENTS

INTEGER REAL REAL INTEGER INTEGER ALPHANUMERIC ALPHANUMERIC CODED CODED REAL REAL REAL REAL REAL REAL REAL REAL INTEGER INTEGER INTEGER ALPHANUMERIC

Key: (1) Label; (2) Type; (3) Starting

point in record; information

Fig. 3.(a) File declaring

(4) Length

in record;

(5) Format

of

desired record format.

C     RECORD PROBLEM SETUP INFORMATION
      CALL SEND('PROB #',ID)
      CALL SEND('ITER CNVRG',CNVRGI)
      CALL SEND('MAX # FIT',MAXFIT)
      CALL SEND('MAX # ROUND',NRNDS)
      CALL SENDC('TIME',CTIME)
      CALL SENDC('DATE',CDATE)
C     RECORD COMPUTATIONAL PERFORMANCE
      CALL SEND('# OBJ FUNC EVAL',NOBJ)
      CALL SEND('# DIAG FIT EVAL',NDF)
      CALL SEND('# AXIAL FIT EVAL',NAF)
C     RECORD STARTING POINT AND BEST POINT FOUND
      DO 10 I=1,N
C        CREATE VARIABLE NAME FOR XSTART(I)
         WRITE (TEMP,940) 'XSTART(',I,')'
  940    FORMAT(A7,I1,A1)
         CALL SEND(TEMP,XSTART(I))
C        CREATE VARIABLE NAME FOR XBEST(I)
         WRITE (TEMP,970) 'XBEST(',I,')'
  970    FORMAT(A6,I1,A1)
         CALL SEND(TEMP,XBEST(I))
   10 CONTINUE
C     RECORD STARTING AND BEST VALUES OF OBJECTIVE FUNCTION
      CALL SEND('FBEST',FBEST)
      CALL SEND('FSTART',FSTART)
C     RECORD COMMENTS AT RUN TIME (LIMIT 30 CHAR.)
      WRITE(*,*) 'ENTER ANY COMMENTS'
      READ(1,'(A30)',END=55) COMMENT
      CALL SENDC('COMMENTS',COMMENT)
C     WRITE RECORD TO ASCII FILE FOR LATER DBASE III INPUT
   55 CALL WRTFIL
      RETURN
      END

Fig. 3(b). Fortran subroutine statements to send data into ASCII record.
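The SEND/WRTFIL pattern of Fig. 3(b) buffers labeled values and then writes them as one fixed-width ASCII record for later database import. The same pattern can be sketched in a modern language; this is a hedged Python illustration, not the paper's code, and the field names, widths, and formats are assumptions standing in for the full Fig. 3(a) table:

```python
# Sketch of the SEND / WRTFIL pattern from Fig. 3(b): calls to send()
# buffer labeled values; write_record() emits one fixed-width ASCII
# line suitable for import into a database.  Formats are illustrative.

FORMATS = {
    # label -> (field order, printf-style format); an assumed subset
    "PROB#":  (0, "%2d"),
    "FSTART": (1, "%12.4f"),
    "FBEST":  (2, "%12.4f"),
}

_buffer = {}

def send(label, value):
    """Buffer one labeled value for the current record."""
    _buffer[label] = value

def write_record():
    """Format all buffered fields, in declared order, into one record."""
    parts = sorted(FORMATS.items(), key=lambda kv: kv[1][0])
    line = "".join(fmt % _buffer[label] for label, (_, fmt) in parts)
    _buffer.clear()
    return line

send("PROB#", 1)
send("FSTART", 12.5)
send("FBEST", 3.25)
print(repr(write_record()))
```

Separating the labeled send calls from the final write mirrors the Fortran design: the optimization code records values wherever they become available, and the record layout is decided in one place.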


optimization programs with spreadsheets is also important. But the design of this interface is more difficult, since several spreadsheet data file formats are now in use.

Educational software

One aspect of learning operations research is developing an understanding of its methods. For most of us this means struggling through hand calculations to obtain a feel for how a specific algorithm works; in essence, our learning process involves assimilating each operations research method step by step. Computer codes such as LINDO, which solve linear programming problems, provide educational feedback on models but not on the simplex algorithm itself. ROWOP and LPSTEPS are interactive programs developed at the University of Illinois which enable students to learn the primal and dual simplex methods step by step. Their primary function is to perform row operations; the user must make the critical decisions, such as which variable to bring into the basis and which variable will leave it. What is needed is a similar step-by-step program for each of our operations research methods.

The next level of computer-aided instruction (CAI), after developing a step-by-step program, is incorporating help at all levels, so that students who do not understand the method being studied may obtain additional help. Instructional material describing how and why the method works may be made available on-line, perhaps replacing or supplementing lecture material. As easier-to-use systems for creating CAI units become available, we will begin to see both public domain and commercial packages emerge for CAI in operations research. Part of our duties when teaching a course will become selecting CAI software as well as (or in place of) selecting a text. Self-paced learning using the Keller [11] method became popular in the 1970s, but its use has decreased at some universities as schools realized the financial requirements imposed by providing continuously available tutoring and testing. At some point, computers will become capable of assuming the tutoring and testing roles previously belonging to student proctors.
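Programs like ROWOP leave the pivot choices to the student and merely carry out the arithmetic. The core operation such a tutor automates can be sketched as a generic Gauss-Jordan pivot on a tableau; this is an illustrative Python sketch under that assumption, not code from ROWOP itself, and the example tableau is invented:

```python
# Sketch of the row operations a ROWOP-style tutor performs once the
# student has chosen the pivot: normalize the pivot row, then
# eliminate the pivot column from every other row.

def pivot(tableau, row, col):
    """Return a new tableau after a Gauss-Jordan pivot at (row, col)."""
    p = tableau[row][col]
    if p == 0:
        raise ValueError("pivot element must be nonzero")
    new = [r[:] for r in tableau]
    new[row] = [x / p for x in tableau[row]]          # normalize pivot row
    for i, r in enumerate(tableau):
        if i != row:
            factor = r[col]                            # entry to eliminate
            new[i] = [x - factor * nx for x, nx in zip(r, new[row])]
    return new

# Example: the student chooses to pivot on row 0, column 0
t = [[2.0, 1.0, 10.0],
     [1.0, 3.0, 15.0]]
print(pivot(t, 0, 0))
```

The educational point is the division of labor: the student supplies the entering and leaving variables (hence the pivot position), while the program performs the error-prone arithmetic and displays the resulting tableau.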
Computers have always been viewed as having the potential to “drill” students; what will become available in the future is the use of computers to record student progress on more complex series of tasks. This accomplishment will not come without significant effort on our part: we must ourselves better understand the nature of learning operations research.

Another educational area of interest is the ability to generate homework and test problems automatically using programs on microcomputers. For example, we could have programs that automatically generate linear programming problems of various shapes, sizes, and characteristics (alternate optima, degeneracy, etc.), that can check whether these problems are feasible, and that can even draw graphs if requested. A much harder accomplishment will be the automated development of word problems. This might be achieved by using generic word problems in which the setting description, decisions, objectives, and constrained resources are symbolic variables replaced by the appropriate words for the category of problem chosen. After carrying out the word substitution, the computer could next be asked to develop appropriate problem data within ranges specified by the user.

All of us who are educators look forward to the time when computers can take over the grading of problems. Again, it will be necessary for us to better understand the ways in which operations research is learned in order to successfully automate the grading of student performance. A severe drawback of our current grading system is the lag between the time students finish homework or a test and the time when it is graded and handed back. By then, students have little recollection of their thought processes during the assignment, and they rarely take the time to review the graded results in depth.
Hence grading today is more bookkeeping towards a comprehensive course grade than useful feedback to students. If, however, homework and tests were graded automatically by a computer immediately after being completed, the feedback into the students’ thought processes would be immediate.

A major accomplishment in computer-assisted instruction will be achieved when we have intelligent on-line coaching. Currently, artificial intelligence researchers are studying


how people learn. Prototype programs have been developed which provide on-line coaching in certain domains, such as learning multiplication, logic, and medical diagnosis. These coaching systems construct a computer-based model of what an individual student is learning, so that wrong answers can be diagnosed and student misconceptions identified. Again, we need to understand more about how operations research is learned.

Computer-assisted instruction based on microcomputers has enormous potential throughout the range of operations research education: graduate and undergraduate education, high school education, continuing education, and adult education. Typically we have thought of operations research as a college and graduate school subject, but some of its topics could easily be taught in high schools. Furthermore, students introduced to operations research in high school are more likely to take one or more operations research courses in college and to become users of operations research in their own professional fields. Teaching linear programming, minimum spanning trees, shortest paths, etc. in high school would provide more of our general population with at least some basic understanding of what operations research is; in a few high schools across the country some of these topics are already included. But the tremendous stumbling block is that few high school teachers have had a course in operations research or are prepared to develop and provide a series of classroom lectures covering operations research topics. There was an attempt to update the science curriculum in the 1960s to provide an introduction to engineering science, including some topics in operations research: a high school text, The Man-Made World [12], was developed, and workshops were held to educate interested science teachers on how to use the text.
That flurry of activity died out, primarily because it was impossible to introduce enough high school science teachers to the topics without major changes in their college curricula; there was also little interest among college secondary-education programs in incorporating these topics into their regular curriculum. Having computer-assisted instructional units available for use at the high school level will greatly help the early teaching of operations research and may overcome some of the difficulties encountered in The Man-Made World project.

Because of the variety of operations research tools becoming available on microcomputers, there will be increasing demand for learning operations research. In adult education there will be a need for simple introductory lessons in the basic concepts underlying the more popular operations research packages. In addition, because of the increasing use of operations research in other professions, there will be increased demand for computer-assisted instruction in operations research through continuing education programs. Thus, a major issue in implementing operations research software on microcomputers is identifying the desired audience(s) and developing appropriately targeted program interfaces and educational packages.

For many reasons, the impact of microcomputers on operations research education will be far greater than the impact of mainframe computers has been. Microcomputers are personal computers in a sense never achieved by mainframes. Perhaps users of microcomputers feel more “in control,” or perhaps the quick response time of a microcomputer enables it to act more nearly as an extension of the brain. Microcomputer software has intentionally been designed to be user-friendly, an orientation rarely found in mainframe software. Whatever the reason, microcomputers are being used enthusiastically by individuals who were never attracted to using mainframe computers.
There is also a convenience factor. There is an increasing trend for engineering students to own a microcomputer, and many businesses are providing personal computers for their managers, engineers, analysts, and clerical staff. Hence access to microcomputers is already far greater than access to mainframe computers has been.

SUMMARY AND RECOMMENDATIONS

How operations research software will be used dictates how it should be designed. Educational software must be designed to help students learn particular methods for solving problems, develop and evaluate models for small problems, and help instructors keep track


of the students’ progress. Software used in research has a different purpose: usually to develop an understanding of how the characteristics of different problems affect the performance of a particular algorithm or class of algorithms. Lastly, software developed for applications must be designed to get problems solved as efficiently and easily as possible.

As discussed in this study, computers can use dimensional analysis to check models for consistency, identify whether models are linear or have network structure, develop analytic derivatives and the source code for those derivatives, automatically write the computer code that implements the models, compile the resulting source code, and link the resulting object code with an appropriate optimization code. For problems which are not optimization problems (e.g., discrete event simulation, decision analysis, Markov decision processes), computers can help us identify our conceptual models, automatically develop the computer statements implementing those models, and link the resulting models to simulators and decision support systems. Ultimately, artificial intelligence may enable computers to build the specific operations research models that represent the problems we wish to solve.

Marketing operations research programs commercially requires difficult decisions in pricing and copy protection, as well as the establishment of procedures and staff to provide user support, program maintenance, program upgrades, and program evaluation. Since potential users of operations research software have widely differing microcomputer systems and varying levels of computer expertise, portability and user friendliness are essential ingredients in good implementation. Furthermore, since these users have very different educational backgrounds in operations research, there is the danger that our operations research techniques will be misunderstood and misapplied.
We have an obligation to provide adequate explanations of the assumptions and analysis underlying our techniques, in addition to the obligation to provide quality computer code which is easy to use.

Acknowledgement-The research support provided by the Army Construction Engineering Research Laboratory is gratefully acknowledged. Also appreciated is the equipment support received from NSF, the University of Illinois Research Board, the Intel Corporation, and the Alumni Association of the Department of Mechanical and Industrial Engineering of the University of Illinois.

REFERENCES

1. M. F. Mehlmann, When People Use Computers. Prentice-Hall, Englewood Cliffs, NJ (1981).
2. B. Shneiderman, Software Psychology: Human Factors in Computer and Information Systems. Winthrop Publishers, Cambridge, MA (1980).
3. J. S. Liebman, L. Lasdon, L. Schrage and A. Waren, Modeling and Optimization with GINO. The Scientific Press, Palo Alto, CA (forthcoming in Fall, 1985).
4. J. P. Jarvis, D. R. Shier, L. D. Bodin and B. L. Golden, Netpac: A Computerized System for Network Analysis. Working Paper Series MS/S 84-031, College of Business and Management, University of Maryland (1984).
5. L. Schrage, Linear, Integer, and Quadratic Programming with LINDO. The Scientific Press, Palo Alto, CA (1984).
6. E. Naddor, Dimensions in operations research. Ops. Res. 10, 508-514 (1962).
7. L. Lasdon, A. Waren, A. Jain and M. Ratner, Design and testing of a generalized reduced gradient code for nonlinear programming. ACM Transactions on Mathematical Software 4, 34-50 (1978).
8. M. Pollack, Interactive models in operations research--An introduction and some future research directions. Comput. & Ops Res. 3, 305-312 (1976).
9. D. M. Himmelblau, Applied Nonlinear Programming. McGraw-Hill, New York (1972).
10. K. Schittkowski, Nonlinear Programming Codes: Information, Tests, Performance. Lecture Notes in Economics and Mathematical Systems, No. 183, Springer-Verlag, New York (1980).
11. J. A. Kulik, C. L. Kulik and K. Carmichael, The Keller plan in science teaching. Science 183, 379-383 (1974).
12. Engineering Concepts Curriculum Project, Polytechnic Institute of Brooklyn, The Man-Made World. McGraw-Hill, New York (1971).