Nuclear Instruments and Methods in Physics Research A247 (1986) 122-125
North-Holland, Amsterdam

THE STATUS OF THE LAMPF CONTROL SYSTEM UPGRADE *

Stanley K. BROWN, Stuart C. SCHALLER, Eric A. BJORKLUND, Gary P. CARR, Roger A. COLE, Jamii K. CORLEY, James F. HARRISON, Thomas MARKS Jr, Patricia A. ROSE, Georgia A. PEDICINI and David E. SCHULTZ

MP-1, Mail Stop 11810, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA

* Work supported by the US Department of Energy.

This paper discusses the present state of the LAMPF accelerator control computer upgrade. This includes a summary of our recent operational experience and the status of the application programs in the new system. In addition, we describe several optimization techniques which have been used to improve system-wide performance, and a limited VAX/VMS operating system environment which has been made available to the accelerator operators. We also describe our plans for upgrading the LAMPF control system computer network.

1. The status of the conversion


The LAMPF control computer upgrade project involves replacing an aging Systems Engineering Laboratory SEL-840 with a Digital Equipment Corporation VAX-11/780. Details of the conversion were discussed at the Accelerator Controls Workshop held at Brookhaven National Laboratory in January 1985. A summary of the conversion project has been published elsewhere (see ref. [1]). This spring the new control software played a vital part in the LAMPF tune-up leading to the turn-on of the Proton Storage Ring (PSR) at the Los Alamos Neutron Scattering Center. Among the LAMPF changes required to bring PSR on-line were: a new high intensity H- source, a new H- transport line to bring the H- beam into the linac, and a completely redesigned beam switchyard to direct the H-, H+ and P- beams to the appropriate areas. The new control system now handles these new linac facilities while still supporting the delivery of H+ and P- beams. The new high intensity H- beam was delivered to PSR on schedule.

At the present time, all but three or four of the application programs needed for complete support of the operation of the accelerator are finished. These yet-to-be-completed programs should be finished by the first of January, which is also about the beginning of the next shutdown. We believe that the LAMPF turn-on scheduled for the late spring of 1986 will be accomplished using the new control system software exclusively. The SEL-840 will probably remain through that operating cycle as a security blanket, to cover the possibility that a program has been omitted. Following this cycle, the 840 will be removed.

2. Software optimization issues

One of the favorite topics in the design and implementation of a control system is: will it be fast enough? We have largely chosen to ignore this question at the time of design, following instead the precept that it is easier to make a correct program fast than it is to make a fast program correct. This is particularly true when a changing accelerator environment forces the inevitable software modifications. Once an application program is complete, we evaluate its responsiveness. If it is found to be too slow, several tools are used to determine where it is spending its time. We find that most programs spend the majority of their time in very limited areas of the code. These areas are then made more efficient. In some cases, we have found that these areas are in subroutines which are used on a system-wide basis, such as those supporting plotting and data-taking. Attacking inefficiencies here provides increased efficiency on a system-wide basis.

Several applications that were plot-bound proved to be quite slow. We were using a set of graphics routines supplied and supported by the laboratory computing division (C-Division). They had taken a high level set of routines, the package supported by the National Center for Atmospheric Research (NCAR), and laid it on top of the set of routines that C-Division has supported for years (CGS). By removing this bottom layer, we doubled the plotting speed while keeping the standard NCAR interface.

We found that programs which acquired accelerator data spent more time than expected in dynamically allocating and deallocating memory space for global data structures. By pre-allocating memory for some of the data structures, we reduced the CPU usage for data acquisition by a factor of two without sacrificing the diagnostic capabilities afforded by these globally accessible data structures.
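Although the paper contains no code, the pre-allocation change can be illustrated with a short sketch in C. All names and sizes below (the pool dimensions, buffer_acquire, buffer_release) are hypothetical stand-ins; the actual LAMPF data structures are not described here. The point is simply that a pool allocated once at program start replaces an allocate/free pair on every data-taking cycle.

    #include <stddef.h>

    #define POOL_SIZE    64     /* hypothetical: max concurrent requests  */
    #define BUFFER_WORDS 256    /* hypothetical: words per device reading */

    /* One entry of a globally accessible buffer pool.  The pool is
     * allocated once at program start instead of being dynamically
     * allocated and freed on every acquisition cycle. */
    struct data_buffer {
        int  in_use;
        long words[BUFFER_WORDS];
    };

    static struct data_buffer pool[POOL_SIZE];  /* pre-allocated at load time */

    /* Grab a free buffer: a table scan replaces a heap allocation. */
    struct data_buffer *buffer_acquire(void)
    {
        for (size_t i = 0; i < POOL_SIZE; i++)
            if (!pool[i].in_use) {
                pool[i].in_use = 1;
                return &pool[i];
            }
        return NULL;            /* pool exhausted */
    }

    /* Release is a flag store, not a call into the heap manager. */
    void buffer_release(struct data_buffer *b)
    {
        b->in_use = 0;
    }

In a real multi-process system the free-buffer scan would need interlocking; the point here is only that the heap manager drops out of the per-cycle cost while the buffers remain globally visible for diagnostics.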

Since the data-taking routines are part of the control system run-time library, optimizations in this area provided improvements in most of the application programs.

Several application programs require data from a large number of accelerator devices on a repetitive basis. Programs which provide facilities for recording trends and signaling alarms, as well as a program which displays the status of the RF system and accelerator operational status, are examples. The latter program, for instance, acquires data from approximately 300 different devices once every three seconds. Because application programs must use symbolic device names for data acquisition, they are unable to make direct use of the hardware organization to optimize their data access. We provided a data acquisition routine which allows an application to combine a group of disparate devices into an "aggregate device". Reading an "aggregate device" acquires data for each component device. The data acquisition routines minimize the number of hardware references required to read the "aggregate device". This optimization has decreased the CPU time needed for the RF status program data-taking by a factor of three while retaining the hardware independence afforded by symbolic device naming. (More details on data acquisition optimizations can be found in ref. [2].)
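A conceptual sketch of the aggregate-device idea follows, in C. The structures and routine names are invented for illustration (ref. [2] describes the actual interface); the essential point is that sorting the component devices by hardware address lets one block transfer serve every component that shares a module.

    #include <stdlib.h>

    #define CHANNELS 32         /* hypothetical channels per module */

    struct component {
        char name[16];          /* symbolic device name             */
        int  crate, slot, chan; /* hardware address from the name   */
        long value;             /* most recent reading              */
    };

    struct aggregate {
        int n;
        struct component *comp; /* sorted by (crate, slot) at build time */
    };

    static int by_address(const void *a, const void *b)
    {
        const struct component *x = a, *y = b;
        if (x->crate != y->crate) return x->crate - y->crate;
        return x->slot - y->slot;
    }

    /* Stub standing in for the real CAMAC block transfer. */
    static void read_block(int crate, int slot, long out[CHANNELS])
    {
        for (int k = 0; k < CHANNELS; k++)
            out[k] = 1000L * crate + 10L * slot + k;  /* placeholder data */
    }

    /* Building the aggregate is done once; the address sort is what
     * lets read_aggregate() coalesce hardware references later. */
    void build_aggregate(struct aggregate *ag)
    {
        qsort(ag->comp, ag->n, sizeof ag->comp[0], by_address);
    }

    /* One pass over the sorted list: each new (crate, slot) pair costs
     * one block transfer; all components sharing it are filled from it. */
    void read_aggregate(struct aggregate *ag)
    {
        int i = 0;
        while (i < ag->n) {
            int c = ag->comp[i].crate, s = ag->comp[i].slot;
            long block[CHANNELS];
            read_block(c, s, block);
            while (i < ag->n && ag->comp[i].crate == c && ag->comp[i].slot == s) {
                ag->comp[i].value = block[ag->comp[i].chan];
                i++;
            }
        }
    }

The application sees only symbolic names; the grouping by crate and slot happens entirely inside the library, which is what preserves the hardware independence noted above.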

Another interesting optimization concerns the program which monitors the RF system and operational status. During beam production, the operators often run this program from several different consoles. Initially this capability was supplied by running a separate copy of the program at each console. Since each copy of the program was acquiring its own data and formatting its own display, much effort was being duplicated. The program was modified to check whether it is already running at another console when it is invoked. If not, it runs in the normal fashion, acquiring data and formatting a color CRT display. If a copy is already running, the program determines the console where the original copy is running, and repeatedly copies the original color CRT display to the new console. The "slaved" copy of the program thus avoids all the data-taking and display formatting; each "slave" copy takes less than 5% of the CPU time required by the original.

The explicit reliance on post-implementation optimization of programs has one negative side effect. It seems that once the accelerator operators have concluded that a particular application program is "too slow", they are psychologically resistant to recognizing improvements. We have seriously considered supplying initial versions of developing programs which do nothing but display generated data as rapidly as possible. This might help avoid the "first impression" phenomenon.

3. A limited VMS environment for accelerator operators

There are several things for which we want the operators to use the VAX/VMS operating system rather than the control system. By using a VMS interface for these tasks, we find we do not need to provide special application programs. Examples of these tasks include: displaying VMS and accelerator status information, displaying file directories, updating application program data files, and generating reports from the accelerator device description data base. These tasks can best be run in a standard VMS environment.

We considered allowing the operators access to VMS as a normal (i.e. non-privileged) user but, since this includes the ability to delete files, this idea makes even the operators uncomfortable. We wanted them to be able to use VMS, but within limits. Touching a button on a touch panel now logs the operator in as a "captive" VMS user. The indirect command file which runs at operator log-in only allows the operator to execute a small subset of VMS commands. The only way to leave the indirect command procedure is by logging out. What the operator now sees is a very simple way to get into a VMS environment. Since only a few commands are available to him, he knows that he cannot accidentally damage the control system.
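The mechanism itself is a VMS indirect command file rather than a program, but the effect can be sketched in C: a loop which honours only a fixed list of commands (the list below is invented for illustration) and whose only exit is logging out.

    #include <stdio.h>
    #include <string.h>

    /* Invented stand-ins for the small subset of commands the captive
     * account actually permits. */
    static const char *allowed[] = { "show", "directory", "type", "logout" };

    int main(void)
    {
        char line[128];

        for (;;) {
            printf("OPER> ");
            if (!fgets(line, sizeof line, stdin))
                break;                          /* EOF behaves like logout */
            line[strcspn(line, "\n")] = '\0';

            int ok = 0;
            for (size_t i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
                if (strcmp(line, allowed[i]) == 0)
                    ok = 1;

            if (!ok) {
                printf("Command not available in this environment.\n");
                continue;                       /* no escape to full VMS */
            }
            if (strcmp(line, "logout") == 0)
                break;                          /* the only way out */
            printf("[would run: %s]\n", line);  /* dispatch elided */
        }
        return 0;
    }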

4. Future of the LAMPF control system network

The present configuration of the LAMPF control system network is shown in fig. 1. The current network consists of the VAX-11/780 and the SEL-840 plus several remote PDP-11 computers (labeled AREA A, AREA B, SWYD, TR, IDS, and ISTS) which communicate through locally designed CAMAC data-links using the NET-11 for message switching. These remote computers perform dedicated data acquisition and control tasks, often at interrupt level. They acquire data from the accelerator through CAMAC. (The RIU-11 computer provides access to locally designed data acquisition hardware and to the LAMPF master timer. For the purposes of this discussion the RIU-11 is not considered part of the LAMPF control system network.)

We decided to upgrade the remote computer network because of the many problems we encountered modifying, diagnosing, and maintaining the current network. Among the problems we faced were: the difficulty of adding new remote computers, the limited functionality provided by the locally written operating system in them, the lack of symbolic device addressing in the remote software, poor error handling, and difficulty with maintenance and modification of the remote software, which is mostly written in assembly language. We also have had hardware problems with both the data links and the remote computers.

Fig. 1. Current LAMPF control system network.

The future hardware configuration is shown in fig. 2. This configuration calls for the replacement of all the remote computers as well as the replacement of the means of communication. A second VAX-11/780 will be added for use in program development and as a backup. The remote computers will be upgraded to micro-VAXes. The VAX-11/780s and the micro-VAXes will be attached to an Ethernet LAN. The two VAX-11/780s will also be connected as a VAX cluster so that they can share disk files. (Because of environmental considerations, the micro-VAXes have no local disk storage. It is possible, however, to plug in a Winchester or floppy disk to perform hardware diagnostics.)

Ethernet allows straightforward network expansion. We already plan to control the next generation of the LAMPF master timer (MT) with a micro-VAX, and several other micro-VAX additions are being considered. Since the Ethernet LAN itself provides message routing, we have been able to omit the NET-11 from the planned network.

We are planning to use the VAXELN operating system in the micro-VAXes. VAXELN is a system for building memory-resident real-time systems which can be down-loaded over Ethernet to micro-VAXes. VAXELN is written in and supports EPASCAL, an ISO standard PASCAL with extensions for real-time processing. Programs running in the micro-VAXes will access data through symbolic device names, as is currently done by the VAX-11/780 programs.


Fig. 2. Planned LAMPF control system network.


The device description data base will be maintained on the VAX-11/780s and down-loaded as needed by the remote computers. Access between computers will be supported through a remote procedure call paradigm.
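As a sketch of what such a remote procedure call might look like from the host side, consider the following C fragment. The message layout, names, and READ_DEVICE procedure are all invented for illustration; the planned LAMPF protocol is not specified in this paper.

    #include <stdio.h>
    #include <string.h>

    struct rpc_request {
        char procedure[16];   /* e.g. "READ_DEVICE" (invented)       */
        char device[16];      /* symbolic name, e.g. "SWYD:BM01"     */
    };

    struct rpc_reply {
        int  status;          /* 0 = success                         */
        long value;           /* device reading                      */
    };

    /* Stand-in for the Ethernet transport: a real implementation would
     * send the request to the remote node and block for the reply. */
    static struct rpc_reply transport(const struct rpc_request *req)
    {
        struct rpc_reply rep = { 0, 42 };       /* placeholder reply */
        printf("-> %s(%s)\n", req->procedure, req->device);
        return rep;
    }

    /* What an application sees: the same symbolic-name interface as on
     * the VAX-11/780, with the remoteness hidden behind the call. */
    int read_device(const char *name, long *value)
    {
        struct rpc_request req;
        strncpy(req.procedure, "READ_DEVICE", sizeof req.procedure);
        strncpy(req.device, name, sizeof req.device - 1);
        req.device[sizeof req.device - 1] = '\0';

        struct rpc_reply rep = transport(&req);
        if (rep.status == 0)
            *value = rep.value;
        return rep.status;
    }

    int main(void)
    {
        long v;
        if (read_device("SWYD:BM01", &v) == 0)
            printf("reading = %ld\n", v);
        return 0;
    }

The application keeps the symbolic-name interface it has today; whether the device is local or on a micro-VAX is hidden inside the call.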

The ion source test stand (ISTS) remote computer is an exception in that it is an essentially stand-alone system with no direct connection to the accelerator. We are considering adding local disk storage to the ISTS micro-VAX and running the micro-VMS operating system. We expect that portions of the VAX-11/780 control system can be used to handle the ISTS environment. The ISTS Ethernet connection will be used to transfer software and data bases.

At the present time, the Ethernet cable has been purchased and is in place. The second VAX-11/780 has been installed, linked to the control VAX via Ethernet, and is being used for program development. The new remote computers have yet to be implemented. Micro-VAXes have been purchased for the switchyard (SWYD), the master timer (MT), and the ion source test stand (ISTS) computers.

5. Summary

The upgrade from the SEL-840 to the VAX-11/780 is all but complete. Most of the application program software is now in production. This software has been monitored to uncover response bottlenecks. Some optimizations have been implemented with good results. We intend to continue to monitor performance and make additional changes where they make sense. We have been able to provide a limited VMS environment for the operators so that those tasks that are most easily accomplished by using VMS can be done safely. Finally, the control system network upgrade is well under way. We project completion of this task in about one more year.

References

[1] D.E. Schultz and S.K. Brown, Proc. IEEE Real-Time Symp. (December 1981) p. 78.
[2] S.C. Schaller, J.K. Corley and P.A. Rose, to appear in IEEE Trans. Nucl. Sci. (October 1985).
