Nuclear Instruments and Methods in Physics Research A 352 (1994) 118-121 North-Holland
The CEBAF accelerator control system: migrating from a TACL to an EPICS based system +

William A. Watson III *, David Barker, Matthew Bickley, Pratik Gupta, R.P. Johnson

CEBAF, MS 1214, 12000 Jefferson Ave., Newport News, Virginia 23606, USA

+ Work supported by the Department of Energy, contract DE-AC05-84ER40150.
* Corresponding author.
CEBAF is in the process of migrating its accelerator and experimental hall control systems to one based upon EPICS, a control system toolkit developed by a collaboration among several DOE laboratories in the US. The new system design interfaces existing CAMAC hardware via a CAMAC serial highway to VME-based I/O controllers running the EPICS software; future additions and upgrades will for the most part go directly into VME. The decision to use EPICS followed difficulties in scaling the CEBAF-developed TACL system to full machine operation. TACL and EPICS share many design concepts, facilitating the conversion of software from one toolkit to the other. In particular, each supports graphical entry of algorithms built up from modular code, graphical displays with a display editor, and a client-server architecture with name-based I/O. During the migration, TACL and EPICS will interoperate through a socket-based I/O gateway. As part of a collaboration with other laboratories, CEBAF will add relational database support for system management and high-level applications support. Initial experience with EPICS is presented, along with a plan for the full migration, which is expected to be finished next year.
1. Introduction

The Continuous Electron Beam Accelerator Facility (CEBAF) is a 4 GeV electron accelerator under construction in Newport News, Virginia, planned to begin operation in the summer of 1994. The unique features of this facility are its continuous beam and high luminosity, ideal for experiments requiring large samples of events with minimal accidental coincidence rates. The accelerator consists of two 0.4 GeV superconducting RF linacs connected by two 180° arcs; each linac consists of 20 cryomodules, each containing 8 cavities. The 45 MeV injector contains an additional 2¼ cryomodules. The beam is recirculated through the machine for up to 5 passes, each pass gaining about 0.8 GeV from the two linacs, yielding a final energy of 4 GeV. After the 5th pass, the beam may be split and sent to the three experimental halls simultaneously. These halls will house a variety of complex detectors.
The control systems for both the accelerator and the experimental facilities are in the process of being migrated from in-house developed packages to systems based upon EPICS [1]. The following discussion describes the existing hardware and software, the new hardware and software, and a comparison of the TACL and EPICS architectures, including the difficulties which led to the decision to convert from one to the other. Finally, an early analysis of the conversion effort is given, along with plans for the coming year.
2. The existing control systems at CEBAF

The existing accelerator control system (TACL [2]) consists of a two-layer network of workstations interfaced to CAMAC electronics via the GPIB bus. Operator interfaces such as the graphical display and control program run on the upper level of workstations, as do programs that manipulate the entire machine. The 6 first-level machines (called supervisors) are located in the machine control center and typically have 2 to 4 attached displays. Control loops typically execute on the lower level of workstations, called local computers. These control algorithms are entered graphically with a CAD-like program using modular pieces of code as building blocks. The logic execution engine supports CAMAC I/O through GPIB interfaces. Support is also available for communication with other processes via socket connections. There are 50 second-level machines (2 installed), each supporting 2 to 8 CAMAC crates (see Fig. 1). For some systems, in particular the RF cavities and the magnet controllers, there is a third level of processing in the form of embedded microprocessors. These microprocessors run local control loops at around 30 Hz and are interfaced to CAMAC through simple buffer cards.
Each RF cavity (total: 340) currently has a dedicated microprocessor. The number of I/O channels in the completed machine will be around 30 000, or 200 000 when including "soft" channels, i.e. variables in the embedded processors such as calibration constants. The experimental halls will each eventually require complex control systems, with up to 10 000 I/O channels for the most complex detector. Prototypes of the detectors are currently operating using control software developed as part of the physics data acquisition system CODA [3], which, like EPICS, is based upon the VxWorks real-time kernel. This modular control package currently supports VME, CAMAC, and a proprietary interface to high voltage mainframes, with support for other devices planned. Graphical displays are supported, and control algorithms can be developed in C via a callable interface to the I/O system. An alarms and limits package is partially developed.
3. The new architecture
In the new architecture there will again be three tiers of computing for the accelerator control system (two workstation tiers plus the embedded micros). The uppermost tier consists of Unix workstations and/or X-Windows displays, which will run operator interfaces and global control and monitoring applications (see ref. [1] for a list of EPICS utility programs).
Fig. 1. Existing TACL architecture showing all of the supervisory and local computers and CAMAC crates.
Fig. 2. EPICS architecture showing several displays and several IOCs.
The second tier of machines will be VME-based 68040 single-board computers running the EPICS I/O controller (IOC) software and the VxWorks real-time kernel. Some of the faster local machines replaced by the IOCs (a total of 26) will become the new first-tier machines in the upgrade, while others will be used for software development and test systems. The current plan is to install 13-15 IOCs to operate the accelerator. Each IOC will contain a VME-to-CAMAC serial highway interface, and each CAMAC crate will have a type L2 serial highway crate controller (see Fig. 2). Fewer IOCs than local computers will be needed because of the better I/O handling in VxWorks, and because of the greater efficiency EPICS provides in scheduling work (see the discussion below).
The hardware architecture for the detector halls will remain unchanged in the upgrade, since it already uses the same VME hardware as EPICS. All that is necessary is to migrate the CODA device drivers into EPICS device drivers and to write an interface library between the CODA calls and the EPICS network calls.
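As a rough illustration of such an interface library, the sketch below maps a CODA-style channel write onto the corresponding EPICS Channel Access client calls. The function name coda_set_channel and the channel naming are assumptions made for illustration, not the actual CODA interface.

    #include <cadef.h>

    /* Hypothetical wrapper allowing hall control code written against a
       CODA-style "set channel" call to talk to an EPICS IOC instead.
       coda_set_channel() is an assumed name; the EPICS side uses the
       standard Channel Access client calls. */
    int coda_set_channel(const char *channel_name, double value)
    {
        chid chan;

        ca_task_initialize();               /* returns immediately if already initialized */

        ca_search(channel_name, &chan);     /* resolve the name by broadcast */
        if (ca_pend_io(2.0) != ECA_NORMAL)
            return -1;                      /* no IOC answered for this name */

        ca_put(DBR_DOUBLE, chan, &value);   /* queue the write */
        if (ca_pend_io(2.0) != ECA_NORMAL)
            return -1;

        ca_clear_channel(chan);
        return 0;
    }

A read-only counterpart would use ca_get() in place of ca_put(), so existing hall code could be rehosted without touching its control logic.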
4. Converting from TACL to EPICS

4.1. Control algorithm execution
On the surface, the EPICS and TACL toolkits share many features. Starting at the lowest level, each system builds up control algorithms (herein referred to as a logic set) from modular pieces of code which may be linked together using a graphical entry tool. These control algorithms run on machines remote from the
operator interfaces, and different logic sets may communicate with one another across the network. This architectural similarity makes it easy to map the existing TACL designs onto EPICS by simply replacing all local computers (second-tier machines) with EPICS IOCs.
While the toolkits share basic similarities, most details favor EPICS. In the existing TACL system, the control algorithms run on Unix and access CAMAC through GPIB device drivers. Unix introduces some non-determinism and imposes fairly stiff context-switching penalties upon execution of a multi-tasking design. The device driver imposes fairly large time penalties for each call, virtually precluding random read operations (all data is read by block scan at the beginning of each cycle). EPICS has considerably better I/O performance (10 to 100 times) since most I/O is performed in the context of the calling task through simple register-based interfaces.
A second area in which EPICS outperforms TACL is in the scheduling of logic execution. In TACL, each local computer can execute only a single logic set, organized as a two-dimensional array of logic elements evaluated top to bottom, left to right. Everything is executed once per cycle, although blocks of logic may be executed only once for a fixed number of passes, or explicitly disabled by an operator. In contrast, EPICS can process records (its building blocks for control) in a variety of ways. Records may be processed at one of many scan rates, upon a software event, upon an I/O event, or when triggered by an operator or by another record. In this way, many records are processed only on demand, reducing the load on the IOCs.
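As an illustration of demand-driven processing, the sketch below uses the Channel Access client library to force a single processing pass of a record by writing to its PROC field; this is the same mechanism an operator screen or another application can use to trigger a record. The record name is a hypothetical example rather than an actual CEBAF channel.

    #include <stdio.h>
    #include <cadef.h>

    /* Process an EPICS record once, on demand, by writing to its PROC field.
       "IN1:BPM01:XPOS" stands in for a real record name. */
    int process_on_demand(const char *record_name)
    {
        chid chan;
        char proc_field[64];
        short one = 1;

        sprintf(proc_field, "%s.PROC", record_name);

        ca_task_initialize();
        ca_search(proc_field, &chan);       /* resolve the field name by broadcast */
        if (ca_pend_io(2.0) != ECA_NORMAL)
            return -1;

        ca_put(DBR_SHORT, chan, &one);      /* any write to PROC processes the record */
        ca_pend_io(2.0);
        ca_clear_channel(chan);
        return 0;
    }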
4.2. Design entry

EPICS has two advantages in design entry and one disadvantage. On the plus side, EPICS records are of higher functionality than TACL logic elements, thereby requiring fewer components to assemble an algorithm. For example, the EPICS analogue input record can read hardware, convert to engineering units, generate an alarm or warning if a value is out of limits, and forward data to an operator. Each of these functions requires one or more TACL logic elements, increasing design complexity and design-entry time in TACL. A second advantage of EPICS is that designs can be entered hierarchically (to any depth), allowing low-level details to be hidden by the abstraction of an intermediate complex "part". This feature is obtained in EPICS through integration with a commercial schematic capture program; designs are exported to EPICS via an EDIF file translator. This CAD interface is also to EPICS' disadvantage in that it is not a fully integrated solution, as invalid values can be entered into the
design (the CAD program is unable to check for most design errors). By contrast, TACL contains a custom integrated design entry tool. Work is underway within the EPICS collaboration to produce a fully integrated tool.
Transporting control designs from TACL to EPICS first involves organizing the TACL designs (which are substantially macro based) into a hierarchical layout and then re-casting the low-level blocks into EPICS records. This second step is certainly the most manpower-intensive step in the entire migration. Both "parts libraries" contain a subroutine callout, and in many cases the TACL subroutine may be used under EPICS with only minor changes to handle argument passing.
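A minimal sketch of such a callout is given below, assuming an existing TACL routine (here given the hypothetical name tacl_ramp_magnet) is wrapped so that the EPICS subroutine (sub) record can call it; inputs arrive in the record's A-L fields and the result is returned in VAL.

    #include <subRecord.h>

    /* Hypothetical example of reusing an existing TACL code module under
       EPICS by wrapping it for the subroutine record. */
    extern double tacl_ramp_magnet(double setpoint, double rate);

    long rampMagnetInit(struct subRecord *psub)
    {
        return 0;                            /* nothing to initialize */
    }

    long rampMagnetProcess(struct subRecord *psub)
    {
        /* sub record inputs arrive in fields A-L; the result goes to VAL */
        psub->val = tacl_ramp_magnet(psub->a, psub->b);
        return 0;
    }

Only the argument passing changes, as noted above; the body of the TACL subroutine is reused unmodified.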
4.3. The network layer

Both TACL and EPICS have a client-server architecture, with name-based I/O and TCP/IP communications. In EPICS, name resolution is currently handled by a broadcast to which the correct server answers; in TACL there is a known name server, the STAR, which is the hub of all client-server communications. Both systems send data only on change, but EPICS further reduces the flow by sending data only when the change exceeds a specified deadband. Most fields of an EPICS record may be changed over the network, allowing greater flexibility than TACL in tuning an algorithm while it is in operation. In TACL, the selection of which values were accessible to an operator had to be specified at design entry, forcing more frequent editing of the logic diagram to change constants than was desirable.
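The fragment below, a hedged example using the Channel Access client library with a hypothetical channel name, shows this change-driven style of data delivery: the client registers a callback once, and the server pushes an update only when the value moves by more than the record's deadband.

    #include <stdio.h>
    #include <cadef.h>

    /* Print each monitored value as the server reports a change.
       "IN1:RF:GMES" is an invented channel name used for illustration. */
    static void value_changed(struct event_handler_args args)
    {
        printf("new value: %f\n", *(const double *) args.dbr);
    }

    int main(void)
    {
        chid chan;

        ca_task_initialize();
        ca_search("IN1:RF:GMES", &chan);
        ca_pend_io(5.0);

        /* subscribe once; delivery is change-driven, not polled */
        ca_add_event(DBR_DOUBLE, chan, value_changed, NULL, NULL);
        ca_pend_event(0.0);                  /* block, dispatching callbacks */
        return 0;
    }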
4.4. State machines

TACL supports state machines written as simple tables of conditions and actions on named variables; each state machine runs as a separate Unix process. In EPICS, state machines are written in State Notation Language and may also contain embedded C code. The state machine executor also supports run-time macro expansion of names, allowing a single state machine to be executed multiple times for repeating structures with different instance names.
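For illustration only (this is not actual TACL code, and the variable names and helper functions are invented), a condition/action table of the kind TACL uses might be represented and evaluated as follows.

    #include <string.h>

    /* Stub accessors standing in for named-variable I/O. */
    static double get_variable(const char *name) { (void) name; return 0.0; }
    static void   set_variable(const char *name, double v) { (void) name; (void) v; }

    struct transition {
        const char *state;         /* state in which this row applies      */
        const char *condition;     /* variable tested against a threshold  */
        double      threshold;
        const char *action;        /* variable written when the test holds */
        double      action_value;
        const char *next_state;
    };

    static const struct transition table[] = {
        { "OFF",     "RF:GMES", 0.1, "RF:ENABLE", 1.0, "RAMPING" },
        { "RAMPING", "RF:GMES", 5.0, "RF:LOCK",   1.0, "LOCKED"  },
    };

    /* Evaluate the table once and return the (possibly new) state name. */
    const char *step(const char *current)
    {
        size_t i;
        for (i = 0; i < sizeof table / sizeof table[0]; i++) {
            if (strcmp(table[i].state, current) == 0 &&
                get_variable(table[i].condition) > table[i].threshold) {
                set_variable(table[i].action, table[i].action_value);
                return table[i].next_state;
            }
        }
        return current;
    }

The EPICS State Notation Language expresses the same when-condition/do-action structure directly, with the added ability to substitute instance names via macros at run time.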
4.5. Operator interface and utilities

TACL and EPICS each contain a graphical operator interface with text, mouse, and knob input. The TACL interface was built using a proprietary graphics library and is therefore neither portable nor network accessible; an X11 driver is available but is not used. EPICS contains two such interfaces, one based only on X, the other on Motif and a commercial widget set. All
three tools are driven by display files which are produced using a WYSIWYG editor. In addition to the graphical interface, EPICS contains several other utilities which had either not been developed in TACL or were not sufficiently developed to meet CEBAF's requirements. These include an alarm package, an archive system, and a save/restore tool.

4.6. Migration strategy: running a mixed system
Because both EPICS and TACL are name-based systems, it is possible for a TACL client to communicate with an EPICS IOC through an emulation library or a gateway. For the migration effort at CEBAF, the gateway solution was adopted. This choice was made because all TACL clients connect only to the STAR process, which in turn connects to the servers executing logic; it was therefore only necessary to modify the STAR to enable all existing clients to talk to EPICS. In particular, the existing operator displays can be used to control the system, postponing the effort needed to re-draw all displays and allowing all effort to be focused on the IOC installation. Furthermore, subsystems can be converted to EPICS one at a time, minimizing the effect upon the commissioning schedule. The combined TACL + EPICS architecture is shown in Fig. 3.
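The following is a hedged sketch of the forwarding step added to the STAR: a name-based read request from a TACL client is satisfied from an EPICS IOC through Channel Access. The request and reply structures are hypothetical stand-ins for the actual STAR protocol, and Channel Access initialization is assumed to have been done at STAR startup.

    #include <string.h>
    #include <cadef.h>

    /* Hypothetical STAR-side structures for a single named value. */
    struct tacl_request { char name[40]; };
    struct tacl_reply   { char name[40]; double value; int status; };

    /* Satisfy a TACL read request from an EPICS IOC via Channel Access. */
    int forward_to_epics(const struct tacl_request *req, struct tacl_reply *rep)
    {
        chid chan;

        strcpy(rep->name, req->name);

        if (ca_search(req->name, &chan) != ECA_NORMAL ||
            ca_pend_io(2.0) != ECA_NORMAL) {
            rep->status = -1;                /* no IOC serves this name */
            return -1;
        }

        ca_get(DBR_DOUBLE, chan, &rep->value);
        rep->status = (ca_pend_io(2.0) == ECA_NORMAL) ? 0 : -1;
        ca_clear_channel(chan);
        return rep->status;
    }

Write requests and monitor subscriptions can be forwarded in the same way, so the existing displays continue to work unchanged while subsystems move to EPICS.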
Fig. 3. Software architectural view of the transitional TACL-EPICS control system.
Because of the ability to run a mixed TACL-EPICS system, a decision was made to convert a single subsystem (RF) in time for the injector commissioning (December 1993), and then to substantially finish the conversion in the first quarter of 1994, prior to and during the commissioning of the rest of the machine. This effort is now underway and has encountered no major difficulties. Experience indicates that it takes 2-4 weeks for a developer to become proficient in creating EPICS logic sets (called databases in EPICS terminology). The EPICS designs are typically a factor of 2-5 smaller than the original TACL designs, with 20% more records added to support the old operator interfaces.
5. Summary

A migration of the CEBAF accelerator control system from TACL to EPICS is well underway and promises to produce many benefits for the laboratory. EPICS has proved to have a well-layered software architecture which allows modular development and software sharing by many laboratories and many developers. CAMAC support was easily integrated, as was the TACL-to-EPICS gateway. The TACL design shares a sufficiently large number of features with EPICS to allow a straightforward mapping of the control system design (and in many cases the code) from one to the other. The first major subsystem is currently being migrated and will be tested with beam in December of 1993. When fully completed next year, the CEBAF control system will be the largest EPICS system in operation. Following the upgrade, CEBAF will have a uniform control system for both the accelerator and the physics detectors, increasing the interoperability of the separate systems and reducing the overall software development and maintenance requirements.
References

[1] L.R. Dalesio et al., The experimental physics and industrial control system architecture: past, present, and future, this conference.
[2] R. Bork et al., CEBAF control system, CEBAF-PR-89013.
[3] W.A. Watson III et al., CODA: a scalable, distributed data acquisition system, 8th Conf. on Real-Time Computer Applications in Nuclear, Particle and Plasma Physics, Vancouver, Canada, 1993.