Fusion Engineering and Design 48 (2000) 31 – 36 www.elsevier.com/locate/fusengdes
A distributed real-time system for event-driven control and dynamic data acquisition on a fusion plasma experiment

J. Sousa a,*, A. Combo a, A. Batista a, M. Correia a, D. Trotman b, J. Waterhouse b, C.A.F. Varandas a

a Associação EURATOM/IST, Centro de Fusão Nuclear, Instituto Superior Técnico, 1049-001 Lisboa Codex, Portugal
b UKAEA/EURATOM Fusion Association, UKAEA Fusion, Culham Science Centre, Abingdon, Oxon OX14 3DB, UK

Received 1 July 1999; accepted 29 March 2000
Abstract

A distributed real-time trigger and timing system, designed in a tree-type topology and implemented in VME and CAMAC versions, has been developed for a magnetic confinement fusion experiment. It provides sub-microsecond time latencies for the transport of small data objects, allowing event-driven discharge control with failure counteraction, dynamic pre-trigger sampling and event recording, as well as accurate simultaneous triggers and synchronism on all nodes with acceptable optimality and predictability of timeliness. This paper describes the technical characteristics of the hardware components (a central unit composed of one or more reflector crates, event and synchronism reflector cards, the event and pulse node module, and fan-out and fan-in modules) as well as the software for both tests and integration in a global data acquisition system. The results of laboratory operation for several configurations and the overall performance of the system are presented and analysed. © 2000 Elsevier Science S.A. All rights reserved.

Keywords: Distributed real-time system; Dynamic data acquisition; Fusion plasma
* Corresponding author. Tel.: +351-21-8417819; fax: +351-21-8417475. E-mail address: [email protected] (J. Sousa).

1. Introduction

The new generation of magnetic confinement fusion experiments aims at long-pulse or even steady-state operation [1-4]. Control and data acquisition will be merged in a distributed real-time system permitting the implementation of dynamic experiment scheduling, event-driven discharge control with failure counteraction and dynamic data acquisition [5,6]. Such a control and data acquisition system consists of multiple nodes sharing plasma state variables, which are propagated through the interconnections of a low time-latency network. This network provides support for the management and transmission of prioritised signals, alarms, events and other objects, as well as for trigger scheduling and synchronism distribution. Current network links are oriented towards bulk transfer of data, having time latencies of no less
0920-3796/00/$ - see front matter © 2000 Elsevier Science S.A. All rights reserved. PII: S0920-3796(00)00123-X
J. Sousa et al. / Fusion Engineering and Design 48 (2000) 31–36
than tens of milliseconds and provide non-deterministic propagation of triggers and synchronism. These restrictions limit real-time operation, since results may not be attained with acceptable optimality and predictability of timeliness. Even multimedia-oriented links, with their reserved bandwidth for time-critical tasks, cannot fully satisfy the required performance. A distributed trigger and timing system (TTS) has been developed to fill this gap by providing sub-microsecond time latencies for the transport of small objects, as well as accurate simultaneous triggers and synchronism on all nodes in a large experiment campus where previous timing system architectures are unsuitable. The architecture and hardware components of this system are described elsewhere [7,8].

This paper is organised as follows: Section 2 gives a short description of the system; Section 3 presents the technical characteristics of the VME hardware components; Section 4 contains the test results; Section 5 describes the software implementation; and Section 6 presents the conclusions.

2. System description

The trigger and timing system has been designed in a tree-type topology, with a central unit providing time synchronisation and event distribution between all satellite nodes (Fig. 1). The interconnections allow bi-directional communication
Fig. 1. The topology of the trigger and timing system.
from one node to all the other nodes, permitting them to synchronously share small data objects. The central unit contains at least one reflector unit (RU) holding a maximum of 16 event and synchronism reflector (ESR) modules. An optical fibre cable connects each ESR to an event and pulse node (EPN) module. The number of connections can be expanded up to 256 by using a maximum of 17 RUs. Pre-defined and event-dependent timing actions are performed in each of the EPNs, inserted in host crates scattered all over the experiment campus. Expansion of the input/output capabilities of the EPN is carried out by fan-in and fan-out modules and through the external event bus (XBUS), which allows a mix of EPNs and special function cards to be connected in the same crate.
3. Technical characteristics of the VME hardware components
3.1. Event and pulse node module

Each EPN produces the timing signals required for the operation of the experiment diagnostics and digitisers. It also performs the broadcasting, processing and recording of externally generated events for real-time control purposes. The EPN module contains an eight-output-channel timing unit (TU) [8], implemented in a field programmable gate array (FPGA). The TU is programmed through the host bus with a vector of timing parameters defining several sequences of pulses per output channel, which can vary dynamically with time and/or with events: multiple-frequency clock pulse trains for data acquisition, signals generated at predefined times to synchronise the diagnostic operation, and sequences of gating signals of variable duration required for control functions. The TU includes a time counter that starts counting after receiving a START event, allowing it to record the time of occurrence of events and to start the static sequences at predefined instants. The EPN also routes events bi-directionally between the optical communication interface, the host bus and the XBUS, and one-way from the eight local inputs and to the TU (Fig. 2).
Fig. 2. The sources and destinations of the events on the EPN and its routing paths.

The main characteristics of this module are:
1. It allows the generation of up to 64 k frames, each defined by a frequency, number of pulses, polarity and mode of operation (pulsed mode, level mode, and gated or continuous operation). These signals can be:
1.1. Multiple-frequency clock pulse trains for data acquisition (frequencies up to 10 MHz and a maximum pulse count of more than 16×10⁶).
1.2. Signals generated at predefined times to synchronise the diagnostic operation.
1.3. Sequences of gating signals of variable duration required for control functions.
2. The output timing signals are defined with 32-bit resolution to produce pulses from 100 ns to a maximum duration of 430 s, with a start time settable with a resolution of 1 µs during this period.
3. The outputs may provide pulses of predefined width (from 20 to 80 ns, in 20 ns steps) and their idle state when no frame is generated can be programmed as high or low level.
4. Gated ECL clock frequencies from 50 up to 800 MHz may be generated from a PLL for externally clocked and triggered devices (gate time is defined by TU channel 0). A buffered output is provided on a differential LEMO connector.
5. The outputs can be set to a neutral state for testing purposes while the output indicators remain active.
6. The EPN contains a PLL-based clock recovery circuit for recovering the global 20 MHz clock from the communications link.
7. It allows the detection of link failure with automatic switching to a local oscillator.
8. It detects the START and STOP events for TU initialisation.
9. The SYNC signal guarantees resynchronisation of the TU after a link failure lasting less than 10 ms (longer if a local oscillator with an accuracy better than 50 ppm is used).
10. An event source arbitration flag controls the pre-defined frame and immediate event processing.
11. Each event can generate the same frame simultaneously in any of the output channels.
12. The EPN contains an event recording unit to time stamp event occurrences, up to a maximum of 256 k events.
13. It includes a counter of the number of frames generated by each channel, up to a maximum of 64 k frames.
14. It contains a 960 MBaud duplex optical communication interface based on an 850 nm vertical cavity surface emitting laser with a standard plug, allowing 300 m links in 62.5/125 µm fibre optic cable.
15. The timing unit has been implemented in an FPGA, which provides the ability to upgrade the design.
3.2. Reflector unit

The reflector unit performs synchronous event distribution to several remote locations on the experiment, where the events may initiate predefined local timing actions. The RU follows a modular approach and may contain a maximum of 16 ESR modules. The main characteristics of this module are:
1. The managed events, coded as 16-bit words, are always stored in order to guarantee that none is missed when one or several collisions happen.
2. Each event can have one of four priorities, corresponding to different average propagation time latencies at high event rates.
3. When more than 16 reflector modules are needed, the system must have at least three RUs to guarantee the same propagation time between nodes.
4. The support crate is a slightly modified, readily available, 16-slot VME 3U unit without a controller.
3.3. Event and synchronism reflector module

This module is the basic unit of the RU. Events received on one card are broadcast to all the others in a hardware-implemented priority-based scheme. Its main features are:
1. It contains a duplex optical communication interface working at 960 MBaud, based on an 850 nm vertical cavity surface emitting laser with a standard SC plug.
2. It allows up to 300 m links in 62.5/125 µm fibre optic cables.
3. Each link may have a total length that is a multiple of 7 m. Synchronism between all nodes is guaranteed by programming, in each EPN, the delay of the START event in multiples of 25 ns.
4. The control logic for packet arbitration and broadcast is implemented in a fast in-system programmable logic device.
5. It allows several selectable sources for the 20 MHz clock reference on the master card.
6. It contains inputs for system enable and the local start/stop on the master card.
7. It provides 12/15 inputs for the generation of synchronism/normal events.
8. It automatically generates the TU synchronism events on the master card.
3.4. Fan-out and fan-in modules

Each EPN card is optionally connected to one or more modules that provide optical and galvanically isolated electrical channels to the diagnostics or control systems. Their common technical characteristics are:
1. Eight optical channels via ST connectors.
2. Eight galvanically isolated electrical channels via LEMO EPL.00 series connectors.
3. Eight electrical channels through the secondary host bus connector.
4. Optional Thevenin or RC terminations on all inputs.
In addition, the fan-in module has:
5. A programmable input active level.
6. Input enable/disable programmable by jumper.
7. Optional RC filters with Schmitt triggers on the outputs.
8. Operation with a minimum active pulse width of 50 ns and a maximum dead time between pulses of 30 ns.
while the fan-out module has:
9. Galvanically isolated 10 Ω electrical outputs with optional series terminations.
10. Electrical channels via the secondary host bus connector, selectable as input or output by jumper.
11. Outputs that supply the EPN timing signals and gated clocks (where the gate is generated by an EPN channel selectable by jumper).
12. Output clocks comprising the EPN programmable clock (100, 50, 33 and 25 MHz) and, selected by jumper, its inverted version.
13. A push button to test the optical outputs.
4. Tests

Tests to verify the system functionality have been performed both by computer simulation and on the real hardware, by checking the responses to supplied stimuli. Several quantitative tests related to time errors have also been made. The most important results of these tests are (Fig. 3):
1. The long-term time jitter measured on one output was less than 0.5 ns, and between two outputs in the same or in different modules it was in both cases less than 1.5 ns.
2. The maximum propagation time of a single event from the reflector to an EPN output was 1 µs.
3. The maximum propagation time of a single event from one EPN input to another EPN output was 1.9 µs. Fig. 4 depicts the distribution of the time elapsed between the arrival of an event (at 0 µs) and the output pulse on the same EPN (between 0.8 and 1.2 µs) and on another EPN (between 1.5 and 1.9 µs). This uniform distribution results from the event sampling on the inputs at a fixed unsynchronised rate.

Fig. 5. Statistical distribution of the random events used for system stressing.
Fig. 3. Time delay values for critical paths on the system.
4. Regarding the propagation time of a single event when the system is stressed by the presence of several randomly timed events, parameterised by a range of average event rates, tests have shown that for an average rate of 4.5 Mevents/s (generated by four random signal generators, each producing the Poisson distribution depicted in Fig. 5) a single event propagated throughout the system almost undisturbed. Since the resolution of the timing unit is 100 ns, this means an additional delay of less than 100 ns. For the maximum aggregated system event rate (20 Mevents/s), the calculated worst-case additional delay is 800 ns.
5. Concerning the bit error rate, the system was tested with a total of 10¹⁰ events sent between two nodes without any being corrupted or missed. The calculated error rate depends on the link length and is usually less than 10⁻¹².
5. Software
Fig. 4. Distribution of the time elapsed between the arrival of an event and the output pulse on the same and on another EPN.
A device driver and a test program have been developed in ANSI C for the OS-9 operating system (OS) on a VME 68040 machine. The software structure allows easy porting to another OS or hardware platform, since the OS- and hardware-dependent routines are clearly separated in the code.
The device driver follows the structure of a typical data acquisition system software, providing entries for device setting, testing, device triggering and data collection, and is being integrated into the MAST Data Acquisition System [9]. The parameter setting task is accomplished through an X-Windows interface and may be performed from a remote location.
6. Conclusions

The VME version of a distributed real-time trigger and timing system has been developed. Tests have shown that time latencies on the microsecond scale are attained, as well as predictable propagation of triggers and synchronism, thereby allowing real-time operation of the system. The adaptation of the hardware and software to the CAMAC standard is under development. Several enhancements can be made in current and future designs to reduce the event propagation time, implement an absolute time representation and improve the real-time programming of the system. These goals can be achieved using more powerful programmable logic devices. This system will be implemented on the MAST tokamak, presently under construction at Culham Laboratory.

Acknowledgements

This work, carried out in the frame of the Contract of Association between the European Atomic Energy Community and 'Instituto Superior Técnico', has also been funded by 'Fundação para a Ciência e a Tecnologia' and PRAXIS XXI. The work performed at UKAEA Fusion is jointly funded by the UK Department of Trade and Industry and EURATOM.

References
[1] Y. Peysson, et al., 16th IAEA Fusion Energy Conference, IAEA-CN-64/E4, Montreal, 1996.
[2] Y. Shimomura, R. Aymar, V. Chuyanov, M. Huguet, R. Parker and the ITER Joint Central Team and Home Teams, ITER overview, Fusion Eng. Des. 36 (1997) 9-21.
[3] G. Grieger, et al., Physics and engineering studies of Wendelstein 7-X, in: Proceedings of the 13th International Conference on Plasma Physics and Controlled Nuclear Fusion Research, Washington, 1990, IAEA, Vienna, 1991, p. 525.
[4] S. Itoh, et al., 16th IAEA Fusion Energy Conference, IAEA-CN-64/EP-6, Montreal, 1996.
[5] C.A.F. Varandas, B.B. Carvalho, C. Correia, et al., On site developed components for control and data acquisition on next generation fusion devices, Fusion Eng. Des. 36 (1997) 177.
[6] G. Raupp, K. Behler, G. Neu, W. Treutterer, D. Zasche, T. Zehetbauer and the ASDEX Upgrade Team, Experience from ASDEX Upgrade discharge control management for long pulse operation, Fusion Eng. Des. 43 (1999).
[7] J. Sousa, A. Combo, A. Batista, et al., A distributed system for fast timing and event management on the MAST experiment, Fusion Eng. Des. 43 (1999) 407.
[8] J. Sousa, A. Batista, A. Combo, et al., The 32 bit timing unit of a real-time event-based control system for a nuclear fusion experiment, IEEE Trans. Nucl. Sci. 45 (1998) 2052.
[9] J. Waterhouse, S.J. Manhood, MAST Data Acquisition System: System Architecture, in: Proceedings of the 2nd IAEA Technical Committee Meeting on Data Acquisition and Management for Fusion Research, Lisboa, 1999.