Implementation of rapid imaging system on the COMPASS tokamak


G Model

ARTICLE IN PRESS

FUSION-9310; No. of Pages 4

Fusion Engineering and Design xxx (2017) xxx–xxx

Contents lists available at ScienceDirect

Fusion Engineering and Design journal homepage: www.elsevier.com/locate/fusengdes

Implementation of rapid imaging system on the COMPASS tokamak

Ales Havranek a,b,∗, Vladimir Weinzettl a, David Fridrich a,c, Jordan Cavalier a, Jakub Urban a, Michael Komm a

a Institute of Plasma Physics of the Czech Academy of Sciences, Za Slovankou 3, 182 00 Prague 8, Czech Republic
b Faculty of Electrical Engineering, Czech Technical University in Prague, Technicka 2, 166 27 Prague 6, Czech Republic
c Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Brehova 7, 115 19 Prague 1, Czech Republic

ARTICLE INFO

Article history:
Received 2 October 2016
Received in revised form 21 March 2017
Accepted 22 March 2017
Available online xxx

Keywords: Camera; Data acquisition; Video processing; Tokamak

ABSTRACT

The COMPASS tokamak has recently been equipped with two new fast (max. 800 kfps) color cameras, Photron FASTCAM Mini UX100, operating in the visible spectral range. A new data acquisition system called RIS (Rapid Imaging System), including both the node's software and hardware, was developed for these cameras to ensure automatic and reliable operation and integration into the control and data acquisition system of COMPASS. The node provides camera function control, parameter setting, data transfer from the camera to the node computer, demosaicing, encoding of a preview video, and data saving to the COMPASS database. Raw data that retain the full information are saved to HDF5 files (up to 10 GB per shot). Moreover, the node encodes a color video (less than 60 MB per shot) that is displayed on the COMPASS control room panel, in the logbook and in the COMPASS database. The video processing toolchain also allows playing the HDF5 files with raw data in almost every media player with a simple double click. All data are available in less than 2 min after the shot.

© 2017 Elsevier B.V. All rights reserved.

1. Introduction

Fast cameras observing visible light coming from the interaction of the edge plasma with a neutral gas (called gas puff imaging when active puffing in front of a camera is applied) or with plasma facing components are commonly used to monitor plasma behaviour during discharges in present-day tokamaks. These observations, historically initiated by high speed photography [1], allow studying in detail plasma-wall interaction [2], tracking dust particles [3], and following fast magnetohydrodynamic (MHD) events such as filament structures of edge localized modes [4], blobs [5], or tearing modes and sawteeth [6,7]. Such observations have already been performed on the COMPASS tokamak using two greyscale EDICAM-type cameras [8] at up to 116 kfps. Recently, COMPASS has been equipped with a new camera-based diagnostic called the Rapid Imaging System (RIS).

The RIS is a powerful discharge-overview diagnostic on the COMPASS tokamak, based on two fast color cameras. These cameras were selected to provide additional information about plasma and dust composition hidden in colors. This is useful for fast discharge analysis done by COMPASS tokamak operators immediately after plasma discharges and for subsequent physics investigations: plasma discharge overview, runaway beam monitoring, MHD activity observation/localization (tearing modes, sawteeth, ELMs, . . .), comparison of camera and probe data (e.g. emission vs. density and potential), H-mode studies (ELM filaments, etc.), strike point localization, strike point splitting during RMPs, disruption studies, dust particle tracking, tomography, plasma composition estimation, etc.

The paper describes technical issues of the camera implementation on the COMPASS tokamak, its integration into the COMPASS database (CDB), and data visualization in the frame of the logbook and of the control panel in the control room.

∗ Corresponding author at: Institute of Plasma Physics of the Czech Academy of Sciences, Za Slovankou 3, 182 00 Prague 8, Czech Republic. E-mail address: [email protected] (A. Havranek).

2. Rapid imaging system

The RIS consists of two identical Photron FASTCAM Mini UX100 cameras equipped with collection optics, and a node. The collection optics is composed of a wide-angle lens combined with relay lenses. The node has a hardware part and a software part. The hardware part contains a control PC and network accessories. The node's software is divided into three parts. The first part is the controller, programmed in C++. It takes care of direct communication with the cameras, their control, and raw mosaic data collection and storage to HDF5 files. The second part, the interface written in Java, provides shot sequence control, camera settings transfer, raw

http://dx.doi.org/10.1016/j.fusengdes.2017.03.129 0920-3796/© 2017 Elsevier B.V. All rights reserved.

Please cite this article in press as: A. Havranek, et al., Implementation of rapid imaging system on the COMPASS tokamak, Fusion Eng. Des. (2017), http://dx.doi.org/10.1016/j.fusengdes.2017.03.129


Fig. 1. Rapid Imaging System (RIS) scheme.

and video data transfer to the COMPASS database, and local disk space management. The third part, the video processing toolchain, demosaics the raw data, then processes and encodes a video (Fig. 1).

2.1. Cameras

The cameras, named RISEye 1 and RISEye 2, are of the Photron FASTCAM Mini UX100 type. They have a 12-bit CMOS sensor with 10 µm square pixels covered by a Bayer color filter array. The full frame resolution of 1280 × 1024 pixels can be used up to 4 kfps, the High Definition resolution (1280 × 720 px) up to 6.4 kfps, and a reduced resolution of 640 × 8 px up to 800 kfps. The global electronic shutter allows a minimum exposure time of 1 µs. The ISO 12232 Ssat standard qualified light sensitivity is ISO 5000. Images are collected in the internal memory of the camera, which limits the maximum recording time. The cameras have been purchased with a memory size of 4 GB (the maximum size is 16 GB) to cover whole COMPASS discharges (∼1 s). The minimal record time at the maximal frame rate ranges from 0.45 s to 1.12 s. Data are transferred over 1 Gb/s Ethernet. The cameras are equipped with TTL triggers and synchronization inputs that can withstand high overvoltage.

Both cameras should be located outside strong magnetic fields due to their sensitive electronics. Therefore, the endoscope of the cameras usually consists of a wide angle lens (e.g. f = 4.8 mm) to obtain a large overview of the vacuum vessel (reaching up to 180◦ is possible) and relay optics allowing placement of the camera outside the toroidal field coils. If this is still not far enough, the cameras can eventually be magnetically shielded. The final field of view for our typical set-ups is limited by the diagnostic port and reaches a 93◦ angle of view. The resolution is about 2–3 mm at 1 m distance. Additional spectral filters (e.g. interference) are not used.
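The quoted recording times can be cross-checked from the sensor geometry and memory size. The sketch below is only an estimate, assuming the 12-bit pixels are packed in camera memory (1.5 bytes per pixel); the helper name is ours, not part of the camera API.

```python
def record_time_s(mem_bytes, width, height, fps, bits_per_px=12):
    """Approximate recording time for a camera with on-board memory.

    Assumes pixels are packed at `bits_per_px` bits per pixel in
    camera memory; real cameras may add per-frame overhead.
    """
    bytes_per_frame = width * height * bits_per_px / 8
    return mem_bytes / (bytes_per_frame * fps)

# 4 GB memory, full frame 1280x1024 at 4 kfps -> roughly 0.5 s
t_full = record_time_s(4e9, 1280, 1024, 4000)

# 4 GB memory, reduced 640x8 stripe at 800 kfps -> roughly 0.65 s
t_fast = record_time_s(4e9, 640, 8, 800_000)
```

Both values fall inside the 0.45–1.12 s range quoted above, which is consistent with the packed-pixel assumption.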

2.2. Node's hardware

The node's hardware should ensure safe and reliable RIS operation and provide enough processing power for both cameras. The cameras are set, and data are collected, via a metallic Ethernet interface, which makes them very easy to use; however, it does not prevent mixing of the camera and LAN grounds. Therefore, the cameras are connected to a galvanically isolated switch that uses optical fibers for the connection to the control PC. As a result, the cameras have their own sub-network and do not share network resources with other devices. The sub-network ensures a stable and fast data transfer; the maximal throughput for one camera is 67 MB/s. The switch is connected to the PC with a 1 Gb/s link, but there is a possibility to double this speed by using two identical channels in a VLAN switch configuration.

Data from the cameras to the PC are transferred in a raw format to keep the full information contained there. The camera data are 12-bit, but the camera driver extends them to 16-bit. This operation is performed on the control PC in a single thread. This behavior could not be changed, and therefore a processor with high single-thread performance was selected (Intel Core i7-6700K). Raw data (16-bit) are continuously saved to HDD. A RAID0 configuration with two HDDs is used to achieve a sufficiently large file writing speed. The control PC has a large and fast 64 GB DDR4 RAM. As a result, demosaicing and post processing can be handled entirely in RAM.

2.3. Node's software

The node software integrates the RIS into the COMPASS CODAC (Control, Data Acquisition, and Communication) system [9]. It is connected through CORBA (Common Object Request Broker Architecture) to FireSignal, the highest level of the experiment control system. The setting of RIS parameters is managed within a FireSignal GUI. The parameters themselves are provided to FireSignal by the node. The discharge sequence starts when the operator gives the command to FireSignal; then the settings are distributed to the different nodes. The parameters are handled by the interface part (Java). Parameters related to the cameras are saved to a JSON formatted file. Then the controller (C++) is run with the JSON file and the shot number as command line parameters. The controller is a console application that communicates with the cameras, controls their operation and saves the collected raw data (16-bit) to an HDF5 (Hierarchical Data Format) file (an open format, excellent for archiving). When the data are saved, the application finishes and control returns to the interface. Then the third part of the node, the video processing toolchain, starts. When the video is created, control is returned to the interface again. The data are then stored in the CDB [10]. Data files are copied directly to the CDB cache, while metadata with information about the data files, camera settings, and CDB related information are stored in the CDB through a jython-python CDB interface. At the end, FireSignal is notified that the data acquisition is finished.

2.3.1. Controller

The controller, a console application written in C++, is designed to be as reliable as possible. It performs the following operations: it reads the JSON settings file, initializes the camera driver, detects the status of the connected cameras, validates and checks the camera parameters, starts the recording process, collects and saves data into HDF5 files, and finally cleans up and finishes. The parameter control is implemented here; therefore, if wrong camera parameters were read from the JSON settings file, they are replaced by the closest valid values. The resulting settings are saved to the JSON file, this file is reread by the interface, and the corrected settings are transferred to FireSignal and saved to the CDB with the metadata.
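The "closest valid value" replacement described above can be illustrated as follows. This is a minimal sketch, not the actual C++ controller code; the supported frame-rate list and the JSON field names are assumptions for illustration only.

```python
import json

# Hypothetical subset of supported frame rates (fps); the real camera
# exposes a longer, resolution-dependent list.
VALID_FPS = [50, 125, 250, 500, 1000, 2000, 4000]

def clamp_fps(requested):
    """Replace an unsupported frame rate with the closest supported one."""
    return min(VALID_FPS, key=lambda v: abs(v - requested))

def validate_settings(settings):
    """Return a copy of the settings with invalid values corrected."""
    corrected = dict(settings)
    corrected["fps"] = clamp_fps(settings["fps"])
    return corrected

# An operator requests 3500 fps; the closest supported rate is 4000 fps.
settings = {"camera": "RISEye 1", "fps": 3500, "exposure_us": 10}
corrected = validate_settings(settings)
corrected_json = json.dumps(corrected)  # re-saved for the interface to reread
```

Re-serializing the corrected settings mirrors the round trip in the text: the interface rereads the JSON file and reports the values that were actually applied.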
Camera data are acquired concurrently after the shot in a raw format to RAM as fast as possible. Each pixel is transferred as 12-bit, but in RAM it is stored as a 16-bit value. This extension of the data is encapsulated in the camera driver and cannot be changed. Simultaneously, the data are continuously saved to an HDF5 file frame by frame. There is also the possibility to share the data with the video processing part of the node using the Windows named pipe mechanism. The raw data format was chosen because it is lossless and the data are not processed by an unknown algorithm such as a color balance. This approach keeps the full information in the data.

2.3.2. Interface

The second part of the node is an interface written in Java. Java was used instead of C++ because compilation of CORBA modules on Windows 10 (the OS that we use) is challenging, and a camera driver for Linux does not exist. In addition, it is easier to reuse the existing Java implementation for the CDB integration. This node part takes care of communication with the FireSignal central server, parameter handling, the shot sequence, data management, saving to the CDB, and disk space management. The interface connects to FireSignal after it is started and transfers the parameters with their descriptions and previously set values. These parameters, with updated values, are transferred from FireSignal to the node when a shot sequence starts. The node saves the parameters into a JSON file and starts the controller. After the controller finishes, the existence of the data files is checked. Scripts for video processing are created, and a video processing task is started. Optionally, it can be started together with the first node part; named pipes are used, allowing on-the-fly video encoding. If video trimming is enabled, the plasma current of the actual discharge is loaded from the CDB and the discharge end time is obtained by simple thresholding. The compressed video is trimmed without recompression.
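The frame-by-frame HDF5 saving described above can be sketched with h5py. This is an illustrative sketch, not the actual controller: the dataset name, chunking and file layout are our assumptions, and the real implementation is C++ streaming from the camera driver.

```python
import os
import tempfile

import h5py
import numpy as np

def save_frames(path, frames, width=1280, height=1024):
    """Append raw 16-bit mosaic frames to an extendable HDF5 dataset."""
    with h5py.File(path, "w") as f:
        dset = f.create_dataset(
            "raw",                            # hypothetical dataset name
            shape=(0, height, width),
            maxshape=(None, height, width),   # unlimited along the frame axis
            dtype="uint16",
            chunks=(1, height, width),        # one chunk per frame
        )
        for frame in frames:                  # frames arrive one at a time
            n = dset.shape[0]
            dset.resize(n + 1, axis=0)        # grow by one frame
            dset[n] = frame

# Simulate three incoming full frames and save them.
h5_path = os.path.join(tempfile.mkdtemp(), "shot.h5")
save_frames(h5_path, (np.full((1024, 1280), i, dtype=np.uint16) for i in range(3)))
```

Writing one chunk per frame lets the file grow incrementally while acquisition is still running, which matches the "continuously saved frame by frame" behaviour of the node.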
The video and the raw data files are saved to the CDB as


soon as they become available. The last step is a check of the local free disk space. If the free disk space is under a threshold, the oldest video and raw data files are deleted from the control PC. The currently installed capacity of 4 TB provides space for more than 360 records at the maximal data size.

2.3.3. Video processing toolchain

Video processing of our camera data uses the following tools: AviSynth+, avs4x264mod, x264, MP4Box and our source filter. The interface starts the video processing when the raw data files are saved. The first step is the generation of the AviSynth script file. AviSynth is free open-source software that works as a frame server; operations on the video are described inside the AviSynth script file. The AviSynth script (only 15 lines) can rotate, flip and crop the video. The current frame number and time are added as a subtitle to the video. Then the data format conversion is done. AviSynth cannot work with HDF5 files directly; this functionality is provided by our in-house HDF5 source filter plug-in. This plug-in (a source filter in AviSynth) reads raw data from an HDF5 file and demosaics them. A Malvar-He-Cutler demosaicing algorithm, optimized for the camera Bayer mask arrangement and integer arithmetic, is used [11]. This algorithm is based on linear filtering; it produces high-quality outputs and is extremely fast. The MATLAB function demosaic is based on the same algorithm. The demosaicing algorithm interpolates a raw image with only one number per pixel to a full-color image with three numbers per pixel (RGB coding, a standard format of color image representation). The HDF5 source filter supports raw data reading from the pipes created by the controller. Generally, AviSynth handles many more video operations, e.g. addition of the COMPASS logo and filtering. Another nice feature is that AviSynth scripts can be played in almost every media player.
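The demosaicing step can be illustrated with plain bilinear interpolation; the in-house filter uses the higher-quality Malvar-He-Cutler kernels instead, so this is a simplified stand-in. The RGGB mask layout is our assumption, as the paper does not state the camera's Bayer arrangement.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of a Bayer mosaic (simplified stand-in for the
    Malvar-He-Cutler linear filter used by the actual toolchain).

    Assumes an RGGB pattern: R at (even, even), B at (odd, odd),
    G on the remaining checkerboard.
    """
    h, w = raw.shape
    ys, xs = np.mgrid[0:h, 0:w]
    masks = {
        "R": (ys % 2 == 0) & (xs % 2 == 0),
        "G": (ys % 2) != (xs % 2),
        "B": (ys % 2 == 1) & (xs % 2 == 1),
    }
    kernel = np.ones((3, 3))
    rgb = np.empty((h, w, 3))
    raw = raw.astype(float)
    for i, c in enumerate("RGB"):
        m = masks[c].astype(float)
        # Weighted local average over the known samples of this channel:
        # sum of nearby samples divided by the number of samples present.
        rgb[..., i] = (convolve(raw * m, kernel, mode="mirror")
                       / convolve(m, kernel, mode="mirror"))
    return rgb
```

On a uniform scene the interpolation is exact, which gives a quick sanity check; the Malvar-He-Cutler filter improves on this by correcting each interpolated value with the gradient of the co-located channel.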
Therefore, with our HDF5 source filter for AviSynth and some additional settings, a double click on the HDF5 file is all that is required to play the file with the raw data, even if the HDF5 file uses compression. Currently, we are using AviSynth+ because it supports multithreading and 64-bit architectures. After the AviSynth script is generated, avs4x264mod is run with the AviSynth script and x264 options as parameters. The avs4x264mod tool is a simple AviSynth pipe tool for x264: it starts the x264 video encoder with the AviSynth output as its input. The video is encoded with the H.264/MPEG-4 AVC codec. This codec was selected due to its wide support by media players and web browsers. The encoder is x264, a free software library developed by VideoLAN. We are using a build with AviSynth and GPAC support, which allows saving the compressed video to an MP4 (MPEG-4 Part 14) digital multimedia container. Videos are compressed to sizes as small as 20–30 MB (at a bitrate of 2000 kbps). To preserve compatibility with web browsers, the "Main" profile is used. The x264 encoder is fast and gives a wide range of output video quality control, so we can adjust the compression for our needs, i.e. optimize it for an almost static scene, including more I-frames for better rendering of movement in the video. If the video trim option is enabled, the interface runs MP4Box, which trims the compressed video without any recompression. The trim is done at the closest I-frame. This operation is fast because it works with small files (20–30 MB). MP4Box is a multimedia packager tool provided by GPAC, an open source multimedia framework used mainly for research and academic purposes. Finally, the video is saved to the CDB (Fig. 2, Video 1), where it was added as a new file type, .mp4. A web interface to the CDB, WebCDB, supports direct downloads of the mp4 files and of the raw HDF5 files from the cameras without any transposition or data type change.
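The encoder invocation can be sketched as a command-line construction. The option names below follow the standard x264 CLI, but the exact flag set used on COMPASS is not stated in the paper, so treat the argument order and file names as assumptions.

```python
def build_x264_cmd(avs_script, out_mp4, bitrate_kbps=2000, profile="main"):
    """Build an avs4x264mod-style command line (illustrative sketch).

    avs4x264mod pipes the AviSynth frame-server output into x264;
    the bitrate and profile match the values quoted in the text.
    """
    return [
        "avs4x264mod",                     # AviSynth -> x264 pipe tool
        avs_script,                        # generated 15-line .avs script
        "--bitrate", str(bitrate_kbps),    # ~2000 kbps -> 20-30 MB per shot
        "--profile", profile,              # "main" for browser compatibility
        "--output", out_mp4,               # MP4 container via GPAC support
    ]

# Hypothetical file names; a real run would use the shot-numbered paths.
cmd = build_x264_cmd("shot_11425.avs", "shot_11425.mp4")
# cmd could then be passed to subprocess.run(cmd) by the interface.
```

Keeping the command a plain argument list (rather than a shell string) avoids quoting problems when shot-numbered paths are substituted in.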
The MATLAB CDB client was also updated to support working with the raw camera data in the same way. The HDF5 files with raw data are stored in the permanent CDB storage on a ZFS filesystem with GZIP compression enabled; the compression factor is 65%. The video from one of the RIS cameras is loaded from the CDB and placed on the control panel in the COMPASS control room and also in the Logbook.

Fig. 2. Disruption end, shot #11425, t = 1242 ms, 6.25 kfps.

3. Conclusion

The Rapid Imaging System is a fully operational diagnostic that has been regularly used on the COMPASS tokamak since January 2016 (shot #11416). It produces high-speed color video data that are synchronized with the other data acquisition systems and diagnostics. It was recognized early after the RIS installation on COMPASS that it is extremely useful in many areas of plasma physics investigation. Moreover, the long-term influence of the harsh conditions near fusion devices (strong magnetic fields, X-ray, and neutron irradiation) on cameras can be studied. Overview color videos created after a discharge contain a lot of analysable visual information about the discharge evolution, which helps to optimize tokamak operation. The COMPASS database client was improved for comfortable high volume data processing. The presented RIS system is optimized to use the full camera performance.

Acknowledgment

The work at IPP Prague was supported by the Ministry of Education, Youth and Sports CR (MEYS) project No. LM2015045.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at doi:10.1016/j.fusengdes.2017.03.129.

References

[1] D.H.J. Goodall, High speed cine film studies of plasma behaviour and plasma surface interactions in tokamaks, J. Nucl. Mater. 111–112 (1982) 11–22.
[2] N. Nishino, K. Takahashi, H. Kawazome, S. Yamamoto, et al., High-speed 2-D image measurement for plasma-wall interaction studies, J. Nucl. Mater. 337–339 (2005) 1073–1076.
[3] S. Bardin, J.-L. Briançon, F. Brochard, J. Bougdira, et al., Investigating transport of dust particles in plasmas, Contrib. Plasma Phys. 51 (2–3) (2011) 246–251.
[4] A. Kirk, B. Koch, R. Scannell, M. Walsh, et al., Evolution of filament structures during edge-localized modes in the MAST Tokamak, Phys. Rev. Lett. 96 (18) (2006) 185001.
[5] S. Zweben, R. Maqueda, D. Stotler, V.J. Mastrocola, et al., High-speed imaging of edge turbulence in NSTX, Nucl. Fusion 44 (1) (2004) 134–153.
[6] M.A. Van Zeeland, J.H. Yu, M.S. Chu, W.P. West, et al., Tearing mode structure in the DIII-D tokamak through spectrally filtered fast visible bremsstrahlung imaging, Nucl. Fusion 48 (9) (2008) 092002.
[7] J.H. Yu, M.A. Van Zeeland, M.S. Chu, R.J. La Haye, et al., Fast imaging of transients and coherent magnetohydrodynamic modes in DIII-D, Phys. Plasmas 16 (5) (2009) 056114.
[8] A. Szappanos, M. Berta, M. Hron, S. Zoletnik, et al., EDICAM fast video diagnostic installation on the COMPASS tokamak, Fusion Eng. Des. 85 (3–4) (2010) 370–373.
[9] M. Hron, F. Janky, J. Pipek, D. Valcarcel, et al., Overview of the COMPASS CODAC system, Fusion Eng. Des. 89 (3) (2014) 177–185.
[10] J. Urban, J. Pipek, M. Hron, A.S. Duarte, et al., Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB), Fusion Eng. Des. 89 (5) (2014) 712–716.
[11] H.S. Malvar, L. He, R. Cutler, High-quality linear interpolation for demosaicing of Bayer-patterned color images, Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing 3 (2004) 485–488.
