Nuclear Instruments and Methods in Physics Research A 453 (2000) 440–444
The BELLE DAQ system

Soh Yamagata Suzuki, Masanori Yamauchi, Mikihiko Nakao, Ryosuke Itoh, Hirofumi Fujii

National High Energy Accelerator Research Organization, Oho 1-1, Tsukuba, Ibaraki 305, Japan

Accepted 21 June 2000
Abstract

We built a data acquisition system for the BELLE experiment. The system was designed to cope with an average trigger rate of up to 500 Hz at a typical event size of 30 kB. This system has five components: (1) the readout sequence controller, (2) the FASTBUS-TDC readout systems using charge-to-time conversion, (3) the barrel shifter event builder, (4) the parallel online computing farm, and (5) the data transfer system to the mass storage. This system has been in operation for physics data taking since June 1999 without serious problems. © 2000 Published by Elsevier Science B.V. All rights reserved.
1. Introduction

The BELLE experiment at the KEK B-factory aims for the first observation of CP violation in B-meson decay and a precise determination of the CKM matrix elements [1]. The design luminosity is 10³⁴ cm⁻² s⁻¹, and the beam crossing interval is about 2 ns [2]. Under this condition, the expected rate of physics events is about 100 Hz, and the nominal event data size is about 30 kbyte. The background rate is estimated to be about the same level, but it may increase up to a few hundred Hz depending on the beam condition. Therefore, we designed our data acquisition system to be able to take data at trigger rates up to 500 Hz.

2. DAQ system components

2.1. Unified TDC readout system and SVD flash ADC readout system

The BELLE data acquisition system reads the data from a silicon vertex detector (SVD), a central
drift chamber (CDC), aerogel Cherenkov counters (ACC), time-of-flight counters (TOF), an electromagnetic calorimeter (ECL), a K_L and muon detector (KLM), and a pair of extreme forward calorimeters (EFC). All detectors except the SVD are read out by TDC-based unified readout systems [4]. In this system, the analog signals from the CDC, ACC, TOF, ECL and EFC are digitized by Q-to-T conversion: the charge of the detector signal is converted to a pulse with two edges. The leading edge gives the signal timing, while the duration between the two edges is a measure of the charge (Fig. 1). This technique is very effective in reducing the number of cables. The timing is read by LeCroy LR1877S multihit TDCs. The signal from the ECL has a very wide dynamic range, so we split the signal into three Q-to-T systems with different gains and logically merge their outputs in such a way that the range is distinguished by the number of edges at the TDC. The KLM signal carries hit information only, which is serialized via a time multiplexor and read by TDCs. All the subtrigger timing information from the Trigger (TRG) subsystem is also read by TDCs.
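Decoding a Q-to-T channel can be pictured with a minimal sketch, assuming the multihit TDC delivers a time-ordered list of alternating leading and trailing edge times for a single channel; the function name and data layout are hypothetical and not the actual BELLE decoding code.

# Minimal sketch (hypothetical, not the BELLE decoder): turn the edge times of
# one Q-to-T channel into a hit time and a pulse width proportional to the charge.
def decode_qtot_edges(edge_times_ns):
    """Edges are assumed time-ordered and alternating: leading, trailing, ..."""
    hits = []
    for leading, trailing in zip(edge_times_ns[0::2], edge_times_ns[1::2]):
        hits.append({
            "time_ns": leading,             # leading edge gives the signal timing
            "width_ns": trailing - leading  # width is the measure of the charge
        })
    return hits

# Example: two hits recorded on one channel
print(decode_qtot_edges([120.0, 310.5, 980.0, 1105.0]))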
Fig. 1. The signal is converted to timing data.
The unified TDC readout system consists of a FASTBUS part and a VME part. The FASTBUS part is equipped with multihit TDCs and FASTBUS Processor Interfaces (FPI), while the VME part is equipped with a system controller and dual-port memory modules connected to the FPIs (Fig. 2). The trigger is received by a Timing Distributor Module (TDM) and distributed to the TDCs. The TDC treats the trigger as a common stop or common start signal, depending on the detector. The controller reads the TDC data via the dual-port memory of the FPI and then clears the busy flag. This system has been tested up to a 1 kHz trigger rate and a 64 kbyte data size.

The readout system for the SVD has a different structure, in order to handle its large number of channels (about 82,000). All the channels are digitized by flash ADC modules, and then sparsified by on-board DSPs [3]. The SVD readout system has four VME crates to distribute the data to the event builder (Fig. 3). To reduce the latency, the transmitter node of the event builder reads the flash ADCs directly.
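The readout sequence above can be summarized as a simple per-crate control loop. The sketch below is only an outline of that flow; the helper callables (wait_for_trigger, read_dual_port_memory, send_to_event_builder, clear_busy) are hypothetical stand-ins for the actual VME/FPI accesses and busy logic, not a real driver API.

# Hypothetical outline of the per-crate readout sequence (not the real software).
def readout_loop(wait_for_trigger, read_dual_port_memory,
                 send_to_event_builder, clear_busy, n_events):
    for _ in range(n_events):
        trigger = wait_for_trigger()            # trigger distributed by the TDM,
                                                # used as TDC common stop/start
        event = read_dual_port_memory(trigger)  # controller reads TDC data via the FPI
        send_to_event_builder(event)            # ship the event fragment downstream
        clear_busy()                            # re-enable triggers for this crate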
Fig. 2. A schematic view of the unified TDC readout system. The data from the Q-to-T system are read out by the TDCs and sent to the event builder.
Fig. 3. A schematic view of the SVD readout system. The data are read out by four readout crates.
2.2. The event builder

We built the first barrel-shifter-type event builder used in a real experiment [5]. For the data flow, our system uses GLINK, which carries a 1.2 Gbps ECL signal (Fig. 4). The GLINK transceiver is implemented as an SBus module and is controlled by a VME SPARC CPU module.
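The barrel-shifter scheduling can be pictured with a small sketch, assuming the 12 readout systems and 6 farm nodes described below: in time slot t, farm node j receives the fragment from readout system (j + t) mod 12, so no two nodes contend for the same transmitter, and after 12 slots each node has collected a complete event. This is only an illustration of the scheduling rule, not the ECM logic.

# Illustration of barrel-shifter scheduling for event building (not the ECM logic).
N_TX = 12   # readout (transmitter) systems
N_RX = 6    # online farm (receiver) nodes

def schedule(slot):
    """Return the transmitter assigned to each receiver in the given time slot."""
    return {rx: (rx + slot) % N_TX for rx in range(N_RX)}

# One full barrel rotation: every receiver sees every transmitter exactly once.
for t in range(N_TX):
    print(f"slot {t:2d}: " +
          "  ".join(f"rx{rx}<-tx{tx:2d}" for rx, tx in schedule(t).items()))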
Fig. 4. The event builder has two types of links.
To control the traffic globally, an External Control Module (ECM) is used. This module uses TAXI links which connect all the GLINK transceivers.
Fig. 5. The virtual 12×6 switch is made from 4×4 and 2×2 switches.
Fig. 6. The online farm sends sampled data to the data quality monitor.
The timing of transmission is strictly controlled by this module. To connect the 12 data readout systems and the 6 nodes of the online farm [6], we built a virtual 12×6 switch using 4×4 and 2×2 GLINK switch modules (Fig. 5). The required throughput for the whole system is 15 MB/s. We measured the performance excluding the data copying from VME, and the total throughput of our event builder is 160 Mbyte/s [5]. The switching latency is about 200 μs, which is negligible for us. The data copying time from VME via DMA limits the performance of the event builder to 4–5 Mbyte/s per node, but the total throughput is still more than 15 MB/s under realistic data-taking conditions.

2.3. The online computer farm and the storage system

The online computer farm consists of 96 RISC CPUs. The roles of the online computing farm are the data formatting for offline reconstruction, the fast event reconstruction [7] and the level 3 trigger. Some events are sampled on the online farm and sent to a PC server via Fast Ethernet for the data quality monitoring (Fig. 6). The sampled events are monitored by accumulating various histograms, which can be viewed online. The event display is also implemented in the same scheme.

Our mass storage is a digital video tape library (SONY Petasite) equipped with SONY DIR-1000 tape drives, and it can write the data at 15 MB/s [8]. This tape library is located at the computing center, which is 2 km away from the experimental hall (Fig. 7). We use three optical links to transfer the data. The tape library has three drives, and they are used alternately to save the time spent mounting and unmounting tapes.

Fig. 7. The storage system is 2 km away from the experimental hall.
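As a back-of-envelope check, the design numbers quoted above are mutually consistent: at the 500 Hz design trigger rate and the 30 kbyte nominal event size, the required bandwidth is 15 MB/s, which matches both the event-builder requirement and the DIR-1000 writing speed.

# Back-of-envelope check of the design numbers quoted in the text.
trigger_rate_hz = 500       # design trigger rate
event_size_kb = 30          # nominal event size (kbyte)

required_mb_per_s = trigger_rate_hz * event_size_kb / 1000.0
print(f"required throughput: {required_mb_per_s:.0f} MB/s")  # 15 MB/s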
3. Performance in the physics run

In December 1998, we started steady data taking to calibrate the detectors with cosmic rays under the nominal magnetic field (1.5 T).
Fig. 9. Example data sizes from a typical physics run.
Fig. 8. The deadtime of the calibration runs. It is proportional to the trigger rate up to 500 Hz. When we reach the rate limit, the deadtime seems to increase rapidly. There are also runs taken with a large data size, which give a large deadtime even at a low rate.
We also tested the performance of the DAQ system in these runs. We already had a full set of the trigger sources at this early stage, which included the TOF multiplicity and back-to-back triggers, the CDC r-φ track trigger, the ECL energy and cluster counting triggers and the KLM muon trigger. The nominal trigger rate for calibration was around 100 Hz, but we could easily change the condition from a few Hz to 500 Hz. Under the high trigger rate condition, most events have no tracks, so the data size and the dead time are small (Fig. 8). We also tested the stability for large data sizes, when the SVD sparsification code had been changed and the SVD data size was dominated by noise hits. When we tested this system at a 500 Hz trigger rate in the cosmic ray run, the deadtime was maintained at about 10% as designed. When the trigger rate exceeds 500 Hz, the accept rate saturates. In such cases, the dead time is about 10%, so the accept rate is about 450 Hz. These results indicate that stable operation requires the trigger rate to be smaller than 400 Hz, and that the dead time is proportional to the total throughput.
Fig. 10. Total data size of the physics runs. The period from August to October was the summer shutdown.
Since June 1999, this system has been working in the physics runs, and there has been no serious problem. The typical event data size is 28 kbyte (Fig. 9), and the average data size varies from 20 to 30 kbytes. We have already taken 400 physics runs in two run periods and recorded 100 M events, which corresponds to 2.9 Tbytes of data (Fig. 10). The intrinsic deadtime mainly comes from the data reading time.
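The scaling of the dead time can be summarized with a rough model, assumed here only for illustration: if each trigger blocks the system for an effective time τ, the dead-time fraction is approximately rate × τ. The 10% dead time at 500 Hz quoted above then corresponds to τ of roughly 200 μs; since τ grows with the event size, runs with smaller events show proportionally smaller dead times.

# Rough consistency check: dead-time fraction ~ trigger rate * effective per-event
# dead time tau (an assumed model, with tau inferred from the quoted 10% at 500 Hz).
def deadtime_fraction(rate_hz, tau_s):
    return rate_hz * tau_s

tau = 0.10 / 500.0          # ~200 microseconds
for rate in (100, 250, 500):
    print(f"{rate:3d} Hz -> {deadtime_fraction(rate, tau):.1%} dead time")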
4. Summary

In the November 1999 physics run, the trigger rate was always smaller than 250 Hz, and the dead time was about 2–4% (Figs. 11 and 12).
Our DAQ system performs as expected at the design trigger rate in the current physics runs. Although the luminosity is still much lower than the design value, we expect no significant increase in the background for higher-luminosity operation, and this system can easily cope with it.
References
Fig. 11. The trigger rate and deadtime of a typical physics run. The deadtime is always lower than 2%. As the trigger rate goes down, the deadtime becomes smaller.
Fig. 12. The average deadtime through one physics run. It is proportional to the trigger rate, and since the data size is about 30 kbyte, the deadtime is proportional to the throughput.
[1] BELLE Collaboration, Belle technical design report, KEK Report 95-1, 1995.
[2] KEK accelerator group, KEKB B-factory design report, KEK Report 95-7, 1995.
[3] M. Tanaka et al., A control and readout system for the BELLE silicon vertex detector, Nucl. Instr. and Meth. A 432 (1999) 422.
[4] M. Nakao et al., Data acquisition system for the Belle experiment, IEEE Trans. Nucl. Sci. 47 (2) (1999) 56–60.
[5] S.Y. Suzuki et al., The BELLE event building system, IEEE Trans. Nucl. Sci. 47 (2) (1999) 61–64.
[6] R. Itoh, BBMQ – computer farm for KEK B-factory, Proceedings of the International Conference on Computing in High Energy Physics, 1992, p. 475.
[7] T. Higuchi et al., Event processing at the BELLE experiment, Proceedings of the International Conference on Computing in High Energy Physics, 1998, available from http://www.hep.net/chep98/PDF/512.pdf.
[8] H. Fujii et al., KEK-PREPRINT-97-6, 1997.