Electronics and data acquisition system for the ICAL prototype detector of India-based neutrino observatory

Nuclear Instruments and Methods in Physics Research A 701 (2013) 153–163, http://dx.doi.org/10.1016/j.nima.2012.11.007


A. Behere a, M. Bhuyan c, V.B. Chandratre a, S. Dasgupta c,*, V.M. Datar b, S.D. Kalmani c, S.M. Lahamge c, N.K. Mondal c, P.K. Mukhopadhyay a, P. Nagaraj c, B.K. Nagesh c, S. Pal c, Shobha K. Rao c, D. Samuel c, M.N. Saraf c, B. Satyanarayana c, R.S. Shastrakar a, R.R. Shinde c, K.M. Sudheer a, S.S. Upadhya c, P. Verma c

a Electronics Division, Bhabha Atomic Research Centre, Mumbai 400085, India
b Nuclear Physics Division, Bhabha Atomic Research Centre, Mumbai 400085, India
c Department of High Energy Physics, Tata Institute of Fundamental Research, Mumbai 400005, India

* Corresponding author. E-mail address: [email protected] (S. Dasgupta).

Article history: Received 3 October 2012; Received in revised form 2 November 2012; Accepted 2 November 2012; Available online 10 November 2012

Abstract

The India-based Neutrino Observatory (INO) collaboration has proposed to build a 50 kton magnetized Iron Calorimeter (ICAL) detector with the primary goal to study neutrino oscillations, employing Resistive Plate Chambers (RPCs) as active detector elements. A prototype of the ICAL detector has been built in order to develop and characterize the intrinsic sub-systems, like RPCs, gas system, electronics and data acquisition system, etc. This paper describes in detail the readout electronics as well as the VME-based data acquisition system for the prototype detector.

© 2012 Elsevier B.V. All rights reserved.

Keywords: India-based Neutrino Observatory; INO; ICAL; Electronics; Data acquisition; VME

1. Introduction

The India-based Neutrino Observatory (INO) collaboration is planning to build a 50 kton magnetized Iron Calorimeter (ICAL) detector to study atmospheric neutrinos and to make precision measurements of neutrino oscillation parameters [1]. Around 28,800 Resistive Plate Chambers (RPCs) will be used as sensitive detector elements to search for muon neutrino induced charged current interactions inside the detector mass. A prototype of the ICAL detector has been constructed to develop and study the individual detector components, including RPCs, gas system, front-end electronics and the data acquisition system. The prototype detector (without the magnet) consists of 12 layers of RPCs of 1 m × 1 m lateral area, and is continuously tracking cosmic ray muons [2]. The front-end electronics for the prototype detector has been developed in-house. During the initial phase of the prototype development, the back-end of the data acquisition system was built using CAMAC [3]. However, in order to overcome the underlying limitations posed by the CAMAC hardware, the back-end has been upgraded to a VME-based system. The data acquisition software has also been upgraded along with the hardware and is automated to a large extent. In this paper, various features of the front-end readout electronics and the hardware as well as the software components of the VME-based data acquisition system are described. The functionality of other sub-systems, such as the power supply systems and the ambient parameter monitoring system, is also discussed.

2. The ICAL detector

The ICAL detector is envisaged as a detector for atmospheric neutrinos as well as a future far detector for a neutrino factory beam. It will use magnetized iron as the target mass and Resistive Plate Chambers (RPCs) as the active detector medium. One of the major physics goals of the ICAL detector is the unambiguous estimation of the neutrino oscillation parameters. The detector will be magnetized to a field of 1.3 T, which will also facilitate the study of matter effects for νμ (muon neutrinos) and ν̄μ (muon antineutrinos) through electric charge identification of muons in charged current interactions. This will lead to the possible determination of the neutrino mass hierarchy. The detector will consist of three identical and adjacent modules made up of 151 horizontal layers of 56 mm thick low carbon iron plates interleaved with 40 mm gaps to house the RPC units. Fig. 1 is a schematic of the ICAL detector with a view of the top of the magnet coils. Table 1 lists some important parameters of the ICAL detector structure.


Fig. 1. Schematic diagram of the ICAL detector (48 m × 16 m × 14.5 m) with a view of the top of the magnet coils.

Table 1
ICAL detector parameters.

  Modules                  3
  Module dimension         16 m × 16 m × 14.5 m
  Detector dimension       48 m × 16 m × 14.5 m
  Iron layers              151
  Iron plate thickness     56 mm
  RPC layers               150
  Gap for RPC units        40 mm
  RPC units/layer/module   64
  RPC units/module         9600
  Total RPC units          28,800
  Magnetic field           1.3 T

3. The prototype detector

A magnet-less prototype of the ICAL detector has been developed as a precursor to building the final detector. The prototype detector consists of 12 layers of 1 m × 1 m RPCs. Each RPC is made of two float glass plates, of thickness 3 mm, with an intermediate gas gap of 2 mm. The glass plates are coated with a semi-resistive paint and a differential voltage of ±4.9 kV is applied across the glass plates using copper tape contacts on the coating. The RPCs are operated in avalanche mode using a gas mixture of Freon (R134a), Isobutane and SF6 in the proportion of 95.5:4.2:0.3 by volume. The average chamber current is about 100 nA. Orthogonal pick-up panels are mounted on either side of the gas gap, each having 32 pick-up strips with a strip pitch of 30 mm, which provide the particle hit coordinates in a horizontal plane. A detailed description of the activities related to the fabrication and the characterization of glass RPCs, undertaken during the development of the prototype stack, can be found in Ref. [4]. Fig. 2 shows the prototype detector with the front-end readout electronics on either side.

The data acquisition system records the RPC strip hit profile and the timing information following the passage of a charged particle through the detector, provided the track hit pattern satisfies the trigger criteria set by the user. The data collected is used to characterize the detector performance by estimating RPC efficiency, multiplicity, timing resolution, etc. In addition, the detector noise rate and the ambient parameters are periodically recorded by the data acquisition system in order to monitor the long term stability of the detector. These results are also used to optimize a number of parameters related to the RPC design, gas mixture and readout electronics.


Fig. 3. Signal flow in the data acquisition system.

Fig. 2. The prototype detector with the front-end readout electronics on either side.

4. Signal flow in the data acquisition system

Each RPC has 32 pick-up strips per readout plane. The data acquisition system therefore must deal with a total of 768 electronic readout channels. The signals from the orthogonal readout planes are treated as two independent and identical systems. The signal flow in the data acquisition system is illustrated in Fig. 3. A detailed description of the trigger system of the prototype detector can be found in Ref. [5]. This paper describes the functionality of the remaining components of the data acquisition system.

4.1. Front-end electronics

The signals from the RPCs are processed through a chain of readout electronics in order to convert them to a format legible to the back-end data acquisition system. All the constituent modules of the front-end readout chain are custom made.

4.1.1. Preamplifier

Typical signals produced by RPCs operated in the avalanche mode have an amplitude of about 2.5 mV across a 50 Ω load and a rise time of about 1 ns [6]. Hence, these signals need to undergo high speed, low noise pre-amplification before any further processing. A two-stage Hybrid Micro Circuit (HMC) based preamplifier of gain around 80 is used for this purpose. The RPC produces signals of negative and positive polarity at the anode and cathode, respectively. Thus, two different types of first stage HMCs, BMC 1595 and BMC 1597, have been designed for the respective electrodes to produce an output signal of negative polarity with a nominal gain of 10. The second stage HMC, BMC 1513, amplifies the output of the first stage further by a nominal factor of 10. Fig. 4 shows the circuit diagram of the negative input preamplifier. The circuit for the positive input preamplifier is identical except that the BMC1595 chip is replaced by the BMC1597 chip. The design specifications for the first and the second stage HMCs are listed in Table 2. The gain vs. input voltage characteristics of the preamplifier are shown in Fig. 5. Each preamplifier board can process signals from eight pick-up strips and therefore four such boards are needed per readout plane of each RPC.
4.1.2. Analog front-end

The output of the preamplifiers is fed to the Analog Front-end (AFE), which consists of two sections, as listed below.

- Discriminator: The amplified analog RPC signals are converted to differential ECL pulses using discriminator circuits with an adjustable threshold (up to 500 mV). Typically, an AFE threshold of about 20 mV is used.
- Level 0 trigger: The discriminator output signals of every group of eight channels are logically OR-ed to generate the Level 0 trigger signals, which are then shaped to pulses of width 100 ns.

Fig. 4. Circuit diagram for the negative input preamplifier used to amplify the RPC pulse collected from the positive electrode by a factor of about 80. The circuit for the positive input preamplifier is identical except that the BMC1595 chip is replaced by the BMC1597 chip.

Table 2
Specifications of the first and the second stage HMCs used in the design of the preamplifiers.

  Parameter             First stage HMC   Second stage HMC
  Input impedance       50 Ω              50 Ω
  Input dynamic range   100 mV            200 mV
  Nominal gain          10                10
  Bandwidth             ~300 MHz          ~250 MHz
  Rise time             ~1.2 ns           ~2 ns
  Power supplies        ±6 V              ±6 V
  Power dissipation     120–140 mW        ~150 mW

Each AFE handles 16 strip signals and thus each RPC requires two such boards per readout plane.

4.1.3. Digital front-end

The discriminated signals and the Level 0 trigger signals are received by the Digital Front-end (DFE). The discriminated signals are first translated to TTL pulses and are shaped to a width of 750 ns in order to overcome the trigger latency. Thereafter, three different sections in the DFE board deal with different tasks related to the signal processing, as described below.

- Level 1 trigger: The Level 1 trigger signals are formed by M-fold (M = 1, 2, 3, 4) coincidence of the consecutive Level 0 trigger signals. In order to generate the 2-Fold, 3-Fold and 4-Fold trigger signals, the Level 0 trigger signals are translated to TTL pulses and fed to a Xilinx CPLD (XC9536-5PC44C) which contains the necessary logic for generating the M-fold signals. The ECL Level 0 trigger signals are logically OR-ed outside the CPLD to form the 1-Fold trigger signal. The four Level 1 trigger signals are used for generating the final trigger signal at the back-end. The ECL 1-Fold signal is also used at the back-end for the measurement of the timing of the RPC signals. (An illustrative software model of the M-fold logic is sketched at the end of this section.)
- Event readout: This section handles the latching of the strip hit information. The strip hit data are registered in a latch and, when invoked by the back-end on the generation of the final trigger signal, are flushed out serially.
- Monitoring: The noise rates of the 32 strip signals, the four Level 1 trigger signals and four fixed frequency signals are monitored periodically. The monitoring period is set by the user. The channels to be monitored are selected sequentially, initiated by the back-end, using a multiplexer in the DFE. The counting of the noise rates is done by the scaler at the back-end. The fixed frequency signals are generated in the DFE and are monitored to serve as a reference to ensure the reliability of the monitoring process.

Both the event and the monitor data from a DFE are identified by its unique board identification code, preset using on-board static switches. These sections are implemented using a Xilinx CPLD (XC95288-10HQ208C). Each RPC uses a single DFE per readout plane, as one DFE can handle 32 strip signals. Fig. 6 illustrates the signal flow in the AFE and the DFE boards.

Fig. 5. Gain vs. input voltage characteristics for each channel of the HMC preamplifier board used to amplify the signal from the RPC pick-up strips. All the channels exhibit a net gain of about 80 for an input voltage in the range of the RPC pulse (~2.5 mV).
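The M-fold coincidence described in the Level 1 trigger item above is implemented inside the DFE CPLD, and the firmware itself is not reproduced in this paper. The following is a minimal software model of that logic under the assumption that the four Level 0 signals of one readout plane are packed into the low four bits of a byte.

```cpp
#include <array>
#include <cstdint>

// Minimal software model (not the CPLD firmware) of the DFE Level 1 logic.
// 'level0' carries the four Level 0 signals of one readout plane in bits 0-3;
// fold[m-1] is asserted when m consecutive Level 0 groups fire together.
std::array<bool, 4> level1Folds(uint8_t level0)
{
    std::array<bool, 4> fold{};                        // 1-Fold ... 4-Fold
    for (int m = 1; m <= 4; ++m) {
        const uint8_t mask = static_cast<uint8_t>((1u << m) - 1u);
        for (int shift = 0; shift + m <= 4; ++shift) { // slide the window
            const uint8_t window = static_cast<uint8_t>(mask << shift);
            if ((level0 & window) == window) {
                fold[m - 1] = true;
                break;
            }
        }
    }
    return fold;
}
```

For example, level0 = 0x6 (groups 1 and 2 firing together) asserts the 1-Fold and 2-Fold outputs but neither the 3-Fold nor the 4-Fold one.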

4.1.4. Signal routers

The back-end data acquisition system communicates with the front-end readout electronics through daisy-chained links via two signal routers, as listed below:

- Control and data router: This routes the control and the data signals between the back-end controller and the DFEs. The strip hit data from all 12 RPCs are transferred serially through two event daisy chains, one for each readout plane. In order to enable continuous monitoring of the RPC strip signals, 24 monitor daisy chains, one per RPC per plane, are used. One Control and Data Router (CDR) is used for routing the signals belonging to one readout plane of all RPCs and thus the system requires two CDR boards.
- Trigger and timing router: This routes the Level 1 trigger signals from all the DFEs to the final trigger module at the back-end, for generating the final trigger signal. The ECL 1-Fold signals from all RPCs are also transferred from the respective DFEs to a commercial time to digital converter (TDC) module at the back-end, to record the RPC timing information. A single Trigger and Timing Router (TTR) board is used for routing the trigger and the timing signals for the entire system from the front-end to the back-end.

4.2. VME back-end

A VME-based data acquisition system at the back-end records the information collected from the detector after it has been processed through the front-end readout chain. The back-end communicates with the host PC via a PCI-VME bridge [7] and consists of commercial as well as custom modules.

4.2.1. Final trigger module

This module receives the Level 1 trigger signals formed in the DFEs and generates the final trigger signal if the event topology satisfies the user-specified trigger criteria. The final trigger signal initiates the data acquisition system to record the event data, which comprises the RPC strip hit pattern and the timing information. The Final Trigger Module (FTM) is custom designed using the CAEN V1495 general purpose module [8]. The design features of the FTM as well as its performance validation in the prototype stack are described in Ref. [5].
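The actual trigger criteria and their implementation in the V1495 firmware are given in Ref. [5]. As a purely illustrative stand-in, the fragment below assumes that the FTM sees one Level 1 fold signal per layer and fires the final trigger when a user-specified minimum number of layers report a coincident hit; this criterion is an assumption made for the example, not the system's actual logic.

```cpp
#include <bitset>
#include <cstddef>

// Illustrative trigger decision only; the real, user-configurable criteria
// are implemented in the V1495 FPGA and described in Ref. [5].
bool finalTrigger(const std::bitset<12>& layerFolds, std::size_t minLayers)
{
    return layerFolds.count() >= minLayers;
}

// Example: finalTrigger(folds, 4) demands coincident hits in at least
// 4 of the 12 RPC layers of the prototype stack.
```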

4.2.2. Control and readout module

This module sends out the control signals and reads the strip hit data from the DFEs and is custom designed using the CAEN V1495 general purpose module [8]. The module comprises three sections with different functionality, as described below:

- Event control: On receiving the final trigger signal from the FTM, this section fans out the event trigger to other back-end modules. At the same time, it sends the appropriate control signals to the DFEs for latching the RPC strip hit profile.
- Event readout: Each DFE, on being initiated by the back-end, flushes out the strip hit information serially through the event daisy chain for the corresponding readout plane. The serial data is received by the event readout section of the control and readout module, converted to 16-bit parallel data and written into FIFO memory buffers of depth 256 words. There is one FIFO buffer for storing the strip hit data from each readout plane of all the RPCs. Data from both the buffers are read by the data acquisition software through the VME bus. (A software model of this repacking is sketched below.)
- Monitor control: This section sends the relevant control signals to the DFEs to invoke the sequential monitoring of the noise rates of the individual RPC strips, four reference signals and the Level 1 trigger signals. It also sends the monitor trigger to the scaler periodically, according to the monitor period set by the user.

Fig. 6. Schematic diagram of signal flow in the Analog Front-end and the Digital Front-end boards. The analog RPC signals from the preamplifier output to the AFE input are shown at the top. The daisy-chained signals from the previous station are shown to enter the DFE from the left side and exit to the next station from the right side. The timing signal and the Level 1 trigger signals generated by the DFE are also shown on the right side.
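The event readout path described above repacks the serial bit stream from the DFEs into 16-bit words inside a 256-word-deep FIFO. The class below is a plain software model of that repacking, not the V1495 firmware; the MSB-first bit order is an assumption made only for illustration.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Software model of the event readout deserializer: serial bits from the
// event daisy chain are packed MSB-first into 16-bit words and stored in a
// FIFO of depth 256, mirroring the buffer in the control and readout module.
class EventFifo {
public:
    void pushBit(bool bit)
    {
        word_ = static_cast<uint16_t>((word_ << 1) | (bit ? 1u : 0u));
        if (++nBits_ == 16) {            // a full 16-bit word is ready
            if (fifo_.size() < 256)      // depth of the hardware FIFO
                fifo_.push_back(word_);
            word_  = 0;
            nBits_ = 0;
        }
    }

    // What a VME read-out of the FIFO would return.
    std::vector<uint16_t> drain()
    {
        std::vector<uint16_t> out(fifo_.begin(), fifo_.end());
        fifo_.clear();
        return out;
    }

private:
    std::deque<uint16_t> fifo_;
    uint16_t word_  = 0;
    int      nBits_ = 0;
};
```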

The signal flow in the Control and readout module is schematically represented in Fig. 7.

4.2.3. TDC

A commercial multi-hit TDC of 100 ps resolution [9] records the timing information from both the readout planes of each RPC, using the ECL 1-Fold signals generated in the respective DFEs. As mentioned in Section 4.1.3, one timing signal is generated per RPC plane by OR-ing the 32 strip signals on that plane. The TDC, on receiving the event trigger from the control and readout module, acquires the timing information and generates an interrupt over the VME bus.

4.2.4. Scaler

The noise rates of the individual RPC strips, four reference signals and the Level 1 trigger signals are counted sequentially and periodically using a commercial 32-bit scaler [10]. On receiving the monitor trigger from the control and readout module, the scaler records the noise rate information and generates a VME interrupt.

4.2.5. QDC

The charge information of the analog RPC pulses is recorded using a commercial 12-bit QDC (charge to digital converter) [11]. The amplified analog RPC signals, used as input to the QDC, are fed directly to the module after being delayed appropriately in order to make them coincident with the event trigger. This module is used optionally in the data acquisition system.

5. Ambient parameter monitoring system

Monitoring of ambient parameters, like temperature and relative humidity, as well as the RPC operating parameters, such as applied high voltage and chamber current, is essential for evaluating the long term performance of the detector. The RPC characteristics, in terms of efficiency and noise rates, are correlated with the ambient conditions and the operating parameters in order to understand the dependence of RPC performance on these factors. This is necessary to determine the optimum operating conditions for the detector.

Commercial sensors [12] have been used to record the ambient temperature and relative humidity, with accuracies of ±0.2 °C and ±2.5%, respectively. An interface has been developed to continuously acquire these data from the respective sensors. The data is stored in an appropriate format and is used for analyzing the detector performance in the long term.

6. Power supply systems

Two types of power supply systems are necessary for the prototype stack: a high voltage power supply system for the RPCs and a low voltage power supply system for the front-end electronics. The high voltage to be applied across the RPC glass electrodes in order to establish the required electric field is mentioned in Section 3. Ramping the high voltage up or down in a step-wise manner is essential to undertake RPC plateau studies and also to prevent damaging the detector during operation. Moreover, since the health of an RPC can be judged by measuring the chamber current, the high voltage power supply system is required to have the provision to monitor the supply voltage and the load current per channel. The front-end electronics for the detector stack, consisting of the preamplifiers, the analog and the digital front-ends and the signal routers, need d.c. power supplies of ±6 V and ±8 V. Average load currents on these power rails are 25 A and 90 A for ±6 V and 15 A and 25 A for ±8 V, respectively. Precise control of the supply voltages at the load points and monitoring of the output voltages and the load currents are essential features of the low voltage power supply system.

The high voltage as well as the low voltage power supply system for the prototype detector has been built using commercial components. The high voltage power supply system comprises two 12-channel high voltage supply modules, one for each polarity, plugged into an SY2527 power supply mainframe [13]. The low voltage power supply system consists of several supply modules mounted on an Easy system crate [14] and connected to the SY2527 mainframe through a branch controller. The 48 V raw d.c. power is supplied to the Easy system using a three-phase AC/DC converter. A power supply distribution scheme has been designed to feed power to the individual boards of the front-end electronics from the low voltage power supply system. Multiple bus-bars made of thick copper strips are used so that the ohmic voltage drops across the connecting conductors are minimized. The input channels from the power supply system are connected to one end of the bus-bars, while the power supplies for the front-end boards are tapped directly from the bus-bars. The bus-bar based low voltage power supply distribution system has also helped to improve the overall ground quality of the front-end electronics.

Fig. 7. Signal flow in the control and readout module.


Both the power supply systems can be connected to a host PC through the Ethernet port and support control and monitoring of essential parameters like output voltage and load current. A suitable interface has been developed to set and monitor the voltage and the load current on different output channels of the high voltage as well as the low voltage power supply system. Continuous monitoring of the output voltage and the load current per channel of the power supply systems is useful in studying the long term stability of the RPCs. The control and monitoring interface for both the power supply systems can also be accessed remotely via intranet.

7. Data acquisition software

The data acquisition software has been developed in C++ using Qt libraries [15] for designing the front-end Graphical User Interface (GUI) and ROOT libraries for data storage and analysis. The software has a multi-threaded architecture running behind an intuitive GUI. This provides the user prompt access to the desired information while the processes run in the background. The software runs on a machine with a 2.80 GHz Intel Xeon CPU and 1 GB RAM on an open-source 32-bit Linux operating system. A brief description of the data acquisition software in its incipient stage can be found in Ref. [16]. This paper describes the upgraded version of the software, which is highly automated and offers greater user flexibility.

7.1. The data acquisition process

Prior to starting the data acquisition process, the user selects the appropriate settings in the front-end GUI. The software checks whether the necessary modules are present in the VME crate and initializes them with the user-defined settings. The system then waits for an event trigger from the FTM and simultaneously starts the noise rate monitoring cycle. The event data as well as the noise rate data are displayed in real time. The collected data is stored in the ROOT file format, which ensures seamless integration with all plotting and analysis features offered by ROOT; a minimal illustration of this storage scheme is sketched at the end of Section 7.2. Appropriate error messages, warnings and other necessary information are made available to the user at each step of the data acquisition process.

7.2. Front-end GUI

Fig. 8 shows screenshots of the front-end GUI of the data acquisition software, showing the event display and the noise rate data display. The DAQ panel is customized with the settings for different VME modules as well as other data acquisition parameters. The floating panel, when minimized, exhibits the minimal settings required for data acquisition, while the settings for the VME modules can be accessed by maximizing it. The central region of the GUI contains a multiple document viewing interface where the live muon tracks, RPC noise rate data, plotting and analysis results, etc., are displayed. The top and left side panels provide access to several ROOT-based graphing and data analysis tools. While the data acquisition process is going on, the current data files are listed on the top right side of the GUI. Selecting a particular file lists all the branches stored in the tree in that file. Clicking a branch plots its contents in a ROOT canvas, and further analysis can be done using various options offered by the 'Plot' toolbar in the center. The error messages, warnings and other information are printed in the output log at the bottom.

Fig. 8. Screenshots of the front-end GUI of the data acquisition software showing (a) the event display and (b) the noise rate data display.
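As noted in Section 7.1, the collected data are stored in ROOT format. The fragment below is a minimal sketch of how an event could be booked and filled in a ROOT TTree; the file, tree and branch names are illustrative assumptions and are not taken from the actual software.

```cpp
#include "TFile.h"
#include "TTree.h"

// Minimal sketch of ROOT-format event storage (all names are illustrative).
void writeEvents()
{
    TFile file("run_00001.root", "RECREATE");
    TTree tree("events", "RPC strip hits and timing");

    UInt_t  xHits[12];   // strip-hit bitmaps, X plane, one word per layer
    UInt_t  yHits[12];   // strip-hit bitmaps, Y plane
    Float_t tdcTime[24]; // TDC times, one per readout plane

    tree.Branch("xHits",   xHits,   "xHits[12]/i");
    tree.Branch("yHits",   yHits,   "yHits[12]/i");
    tree.Branch("tdcTime", tdcTime, "tdcTime[24]/F");

    // ... fill the arrays from the shared memory for each trigger ...
    tree.Fill();

    file.Write();
    file.Close();
}
```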


7.3. Back-end threads

The back-end of the data acquisition software facilitates the execution of multiple jobs in a prioritized fashion using a multi-threaded framework. This ensures exhaustive integration of data collection with live plotting and analysis. The threads have been implemented using the QThread class [17] offered by Qt, which provides the simple mechanism of 'Signals and Slots' for communication between the threads and the GUI; a minimal sketch of this structure is given after the list below. A total of four threads, as listed below, run in the background of the data acquisition process:

(1) Interrupt service thread (IST).
(2) Event thread.
(3) Noise rate monitor thread.
(4) Web display thread.
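The listing below is a condensed sketch of how one of these workers, modelled on the interrupt service thread, could be structured with QThread and the signals-and-slots mechanism. The VME access is hidden behind hypothetical helper functions (waitForVmeInterrupt, readVmeModules) that merely stand in for the actual bridge library calls; it is a sketch, not the software's real code.

```cpp
#include <QThread>
#include <QVector>

// Hypothetical helpers standing in for the PCI-VME bridge library calls;
// they are placeholders, not the actual API used by the DAQ software.
bool waitForVmeInterrupt(int& source);        // blocks until a TDC/scaler IRQ
QVector<quint32> readVmeModules(int source);  // reads the interrupting module

// Condensed sketch of an interrupt-service-style worker built on QThread,
// communicating with the rest of the program through signals and slots.
class InterruptServiceThread : public QThread
{
    Q_OBJECT

signals:
    void eventDataReady();    // connected to a slot of the Event thread / GUI
    void monitorDataReady();  // connected to the Noise rate monitor thread

public:
    void stop() { stopRequested_ = true; }

protected:
    void run() override
    {
        int source = 0;
        while (!stopRequested_) {
            if (!waitForVmeInterrupt(source))          // ack handled inside
                continue;
            const QVector<quint32> data = readVmeModules(source);
            appendToSharedMemory(source, data);        // circular heap buffer
            if (source == kTdcInterrupt)
                emit eventDataReady();
            else
                emit monitorDataReady();
        }
    }

private:
    enum { kTdcInterrupt = 0, kScalerInterrupt = 1 };
    void appendToSharedMemory(int source, const QVector<quint32>& data);
    volatile bool stopRequested_ = false;
};
```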

7.3.1. Interrupt service thread

The sequence of operations carried out by the interrupt service thread (IST) is illustrated in Fig. 9. Once the data acquisition process starts, the IST waits for a VME interrupt. On receiving an interrupt, it disables further interrupts, identifies the source of the interrupt, i.e., whether it was generated by the TDC or the scaler, and acknowledges it. The IST then reads the data from the appropriate VME modules and appends them to shared memory. Following this, the IST invokes the Event thread or the Noise rate monitor thread, depending on the type of the interrupt, re-enables interrupts and returns to the waiting state. The shared memory is a circular buffer allocated on the heap during the initialization phase of the program. Two shared memories, one for the event data and one for the noise rate data, are used.

Fig. 9. Sequence of operations executed by the Interrupt Service Thread in the data acquisition software.

7.3.2. Event and noise rate monitor thread

The Event thread and the Noise rate monitor thread are functionally identical but differ in the type of data they deal with. Fig. 10 describes the order of the tasks executed by each of these threads. The Event thread or the Noise rate monitor thread, on being initiated by the IST, appends the latest data from the respective shared memory to the corresponding ROOT file for storage as well as live plotting and analysis. Each of the threads also writes the respective data to a Javascript file periodically, according to the data update period set by the user, which is used for live data display. The data is displayed graphically using Flot libraries [18]. A Javascript-based data display is advantageous as it reduces the code overhead of the data acquisition software, and the same file can also be used for displaying the data in real time on a web server.

Fig. 10. Sequence of operations executed by each of the Event thread and the Noise rate monitor thread in the data acquisition software.

7.3.3. Web display thread

The Javascript files written by the Event thread and the Noise rate monitor thread are periodically transferred to a web server by the Web display thread, and the data is displayed graphically on the server using Flot libraries [18]. The data transfer period as well as the server address are specified by the user. The real-time data display for the prototype detector can be accessed at Ref. [19].

7.3.4. Thread prioritization and synchronization

The operating system schedules the execution of a thread depending on its priority. Fig. 11 depicts the multi-threaded architecture of the data acquisition software and the priorities assigned to the different threads. The IST performs a time-critical operation and should be scheduled as often as possible in order to ensure that the probability of missing an interrupt is minimal. Therefore, the IST has been assigned critical priority, i.e., it is scheduled most often. The data transfer from the shared memory to a file is not a time-critical operation and hence the Event thread and the Noise rate monitor thread are assigned normal priority, i.e., the default priority of the operating system. The transfer of the Javascript files to a web server for live data display can be safely stopped or executed later without hampering the process of data acquisition. Thus, the Web display thread has been assigned low priority, i.e., it is scheduled less often than the threads with normal priority.

Thread synchronization is of utmost importance in a multi-threaded program to prevent multiple threads from accessing the same object simultaneously, as this can lead to undesirable situations. In the case of the data acquisition program, for instance, while one thread is writing data from the shared memory to a file, another thread should not be able to access the same file for plotting. Such access conflicts are avoided by synchronizing the threads and protecting the data with Inter Process Communication (IPC) objects called mutexes. The QMutex class [20] from Qt has been used to implement the mutexes. A thread checks the status of the relevant mutex before accessing a particular object. If the same object is being accessed by another thread, the thread waits until the current job is finished and then proceeds with its own operation.
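A minimal sketch of this locking pattern is given below, using QMutexLocker to guard an illustrative shared buffer; the class and member names are assumptions made for the example, not the software's actual types.

```cpp
#include <QMutex>
#include <QMutexLocker>
#include <QVector>

// Illustrative locking pattern for a shared buffer: the producer (IST) and
// the consumer (Event thread) both take the same QMutex before touching the
// data, so their accesses can never interleave.
class SharedEventBuffer
{
public:
    void append(const QVector<quint32>& block)   // called from the IST
    {
        QMutexLocker locker(&mutex_);
        data_ += block;
    }

    QVector<quint32> takeAll()                   // called from the Event thread
    {
        QMutexLocker locker(&mutex_);
        QVector<quint32> out = data_;
        data_.clear();
        return out;
    }

private:
    QMutex mutex_;
    QVector<quint32> data_;
};
```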


Fig. 11. Prioritization of the threads in the multi-threaded architecture of the data acquisition software.

7.4. User facilities

The data acquisition software is equipped with several user-friendly features, as described below:

- Live plotting and analysis: The facility of live plotting and analysis of relevant parameters, like strip hit information, time spectrum, noise rates, etc., is extremely useful for debugging and commissioning the detector and the associated systems without interfering with the data acquisition process.
- Real-time data display: The graphical display of the data in real time provides the user with an instant visualization of the particle trajectory through the detector as well as an overview of the health of the detector. The web-based live data display enables remote monitoring of the status of the detector.
- Auto stop and restart: Once the data acquisition process is initiated by the user, it offers the facility of stopping after recording a certain number of events or after a certain period of time, either of the limits being specified by the user, and restarting. The whole sequence of operations can be executed and repeated multiple times without the need for any human intervention.
- Auto analysis: The data acquisition software is coupled with a ROOT-based analysis framework which analyzes the data files automatically and provides information regarding various detector parameters, like efficiency, multiplicity, strip hit profile, time resolution, noise rates, etc., as well as zenith angle and azimuthal angle distributions of cosmic ray muons. The analysis job is accomplished using PROOF [21], the parallel computing facility offered by ROOT (see the sketch after this list). While the data acquisition process is not running, the software can also be used for analyzing any data file in ROOT format, using the PROOF facility.
- Auto synchronization with server: The data acquisition software is also synchronized with a web server to which the raw and the analyzed data files are transferred automatically. This facility enables the user to access the run files and the analysis results remotely. The analysis summaries are displayed graphically on the server using TRevolution.js [22], the Javascript-based utility offered by ROOT for viewing the histograms stored in a ROOT file on a web server.
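As a rough illustration of how such an automated PROOF pass might be invoked, the fragment below opens a PROOF-Lite session and processes the run files with a selector. The tree name, file pattern and selector name are assumptions made for the example, not the framework's actual code.

```cpp
#include "TChain.h"
#include "TProof.h"

// Sketch of a PROOF-based analysis pass over the recorded run files.
void runAutoAnalysis()
{
    TProof::Open("lite://");                   // start a local PROOF-Lite session
    TChain chain("events");                    // tree name as in the storage sketch
    chain.Add("run_*.root");                   // all recorded run files
    chain.SetProof();                          // route Process() through PROOF
    chain.Process("RpcAnalysisSelector.C+");   // TSelector with the per-event logic
}
```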

8. Conclusion

A prototype of the ICAL detector has been set up to serve as a test bench to develop, validate and optimize different detector components. The characteristic features and the functionality of the constituent elements of the electronics and the data acquisition system of the prototype detector have been presented here. The front-end readout electronics is designed and developed in-house. The migration of the back-end data acquisition hardware from a CAMAC-based system to a VME-based system has enhanced the maximum trigger rate handling capability of the system from about 5 Hz to about 300 Hz. The supporting software has been developed using a number of high-end open-source tools and is equipped with several user-friendly attributes. The system has delivered an overall stable and satisfactory performance and has been useful in studying and characterizing the detector. The expertise gained on the hardware as well as the software fronts, during the evolving phase of the prototype system, has provided us with the necessary confidence to proceed toward building the data readout system for the final detector.

Acknowledgments

The authors would like to thank Sarika Bhide, Sonal Dhuldhaj, Darshana S. Gonji, Satyajit Jena, S.R. Joshi, K.K. Rao, L.V. Reddy and Noopur Srivastava for their valuable contributions to the work reported in this paper.

References

[1] INO Project Report, vol. 1, http://www.ino.tifr.res.in/ino/OpenReports/INOReport.pdf, 2006.
[2] M. Bhuyan, et al., Nuclear Instruments and Methods in Physics Research Section A 661 (2012) S68.
[3] A. Behere, et al., Nuclear Instruments and Methods in Physics Research Section A 602 (2009) 784.
[4] V.M. Datar, et al., Nuclear Instruments and Methods in Physics Research Section A 602 (2009) 744.
[5] S. Dasgupta, et al., Nuclear Instruments and Methods in Physics Research Section A 694 (2012) 126.
[6] M. Bhuyan, et al., Nuclear Instruments and Methods in Physics Research Section A 661 (2012) S64.
[7] CAEN S.p.A., Mod. V2718 Technical Information Manual, rev. 9, http://www.caen.it/csite/, 2009.
[8] CAEN S.p.A., Mod. V1495 Technical Information Manual, rev. 11, http://www.caen.it/csite/, 2012.
[9] CAEN S.p.A., Mod. V1190A Technical Information Manual, rev. 12, http://www.caen.it/csite/, 2012.
[10] CAEN S.p.A., Mod. V830 Technical Information Manual, rev. 4, http://www.caen.it/csite/, 2007.
[11] CAEN S.p.A., Mod. V792 Technical Information Manual, rev. 18, http://www.caen.it/csite/, 2010.
[12] Pico Technology, RH-02 Temperature and Humidity Data Logger, www.picotech.com/discontinued/RH-02.html, 2009.
[13] CAEN S.p.A., Mod. SY2527 User's Manual, rev. 16, http://www.caen.it/csite/, 2012.
[14] CAEN S.p.A., Mod. EASY3000 User's Manual, rev. 15, http://www.caen.it/csite/, 2012.
[15] Nokia Corporation, Qt: cross-platform application and UI framework, http://qt.nokia.com/, 2012.
[16] M. Bhuyan, et al., Nuclear Instruments and Methods in Physics Research Section A 661 (2012) S73.
[17] Nokia Corporation, Threads and QObjects, http://qt-project.org/doc/qt-4.8/threads-qobject.html, 2011.
[18] Google Code, Flot: attractive Javascript plotting for jQuery, http://code.google.com/p/flot/, 2011.
[19] India-based Neutrino Observatory, iDaq: Live Monitoring Interface of the TIFR 1 m × 1 m RPC Stack, http://www2.ino.tifr.res.in/iarchive/DAQDATA/plots/, 2012.
[20] Nokia Corporation, QMutex Class Reference, http://doc-snapshot.qt-project.org/4.8/qmutex.html, 2012.
[21] The ROOT Team, PROOF, http://root.cern.ch/drupal/content/proof, 2012.
[22] The ROOT Team, TRevolution.js, http://root.cern.ch/drupal/content/trevolutionjs, 2012.