
Nuclear Instruments and Methods in Physics Research A 351 (1994) 228-235


RD21: test of a silicon data-driven trigger processor

R. Dzhelyadin c,1, S. Erhan a,2, E.P. Hartouni b,3, M. Medinnis a,4, J.G. Zweizig a,5
a University of California, Los Angeles, USA
b University of Massachusetts, Amherst, MA, USA
c IHEP-Serpukhov, Protvino, Russian Federation

For the RD21 Collaboration [1]

Abstract

RD21 is a continuation of an R&D program to demonstrate the use of a planar silicon microvertex detector for triggering on the topology of heavy-flavor events in a forward hadron-collider experiment. The RD21 project builds upon the successful P238 test of such a detector run very close to the circulating beams at the SPS-Collider. The specific goal of the RD21 project discussed here was to interface the P238 silicon system to a data-driven processor capable of performing complex calculations in real time and test the viability of the COBEX heavy-flavor topology trigger. We describe the processor and report on some results from a May 1994 test run.

1. Introduction

Exploitation of the large B-production cross sections of hadron colliders requires a trigger capable of discerning the desired events from the much larger number of background events. In 1990, at the SPS-Collider, the P238 collaboration succeeded in demonstrating [2] that a silicon microvertex detector, with the planar geometry appropriate for a forward collider B experiment such as COBEX [3,4], could be run sufficiently close to the circulating beams to measure small forward angles. The next step towards demonstrating the viability of a topology trigger which uses silicon data was to interface the P238 silicon system to a data-driven processor capable of reconstructing tracks and making vertex tests in real time. This was the goal of RD21. RD21 was approved by the Detector R&D Committee at CERN to carry out this development by implementing a charm trigger in a fixed-target configuration in the North Hall in the 450 GeV H8 proton beam.

Fig. 1 shows the RD21 target-silicon layout presently installed in the H8 beam line. The first silicon plane in the P238 detector is replaced by two 200 μm-thick copper targets, separated by 1 cm. The upper and lower silicon systems in Fig. 1 were mounted in the P238 Roman pot configuration with the 200 μm-thick aluminum RF shields removed, thus allowing the full gap

1 E-mail: [email protected]
2 E-mail: [email protected]
3 E-mail: [email protected]
4 E-mail: [email protected]
5 E-mail: [email protected]

between the first active strips of upper and lower silicon detectors to be reduced to 2.0 mm.

1.1. Data-driven-processor overview

The data-driven processor used for RD21 is capable of performing complex, rapid computations on large quantities of data. It is general-purpose in the sense that it can be configured to perform almost any computationally-intensive online calculation, such as a topology or muon trigger. The processor can be characterized as follows:
- No centralized control: There is no central sequencer in the system. The processor consists of an array of function modules, in which the arrival of data at a module initiates an operation. Data are clocked through the system using a common clock for all modules. Many processing units work simultaneously on a given problem.
- Parallel: Several data streams are processed in parallel. In COBEX, track finding in the silicon detectors would be done in parallel in 8 identical processors (4 quadrants x 2 views).
- Pipelined: Several events are processed simultaneously, but at different stages of the algorithm. For example, one event may be in the track-finding subroutines at the same time that other events are undergoing duplicate rejection, vertex fitting, etc.
- Expandable: The processor may be enlarged to solve an arbitrarily complex problem by the addition of more modules. The absence of shared resources, such as central memory or common I/O paths, means that bottlenecks can usually be avoided. A judicious increase in the number of processor modules yields a proportional increase in processor speed.
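These properties can be pictured with a minimal sketch, in Python rather than the actual Nevis hardware: there is no central sequencer, each module fires when data sits in its input register, and one common clock advances the whole array, so several events are in flight at different stages simultaneously. The module operations below are arbitrary placeholders.

    # Hypothetical dataflow sketch; not the real Nevis module set.
    class Module:
        def __init__(self, op):
            self.op = op        # the module's fixed function
            self.inp = None     # input register
            self.out = None     # output register

    def clock(chain, source, sink):
        """One common clock cycle for the whole module array."""
        # Output registers drive the next input registers (downstream first,
        # so a full register "holds" upstream data instead of losing it).
        if chain[-1].out is not None:
            sink.append(chain[-1].out)
            chain[-1].out = None
        for i in range(len(chain) - 1, 0, -1):
            if chain[i - 1].out is not None and chain[i].inp is None:
                chain[i].inp = chain[i - 1].out
                chain[i - 1].out = None
        if source and chain[0].inp is None:
            chain[0].inp = source.pop(0)
        # The arrival of data, not a program counter, triggers each operation.
        for m in chain:
            if m.inp is not None and m.out is None:
                m.out = m.op(m.inp)
                m.inp = None

    chain = [Module(lambda x: x + 1), Module(lambda x: 2 * x)]
    events, results = [1, 2, 3], []
    for _ in range(6):
        clock(chain, events, results)
    print(results)   # [4, 6, 8] -- one result per cycle once the pipe fills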



Fig. 1. Side view of the RD21 installation in the H8 beam line. The upper and lower silicon systems are mounted in Roman pots, allowing them to be positioned as close as 0.5 mm from the beam. Each of the five silicon-strip detector planes consists of four 4.5 cm square single-sided quadrants with 50 μm readout pitch, with x- and y-planes separated longitudinally by 2 mm. The planes are 3.8 cm apart. Two 200 μm-thick copper foil targets are separated by 1 cm, with the closest 6 cm upstream of the first silicon plane. The dark horizontal elements in the figure are the vacuum bulkheads. The bellows, which allow the pots to be vertically displaced during beam manipulations, are represented by the zig-zag lines.

2. Apparatus

2.1. Silicon microvertex detectors and readout

The RD21 detector configuration sketched in Fig. 1 consists of 40 silicon detectors, each containing 896 active strips spaced 50 μm apart. They are configured in 5 planes, with 4 quadrants, each with x- (horizontal) and y- (vertical) views back-to-back. For the P238 (and RD21) configuration, the LBL-designed SVX-D chips [5] were used for charge collection, preamplification and readout. Although these chips would be inappropriate for a high-rate collider application, they were the best available at the time this system was built and were adequate for both the P238 and RD21 tests. A special readout system was designed and built for the RD21 test which exploits the sparse-data readout mode of the SVX-D chips. The system consists of two board types:
- Ten "Read-Out-Control modules", each of which reads out and digitizes the pulse heights for the four silicon detectors of one half-plane (x and y, left and right quadrants).
- A "Read-Out-Link module" which reformats the data generated from the Read-Out-Control modules and interfaces to the data-driven processor.

If it were necessary, the ten Read-Out-Control modules could have been configured to read all 40 detectors in parallel, but the modest bandwidth requirements of RD21 allowed serial readout on one bus by way of a single Read-Out-Link module.
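The sparse-data readout can be pictured with a short sketch (hypothetical Python and data format, for illustration only): each Read-Out-Control module reports just the addresses and digitized pulse heights of hit strips in its half-plane, and the single Read-Out-Link module drains the ten modules in turn over one bus.

    # Hypothetical sketch of sparsified readout; formats are assumptions.
    def sparse_scan(pulse_heights, threshold):
        """Return (strip address, ADC value) pairs for hit strips only."""
        return [(strip, adc) for strip, adc in enumerate(pulse_heights)
                if adc > threshold]

    def serial_readout(half_planes, threshold=10):
        """One event: drain the ten Read-Out-Control modules serially."""
        event = []
        for roc_id, heights in enumerate(half_planes):  # 10 ROC modules
            for strip, adc in sparse_scan(heights, threshold):
                event.append((roc_id, strip, adc))      # one shared bus
        return event

    # e.g. serial_readout([[0, 0, 42, 0], [7, 0, 0, 55]]) -> tagged hit list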

2.2. Data-driven trigger processor

2.2.1. Introduction

The RD21 trigger algorithm is a modification (see Section 2.2.2) of the COBEX topological trigger [3]. It is presently implemented in the Nevis data-driven processor architecture [6]. In this scheme, algorithms are defined by the functional operation of the chosen modules, their interconnections, the contents of any on-board memories, and the name and control tags which accompany every data word. The name and control tags influence the function performed by the receiving processor module. The functional elements are roughly equivalent to the instructions of a central processor (memory loads, arithmetic instructions, memory lookup, etc.). Each element is constructed on a single printed circuit board using small-scale-integration ECL logic components. Where possible, modules have a uniform input, output and control structure with one or two input ports and one output port. Each output register is followed by a normally transparent latch. Module functions can be modified to some extent by patches and jumpers. The RD21 configuration uses 18 different module types.

In normal operation, data are latched into input registers by a central clock. Latched data typically undergo an operation during a single clock cycle, the result of which is latched into the output register on the next clock, becoming available on the output cable to downstream modules. If several steps are necessary to perform an operation, the intermediate results from each step are latched into internal registers before finally advancing to the output register. When a module is not ready to receive data from an upstream module, it generates a "hold" which causes data from the upstream module to be held in its (otherwise transparent) output latch. The "hold" can propagate upstream, ensuring that data are not lost. A facility also exists to abort execution of a subroutine when an error condition, such as insufficient buffer space, is detected.

The processor modules plug into a crate with a simple backplane bus used for diagnostics and initialization. All registers, memory locations and counters are accessible. The entire processor configuration, or a subset of it, may be run in a single-step mode with non-destructive interrogation of the contents of all registers after each clock cycle for diagnostic purposes.
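As an illustration of how the tags steer processing (the encoding below is invented, not the Nevis format), a data word can be modeled as a value plus name and control tags, with the receiving module dispatching on the tags rather than on instructions from a sequencer:

    # Invented tag encoding, for illustration only.
    from collections import namedtuple

    Word = namedtuple("Word", "name control value")

    def lookup_module(word, table):
        """A memory-lookup element: behavior depends on the incoming tags."""
        if word.control == "END_BLOCK":   # control tags mark event structure
            return word                   # delimiters pass through unchanged
        if word.name == "STRIP":          # name tag selects the lookup
            return word._replace(value=table[word.value])
        return word                       # other named words are untouched

    strip_to_cm = {5: 0.025, 9: 0.045}    # e.g. strip number -> position (cm)
    print(lookup_module(Word("STRIP", "DATA", 9), strip_to_cm).value)  # 0.045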


2.2.2. The trigger algorithm and implementation

The RD21 trigger algorithm is a somewhat simpler fixed-target version of the more general algorithm developed for the SPS-Collider. It exploits the facts that most tracks pass through all five silicon detector planes, and that the beam-target intersection effectively presents a point-like source:
- A 4-point line-finder, which requires that tracks have a hit in the first plane, was used in place of the more general 3-point line-finder of the collider algorithm.
- Two subroutines, designed and emulated (and, in one case, constructed) for the collider processor configuration, were not used: the Duplicate Rejector and the Primary Vertex Approximator.
- The χ²-minimization was performed in two (x and y), instead of three, dimensions.

A block diagram of the RD21 implementation is shown in Fig. 2. It is organized as three independent "subroutines": the Hot-Strip Remover, the 4-Point Line-Finder and the χ²-Calculator. The subroutines are isolated by tiers of buffers which allow each subroutine to process data from different events, or from different parts of a single event, simultaneously. The buffers provide data storage, thus smoothing out the processing load and keeping all subroutines busy. They also ensure event-synchronization and avoid mixing data from successive events.
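A toy view of this isolation (plain Python queues standing in for the Block Buffer modules; the stage functions are placeholders): each subroutine consumes whole events from its input tier, so the three stages can each be busy with a different event without mixing data across event boundaries.

    # Queues stand in for Block Buffer tiers; stage bodies are placeholders.
    from collections import deque

    def step(stage, src, dst):
        """Advance one subroutine by one whole event, if one is waiting."""
        if src:
            event = src.popleft()          # events stay whole: no mixing
            result = stage(event)
            if result is not None:         # None = rejected at this tier
                dst.append(result)

    tier1 = deque([{"id": 1}, {"id": 2}, {"id": 3}])
    tier2, tier3, out = deque(), deque(), deque()

    while tier1 or tier2 or tier3:
        # One pass: all three subroutines work on different events at once.
        step(lambda e: e, tier3, out)      # chi2-Calculator slot
        step(lambda e: e, tier2, tier3)    # Line-Finder slot
        step(lambda e: e, tier1, tier2)    # Hot-Strip Remover slot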


2.2.2.1. Control structures. The buffer tiers are controlled by data propagating down the control stream (on the left in Fig. 2). Each subroutine inserts a trigger decision code into this stream which can be used to determine further processing. The first control tier receives the result of an event integrity check (correct number of data blocks, correct data tags, etc.) and allows only those events with no readout problems to pass into the Line-Finder subroutine. Defective events are removed from the data stream. The second tier controls the input of the χ²-Calculator. Only events with at least 4 tracks found in both x- and y-views are passed into this subroutine. Independent of track multiplicity, all raw data as well as slopes and intercepts of found tracks are passed directly to the third control tier, with the rejected events flagged as such (the processor is programmed such that these events are also written to tape for diagnostic purposes). The last control tier interfaces the data-driven processor with the Nevis-Transport-System bus [7,8]. For the RD21 test, all events passing the first-level integrity checks are written to tape (including raw data and calculation results), independent of the trigger decisions.

2.2.2.2. Hot-Strip Remover subroutine. All input data are directed through this subroutine. It serves three functions, sketched below:
- Sort the detector data so that hit strips from the same view and quadrant are fed to the Line-Finder sequentially, and reroute the analog information into a separate data stream.
- Remove strip numbers which were previously determined to be unusable from the incoming data stream.
- Check that the block structure of events is correct. Events found to have incorrectly formatted data blocks (roughly one event in one thousand) are eliminated from the data stream at this point.
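A minimal sketch of these functions (illustrative Python; the hit-tuple layout is an assumption, and the block-structure check is reduced to a simple count test):

    # Illustrative only; tuple layout and block check are assumptions.
    def hot_strip_remover(hits, hot_strips, expected_blocks):
        """hits: (view, quadrant, strip, adc) tuples for one event.
        Returns (digital stream, analog stream), or None if malformed."""
        if len({(v, q) for v, q, s, a in hits}) > expected_blocks:
            return None                    # bad block structure: drop event
        kept = sorted(h for h in hits      # group by view, then quadrant
                      if (h[0], h[1], h[2]) not in hot_strips)
        digital = [(v, q, s) for v, q, s, a in kept]   # to the Line-Finder
        analog = [a for v, q, s, a in kept]            # re-routed separately
        return digital, analog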


Fig. 2. A block diagram of the RD21 trigger processor. It consists of three independent "subroutines" (Hot-Strip Remover, 4-Point Line-Finder and χ²-Calculator) isolated by tiers of buffers.

2.2.2.3. 4-Point Line-Finder subroutine. Strip addresses from the first four detector planes are passed into this subroutine, where line-finding is performed separately in the x-z and y-z projections. First, strips are grouped into clusters, after which all combinations of clusters from the four planes are checked. When clusters from the two inner detectors fall within 50 μm of the line joining the outer detector strips, they are kept as the defining points of tracks. Slopes and intercepts of found tracks are calculated at the position of the nearest target (z = 0) from clusters in the first and fourth planes. Only tracks which point to the target area and have x- and y-intercepts less than 1 mm at z = 0 are kept. The trigger decision from this subroutine requires that at least 4 valid tracks are found in both x- and y-views.
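A sketch of this logic in one projection (illustrative Python; the plane z-positions are approximations read off Fig. 1, and clustering is assumed to have been done already):

    # One projection (x-z or y-z); z-positions are approximations from Fig. 1.
    Z = [6.0, 9.8, 13.6, 17.4]      # cm from the nearest target (z = 0)
    ROAD = 50e-4                    # 50 um road around the outer-point line
    MAX_B = 0.1                     # keep tracks within 1 mm of the target

    def find_lines(clusters):
        """clusters[i]: cluster positions (cm) on plane i; returns tracks."""
        tracks = []
        for c0 in clusters[0]:                       # hit in plane 1 required
            for c3 in clusters[3]:
                slope = (c3 - c0) / (Z[3] - Z[0])    # line through outer hits
                intercept = c0 - slope * Z[0]        # extrapolated to z = 0
                if abs(intercept) > MAX_B:
                    continue                         # does not point at target
                if all(any(abs(c - (slope * Z[i] + intercept)) < ROAD
                           for c in clusters[i]) for i in (1, 2)):
                    tracks.append((slope, intercept))
        return tracks

    # The subroutine's trigger bit: at least 4 such tracks in each view.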


2.2.2.4. χ²-Calculator subroutine. Events passing the Line-Finder trigger cuts are directed into this subroutine. The algorithm is a simplified version of the COBEX collider algorithm, in that the primary event vertex is assumed to occur at z = 0. The vertex fit then degenerates into a weighted sum of impact parameters in each view, and the vertex χ² is the weighted sum of impact parameters of all tracks to the average x, y point at z = 0. The track weights are determined from slopes using an average momentum vs. angle curve found from Monte Carlo simulation. There are three iterations of the χ² calculation. In each iteration, except the first, the x- or y-track projection which contributed the most to the previous χ² calculation is removed. The χ² from the third iteration is compared to a predetermined cut value, which is a function of the number of found tracks. Events with a large χ² are flagged as charm candidates.
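A sketch of the iteration logic (illustrative Python; the weight function is a stand-in for the Monte Carlo momentum-vs-angle table, and the two views are folded into one list of projections):

    # Illustrative; real weights come from a momentum-vs-angle lookup table.
    def vertex_chi2(projections, weight=lambda slope: 1.0 / (1.0 + slope**2)):
        """projections: (slope, intercept-at-z=0) pairs from both views."""
        live = list(projections)
        for iteration in range(3):                   # three iterations
            w = [weight(s) for s, b in live]
            v = sum(wi * b for wi, (s, b) in zip(w, live)) / sum(w)
            contrib = [wi * (b - v) ** 2 for wi, (s, b) in zip(w, live)]
            chi2 = sum(contrib)
            if iteration < 2:                        # except after the last:
                live.pop(contrib.index(max(contrib)))  # drop worst projection
        return chi2    # compared to a cut that depends on the track count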

2.3. Emulation and test software

The construction and maintenance of a highly complex processor configuration can only be accomplished, with acceptable investments of manpower and time, by using emulation software run on a conventional computer. Such an emulator, consisting of a library of FORTRAN subroutines, each of which performs a register-level emulation of one type of processor module, is available for the Nevis processor system. A processor configuration is completely specified by a "Configuration File" which contains a list of the modules used, their types, slot addresses, the state of any jumpers and on-board patches, the contents of all memories in the processor system, register initialization values and all cabling information. The Configuration File is used for initializing the emulator and processor hardware. After initialization, the main emulator subroutine calls all the relevant module subroutines for each (pseudo) clock cycle, and thus determines the processor state at the beginning of the next cycle. In a debug mode, the state of the hardware processor can be determined at any clock cycle by reading the contents of all registers and comparing them with the corresponding emulator state. This procedure permits rapid isolation of bugs and non-functioning or "flaky" modules.
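The debug-mode comparison can be pictured as follows (illustrative Python; the emulator and hardware interfaces named here are assumptions, not the actual FORTRAN emulator API):

    # Interface names are assumptions; the real emulator is FORTRAN.
    def single_step_compare(emulator, hardware, n_cycles):
        """Single-step both machines and return the first (cycle, register)
        where they disagree, or None if they track for n_cycles."""
        for cycle in range(n_cycles):
            emulator.step()                       # one pseudo clock cycle
            hardware.step()                       # one single-stepped clock
            predicted = emulator.registers()      # register-level state
            measured = hardware.read_registers()  # non-destructive readback
            for reg, value in predicted.items():
                if measured.get(reg) != value:
                    return cycle, reg             # points at a faulty module
        return None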

3. 1994 test run

The combined detector-processor system was tested in the H8 beam line over a 4-day period starting May 12, 1994. The beam energy was 450 GeV and the intensity varied between 5 x 10^5 and 10^6 protons per pulse.

The trigger was a coincidence between two scintillators positioned downstream of the Roman pot, above and below the beam and approximately covering the silicon detector areas. A third scintillator, positioned upstream of the Roman pot but below the beam, provided a partial veto against upstream interactions.

Fig. 3. Hit distributions for x- and y-view detectors from the bottom half of detector plane 1: (a) left y-view; (b) left x-view; (c) right y-view; (d) right x-view, in units of half channels. The highest channel numbers are closer to the beam.

The counting rate of this three-counter system was 0.37% of the beam rate with the target in place. The target-out rate was 0.14% of the beam rate.

When running, the SVX chips repeatedly perform a sample-and-hold cycle. When a trigger is detected within the 10 μs gate time, the channel addresses of hit strips are read out and the associated pulse heights are digitized. The duty-cycle of the sample-and-hold cycle introduces a dead-time of about 70%.

Sparsified SVX data from the ten Read-Out-Control modules were read sequentially by way of a single Read-Out-Link module, which added name and control bits and transferred the results into a Block Buffer module in the Nevis Transport System. The data were then routed to the trigger processor.

The trigger processor was configured to find tracks and compute vertex χ²s. It did no event selection. Data from all events were written to tape, including the results of processor calculations. The system throughput was limited by the single readout path from the detector to the processor to a maximum of about 300 events per spill.

The detector-processor system functioned for a period of


42 hours, during which 1.2 x 10^6 events were written to tape. Fig. 3 shows the hit distributions from the x- and y-view detectors in the bottom half of the first plane. Since the beam was not well centered horizontally, the numbers of hits in the left and right quadrants are not equal. The distributions are smooth and the numbers of hit strips in x- and y-views of the same quadrants agree to within 3% (note that the outer SVX chip in the right x-view was not functioning).

With the exception of two detectors, the hit efficiencies were greater than 90%, with the majority greater than 95%. The display of a typical high-multiplicity event is shown in Fig. 4. All clusters which survive the processor Hot-Strip Remover are shown. The tracks shown were found by the offline track-finding algorithm, which differs from the online algorithm primarily in that all 3-point tracks in all five planes are found, while the online algorithm finds only 4-point tracks in the first four planes. While some noise hits can be seen, this event, like most, is clean and allows for accurate track-finding.

In an offline analysis of the data, primary vertices are found with an iterative procedure in which a fit is made to all tracks by minimizing a weighted sum of the squares of impact parameters. If the χ² contribution of the track making the largest contribution to the fit χ² exceeds 6, the track is discarded and the fit is repeated, until no track contributes more than 6 units to the χ². Weights are a function of track slope and extrapolation distance to the target. The resulting primary vertex distributions are shown in Fig. 5. Roughly half of all events have vertices in the target region. The processor algorithm assumes that interactions took place within about 300 μm of x = y = 0.
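The iterative cut can be sketched as follows (illustrative Python, in a single projection and with the vertex z fixed; the real fit also determines z, and its weights depend on track slope and extrapolation distance):

    # One projection, fixed z; weights are placeholders for the real ones.
    def iterative_vertex_fit(impacts, weights, max_contrib=6.0):
        """impacts: track impact points (cm) at the assumed vertex z."""
        pts = list(zip(impacts, weights))
        while True:
            v = sum(w * x for x, w in pts) / sum(w for _, w in pts)
            contrib = [w * (x - v) ** 2 for x, w in pts]
            worst = max(range(len(pts)), key=contrib.__getitem__)
            if contrib[worst] <= max_contrib or len(pts) <= 2:
                return v, sum(contrib)           # converged vertex and chi2
            pts.pop(worst)                       # discard the worst track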


Fig. 4. Display of the top and side views of a typical high-multiplicity event. Each silicon detector is represented by a line (horizontal in the x-view, vertical in the y-view). All clusters are shown as perpendicular lines, whose length is proportional to the cluster pulse height and whose width is proportional to the cluster width. In the x-view (horizontal), the solid lines are tracks in the upper detectors and the dashed lines are tracks in the lower detectors. In the y-view (vertical), the solid lines are tracks in the left-side detectors and the dashed lines are tracks in the right-side detectors.

Fig. 5. Primary vertex distributions: (a) vertex x-position (horizontal); (b) vertex y-position (vertical); (c) vertex z-position (longitudinal). Clear signals are seen at the two target positions.


Fig. 6. The number of tracks found by the processor for events with primary vertices in the target region and at distances less than 200 μm in y and 300 μm in x.

Since, at the time of the data-taking, the beam was wider than this (the beam can in principle be made smaller), we selected those events with vertices in the target area and at distances less than 200 μm in y and 300 μm in x to check processor performance. The distribution of the number of tracks found by the processor for these events is shown in Fig. 6. The average number of found tracks is about 9.

"Box" plots of the number of tracks found in the processor vs. the number of tracks found by the offline simulator are shown in Fig. 7. The offline simulation is seen to agree well with the processor. This good agreement also extends to the track parameter estimates, as illustrated in Fig. 8. Fig. 8a is the distribution of differences in impact parameter (track intercept) between the offline and processor calculations (σ ≈ 3 μm). Fig. 8b is the distribution of the differences in slope estimates between the two calculations (σ ≈ 0.026 mrad). These (small) differences are due to the precision of the processor hardware, and do not contribute significantly to the impact parameter resolution (in a future system, they could be made even smaller, if so desired).

To demonstrate that the results of the calculation are sensitive to track origins, Fig. 9 shows the distribution of χ² values calculated by the processor after three iterations, for events from the first and second targets. The dark histogram is for those events with primary vertices in the upstream target and the light histogram is for events with primary vertices in the downstream target. Since the processor algorithm was tuned for interactions at z = 0, interactions in the upstream target result in tracks with larger apparent impact parameters and thus considerably higher χ²s.

A longer run than we had in 1994 would allow us to isolate a sample of charm-like (topological) events and use them to measure the charm enhancement of the trigger. Monte Carlo estimates indicate that a charm enhancement in the range of 30-40 could be obtained, while maintaining a charm efficiency of ~20%.


Fig. 7. The number of tracks found in the processor vs. the number of tracks found in the offline (FORTRAN) simulator: (a) x-view tracks; (b) y-view tracks. A smaller number of tracks is found in the y-view since one y-view silicon detector had poor efficiency. This is correctly accounted for in the simulation.

4. Rate capability and extrapolation to the LHC

The average processing time per event was about 1500 clock cycles, and the total latency was well balanced between the three subroutines, with about 500 clock cycles each. Since the three subroutines were pipelined, the processor produced a trigger decision about every 500 clock cycles. The RD21 data-driven processor operated with a 20 MHz clock during the beam tests, but could operate at 40 MHz with some additional tuning. Such a processor could handle an input event rate of 80 kHz.
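The rate arithmetic behind these numbers, spelled out (the 1 MHz figure anticipates the LHC extrapolation below):

    # A decision every ~500 cycles at the tuned 40 MHz clock.
    CLOCK_HZ = 40e6
    CYCLES_PER_DECISION = 500
    rate_hz = CLOCK_HZ / CYCLES_PER_DECISION   # 80,000 events/s = 80 kHz
    print(1e6 / rate_hz)                       # ~12.5x short of a 1 MHz rate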



Fig. 8. (a) Differences in impact parameter between online (processor) and offline (software) calculations. The discrepancies seen, with σ ≈ 3 μm, are due to the limited precision of the processor. (b) Differences in slope estimates between processor and offline calculations (σ ≈ 0.026 mrad).

Fig. 9. The χ² per degree-of-freedom distribution (arbitrary units) of events with primary vertices in the downstream (light shading) and upstream (dark shading) targets. The χ² is calculated in the processor, in the third iteration, after the two tracks with the largest χ²-contributions are discarded.

In extrapolating this result to COBEX at the LHC, it should be kept in mind that the LHC trigger algorithm is more complex, since the detector is longer and the algorithm for a line-source [3] must be used. The expected COBEX Level-1 trigger rate at the LHC is about 1 MHz. Therefore, the decision rate of the existing data-driven trigger processor must be increased by about a factor of 12, if the same algorithm were used. In principle, this could be accomplished by going to a more parallel structure and dividing the pipeline into smaller steps. For example, the 500 clock-cycle latency of the hot-strip removal subroutine could be drastically reduced by going from the single data path of RD21 to one data path per detector. As another example, the latency of the Line-Finder subroutine could be reduced by close to a factor of 8 by building 8 identical units in place of the single Line-Finder of RD21: one for each view and quadrant. A further performance improvement could be achieved by pipelining multiple events into the same subroutine.

The average latency of the χ²-Calculator is well matched to the first two subroutines in the RD21 implementation; therefore, no further optimization was needed: all three iterations of the χ²-calculation were performed by the same processor modules. These three iterations could be divided into three pipelined subroutines, each of which works on a different event simultaneously. This does not decrease the

total latency, but does increase the average throughput. Further optimization could be obtained by carrying out the x- and y-view χ²-calculations in parallel. Thus, a 1 MHz input rate seems attainable with the Nevis data-driven processor. However, since most of the gain comes from increasing the number of parallel computation paths, the module count would increase by more than an order of magnitude. Moreover, the larger number of planes in the LHC COBEX detector could only be processed at the required rate by further increasing the number of parallel line-finding subroutines to some 14 triplet finders per view and quadrant. The resulting system would be unmanageable if only existing modules were used. However, it is possible to redesign the system by expanding the function library and adding more highly integrated modules based on ASIC (Application-Specific Integrated Circuit), FPGA (Field-Programmable Gate Array) and DSP (Digital Signal Processor) techniques, while retaining the essential architectural features of the present processor. Using an implementation with one or more of these technologies could decrease the number of board types and the overall board count, thus reducing the complexity of the hardware configuration to a reasonable level.


5. Conclusions

We have demonstrated that the processor-detector system can be made to work together smoothly and that the processor line-finding efficiency and parameter estimates are well understood. Operation at the LHC, however, will require a large increase in processing power, even if the processor is moved from the Level-1 to the Level-2 trigger. The simplicity of the Nevis functional elements allows highly parallel systems to be built, capable of performing some 10^10 operations per second, while maintaining a degree of flexibility not available in a hardwired processor. However, in an LHC application, continued use of the present hardware would lead to an overly large and unmanageable system. Thus, we envisage a future data-driven system based on modern high-density logic elements.

Acknowledgement

The UCLA group is greatly indebted to Bruce Knapp and Bill Sippach for considerable help and support.


References

[1] R. Dzhelyadin 5, S. Erhan 1, E. Hartouni 3, W. Hofmann 4, C. Joseph 2, M. Kreisler 3, M. Medinnis 1, P.E. Schlein 1, M.-T. Tran 2, J.G. Zweizig 1; 1 University of California, Los Angeles, USA; 2 Lausanne University, Switzerland; 3 University of Massachusetts, Amherst, USA; 4 MPI-Heidelberg, Germany; 5 IHEP-Serpukhov, Protvino, Russia.
[2] J. Ellett et al. (P238 Collaboration), Nucl. Instr. and Meth. A 317 (1992) 28.
[3] S. Erhan et al. (COBEX Collaboration), Nucl. Instr. and Meth. A 333 (1993) 101.
[4] S. Erhan, M. Medinnis and J. Zweizig, these Proceedings (2nd Int. Workshop on B-Physics at Hadron Machines, Le Mont Saint Michel, France, 1994), Nucl. Instr. and Meth. A 351 (1994) 132.
[5] S.A. Kleinfelder et al., IEEE Trans. Nucl. Sci. NS-35 (1989) 171.
[6] G. Benson et al., Proc. 1978 Isabelle Summer Workshop, BNL, July 17-28, 1978; W. Sippach et al., IEEE Trans. Nucl. Sci. NS-27 (1980) 578; W. Sippach, Proc. Workshop on Triggering, Data Acquisition and Computing for High Energy/High Luminosity Hadron-Hadron Colliders, Fermilab, Nov. 11-14, 1985; Y.B. Hsiung et al., Nucl. Instr. and Meth. A 245 (1986) 338; E.P. Hartouni et al., IEEE Trans. Nucl. Sci. NS-36 (1989) 1480; B.C. Knapp, Nucl. Instr. and Meth. A 289 (1990) 561.
[7] J.A. Crittenden et al., IEEE Trans. Nucl. Sci. NS-31 (1984) 1028.
[8] B. Stern, Ph.D. thesis, A Search for Charmed Particles in 15-28 GeV Neutron-Proton Interactions, Columbia University, Nevis Laboratories, Nevis-266 (1988).
