Nuclear Instruments and Methods in Physics Research A247 (1986) 1-7
North-Holland, Amsterdam
Section I. Overview of existing systems
ACCELERATOR COMPUTER CONTROLS: PAST, PRESENT, AND FUTURE *

Melvin R. STORM

Fermilab, Batavia, Illinois 60510, USA
A survey is made of various computer control systems from their inception around 1963 until the present. Initially such systems began with data collection and analysis in an effort to guide the conduct of experiments done on Van de Graaffs; however, it did not take long to realize the usefulness of building fully automatic accelerator controls. The size and complexity of today's high-energy accelerators have advanced to the point where computer control is essential. A brief history of the 16-year evolution of control systems at Fermilab is presented. Some attention is devoted to hardware networks, but the principal discussion centers on software systems.
1. Introduction

The extent to which digital computer control has become an integral part of accelerator operations may erroneously suggest that it has always been that way. Yet we all know that high-energy accelerators predate the modern digital computer and, therefore, could not always have been operated automatically with the aid of a computer. Control by analog computers was attempted in the late 50s and early 60s, typified by such specialized controllers as the TRITEC and ARPAC used in tuning the Bevatron at Berkeley [1]. Earlier, in the 40s, cyclotrons were operated manually from awkward, oversized control panels. Of course, for such prehistoric accelerator control models, one must rely on information dredged out of scant literature, or view relics in museums, for any helpful picture of the past. The image I gather is that of an octopus-like operator with eight long arms extending from his shoulders, each actively turning knobs, pushing buttons, or flipping switches, while watching row upon row of display meters. Over the years operators became sufficiently adept with their panels that when computers came about they often responded to the suggestion of automatic control with, "Who needs computers on-line?" Yet, in a relatively short time, the effect of computers on data analysis and control systems has been enormous.

The question of whether this accelerator or that laboratory was the pioneer in computer control systems would doubtless open up more debate than the subject warrants.
* Work supported by Universities Research Association under contract with the US Department of Energy.
It is irrelevant, since today's advancements have made us all ardent proponents of the newest and most colorful systems imaginable. You may well ask yourself, "Why look back anyway?" There are a number of reasons, but such historical analysis certainly helps recapture the old-time intimacy each operator, system designer, and user had with the computer control system before sophistication and specialization depersonalized controls. As an illustration of how far we at Fermilab have drifted from that intimacy, consider that in 1970 about a dozen hardware engineers and software designers formed the entire computer controls effort; 15 years later we have 150 individual user accounts on the VAX development system networked to the central control. In addition, several turnkey accounts available on an open-shop basis make it conceivable that well over a hundred people could affect the control system at any given moment. It is no longer possible to huddle all relevant hands around a single control console or the computer front panel and make things happen.

Digital computer control of a high-energy accelerator continues to be a complex, valuable application of computers, with new areas continually being developed. Every system keeps expanding beyond the previous one, and rarely is there an avid movement toward simplification. Somewhere between the thousands of megabytes of disk storage and several gigabytes of memory we hope to find the ultimate system of the future. A wealth of control experience has been accumulated from other areas such as industrial control systems, space exploration, and the nuclear reactor field, yet there is more to learn and improved techniques are needed. We see hardware miniaturization continuing unabated, but still computer rooms keep getting larger. Inwardly, though, software people keep hoping that the hardware explosion might cease just long enough for software development to catch up. But there is little chance of that happening as long as hardware costs continue to decrease.
2. Problem areas

There were hurdles to overcome before digital computers would be suitable for accelerator control. In part, credit for eliminating some obstacles must go to the process control environment and the chemical plants that were well on their way toward meeting the demand for better measuring instruments [2]. Indeed, the kinds of tolerances needed left plenty of room for the development of smart analog-to-digital devices. The increased accuracy of these measuring devices made computer control very attractive. A computer is not capable of distinguishing whether an instrument is, in fact, measuring exactly what it is supposed to and not something else. Without reliable measurements, a general mistrust of computers might have developed and the move away from manual control methods been severely delayed. Instead, accurate analytical instruments accelerated the way toward computer control.

On-line computers operating at microsecond speeds made sense only if they had to handle a substantial number of monitoring devices from all parts of the accelerator. Connecting such devices would require designing compatible controller interfaces that matched the input formats of the computer. There would come a time when the electronics engineer would be viewed with enhanced status. Mechanical engineers had long been taken for granted as sanctified members of a scientific team engaged in building an accelerator. Now the electronics professional came to be recognized, though more slowly than his mechanical counterpart. I am reminded of a statement attributed to William Brobeck, one such fine mechanical engineer, who expressed the rivalry in a proverbial way: "Never do anything electrically if you can do it mechanically."

A third area of concern in accelerator operation had to do with timing. Unlike other real-time process control functions that may not necessarily suffer from severe time constraints, accelerator functions require very precise timing. An inappropriate response to a critical time event can make the difference between successful operation and complete failure. Therefore, even at the very beginning, it was recognized that a computer with a good external priority interrupt scheme was highly desirable. The ability to break into a computer's routine processing and service critical events quickly and efficiently is paramount. Computer manufacturers recognized the need and soon offered several schemes from which one could choose.
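To make the priority-interrupt requirement concrete, the fragment below is a minimal software model, not any vendor's actual scheme; the event names, their priority ordering, and the polling dispatcher are all invented for illustration.

```c
#include <stdio.h>

/* Hypothetical events, ordered by priority: a beam abort must
 * pre-empt routine housekeeping.  This models in software what the
 * external priority-interrupt hardware of the era provided. */
enum event { EV_BEAM_ABORT, EV_TIMING_PULSE, EV_DATA_READY, EV_CONSOLE_KEY, N_EVENTS };

static volatile int pending[N_EVENTS];   /* flags set by (simulated) interrupts */

static void service(enum event ev)
{
    static const char *name[] = { "beam abort", "timing pulse",
                                  "data ready", "console key" };
    printf("servicing %s\n", name[ev]);
}

/* Scan from the highest priority down and service the first pending
 * event; restart the scan after each one so a newly raised critical
 * event breaks into routine processing, as the text describes. */
static void dispatch(void)
{
    for (int ev = 0; ev < N_EVENTS; ev++) {
        if (pending[ev]) {
            pending[ev] = 0;
            service((enum event)ev);
            ev = -1;                     /* rescan from the top */
        }
    }
}

int main(void)
{
    pending[EV_DATA_READY] = 1;
    pending[EV_BEAM_ABORT] = 1;          /* raised later, serviced first */
    dispatch();
    return 0;
}
```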
Another area demanding hardware ingenuity was the manner of accommodating the continually expanding number of data points and set points. As accelerators grew, so did the number of gauges, sensors, contact closures, scalers, meters, and whatever else one uses for monitoring and controlling this large, complicated scientific tool. It did not take long to calculate that having a separate path for each and every point would create a tunnel or enclosure completely filled with wire cables and no room for anything else. Fortunately, our electronics engineering groups came forth with time-shared multiplexed transmission hardware, so a major reduction in cable runs could be achieved. The furor over today's networking systems plainly indicates that such activity is continuing.

Widespread acceptance of computer control would come only after the development of good output media that could present the massive amounts of monitored data gathered from all over the field by the burgeoning computer. Our first small 8-, 12-, or 16-bit minicomputer was, almost without fail, equipped with the noisiest ASR-33 teletype made, and this was expected to function as the primary input/output device. The low price was certainly a bargain, but the output speed meant one had to be highly selective about what and how much to record. Relentless patience was required whenever a runaway situation developed that only a complete reload of the system could cure. If any honor is due these clumsy teletypes, it is the fact that they forced us to bring in attractive CRT displays as early as possible. In many cases the first systems had paper tape input/output, and for a long time these worked out extremely well. The use of magnetic tape was popular, yet analysis of information always meant transporting the tape to a large data processing center. Thus an everlasting search continued for more on-line storage space by installing the best random access storage devices known. Drums and disks soon became definite requirements to supplement those small minicomputer memories, and today's minicomputers are rarely without one.

These are some of the hurdles the pioneers and skeptics knew must be jumped in order to make on-line computer control workable. Little did those pioneers realize that what seemed like extraordinary difficulties did not pose any real hardship at all toward going on-line; instead, the amazing hardware developments that came along staggered our imaginations. In the final analysis it was always the software, floundering in a constant morass of evolution, that turned out to be a greater bottleneck than we software professionals would care to admit.
3. Early applications

The use of digital computers for accelerators started around 1960, principally for on-line acquisition and analysis of experimental data, but the amount of direct control was limited. Once experience was gained with data collection in experimental areas, the rush toward full computer control took off in earnest around the mid-sixties.
Previously, I had computer experience associated with two Van de Graaff systems at Argonne National Laboratory. In one instance, dual CDC-160A computers, sharing a common 8K memory bank, controlled the major parameters of a 3.5 MeV Van de Graaff accelerator concerned primarily with neutron experiments [3]. The other, a Van de Graaff of the tandem type, was used for low-energy nuclear physics experiments done in the Physics Division [4]. That system used a small ASI 21-bit computer built by a struggling Minnesota computer company. It was connected primarily to gather data from pulse-height scalers and subsequently to display it on a large CRT screen equipped with a light pen. In retrospect, fewer than five parameters were settable by the computer, and the only other direct on-line control was that of moving a beam shutter either into or out of the beam path. Apart from control, this small computer had an interesting high-speed link to the Laboratory's Computer Center CDC-3600, where fast data reduction service could be obtained while the actual experiment was in progress. After 15 min intervals of data collection, the on-line ASI computer could interrupt the large computer across the link, stop all tasks there, and take over processing at high priority for no more than one minute. Results of the calculations were immediately sent back to the on-line ASI computer, which could reset parameters and guide the running experiment into new phases of study. The use of that system lasted only long enough to demonstrate its feasibility, because the software system was persistently under development. The lesson learned, I suggest, was that linking to a large distant computer for support is a clever idea, but one to be avoided if it is possible to do everything you desire on-line. Judging by the direction computer control systems moved in the seventies, ensuing developers found this to be self-evident.

Two accelerator projects of the mid-sixties era which were to have a major impact on the future course of control were the ZGS at Argonne and the two-mile SLAC machine associated with Stanford University. Both enjoyed the position of being under development at that time and, therefore, the computer control systems could be designed and implemented along with the accelerator construction. The ZGS group at Argonne wanted to go all-out with central control by using a CDC-924A computer, while those on the West Coast took a more conservative approach by concentrating first on the beam switchyard area with an SDS-925 and later a PDP-9 in the central control. Efforts such as these that go in parallel with accelerator construction offer more freedom in trying out novel techniques and introducing new equipment which otherwise might be difficult. The LAMPF facility at Los Alamos is a fine example of the benefits of this kind of experimentation. A digital computer control system was proposed, in 1965, as essential for that project [5].
Subsequently, an SEL-810A was selected for central control and later expanded to include Data General Novas and the SuperNova.

In that period, our country's two largest accelerators, the Bevatron at Lawrence Berkeley Laboratory and the AGS synchrotron at Brookhaven National Laboratory on Long Island, were already operating before serious computer control became a reality, and their systems were introduced after the machines were in operation. The Lawrence Berkeley group was involved in designing a 200 GeV accelerator, a machine that eventually would be constructed, and enlarged, at Fermilab. As part of that design study, a prototype system was built onto the Bevatron around a Digital Equipment Corporation PDP-5 to investigate on-line computer control. At the same time it provided the group with a vehicle to get started in the digital control of the Bevatron and, naturally, this fostered more control, with other, newer computers being added. The AGS group at Brookhaven experimented for a short time with a home-built computer they dubbed Merlin and then upgraded to a PDP-6. Eventually they would go to PDP-8s and a hookup to a larger PDP-10 as a source of more computational power. There was also a link to the Laboratory's large 6600 data processor. As noted earlier, such links are better avoided unless one has special needs beyond control.

Besides the five installations just mentioned, a number of others were now joining the bandwagon to try digital computers. Some in this group were the Princeton-Pennsylvania project adopting PDP computers and the Cornell synchrotron using an IBM-1800, while in Europe the CERN laboratory also attached an IBM-1800 to the CPS. It should be kept in mind that these were essentially their initial systems. Time and space do not permit me to present all the constant changes these laboratories and others have made to their current systems, which now may bear no resemblance to their original form. A good approach in one's formative years of computer use is to proceed a bit tentatively. Many followed that advice and introduced on-line computers in progressive stages. To this day I rarely find anyone who admits his establishment has reached its final system.
4. Fermilab history

The founders of the Fermilab control system began a close study, in 1969, of these earlier accelerator control systems. We took note of the fact that the Berkeley Bevatron system was evolving into several PDP-8 minicomputers, each dedicated to a particular subsystem. Elsewhere, at Argonne, steps were being taken to implement Sigma-2 computer control at the ZGS experimental area [6], thus providing another distinct control
system separate from their central overall system. Experimental beam lines generally form a logical division in most accelerator complexes, and one that was certainly to hold true in our development as well. In contrast, the LAMPF control system began with a single central processor to which all subsystems responded. Such a system appeared to have greater flexibility and allowed a single operator to coordinate all component aspects from a single point. In time, even that system would diversify into a network of computers and follow the pattern that was emerging. SLAC, with an XDS-925 in place and a PDP-9 under way in the central control, was making plans to go further along the multiprocessor route.

At Fermilab, then, the prospect of merging both the centralized control idea and the dedicated subsystems looked intriguing, and we proceeded to build what is often referred to as a distributed control system. Elaborate plans for a central control room had been made, so we were committed to centralized operator control, but of a kind that allowed for expansion in many other places. Every data point known to a subsystem, with few exceptions, was required to be transmitted back to the central computers. Because commercial hardware to connect differing computer systems was unavailable, we were forced into producing our own intercomputer links [7]. At the subsystem level, MAC-16 minicomputers manufactured by Lockheed Electronics were used and slaved to Sigma-2 hosts via the in-house network. The design plans also called for all three of the central Sigmas to communicate with each other, thereby providing access to any point in the complex by the central operator. Such provisions look good on paper, but in practice we never accomplished the task of linking these three central console computers. Consequently, one console and computer serviced the Linac injector, two consoles and the second computer serviced the Booster injection stage, and three consoles and the third computer provided all the services for the main accelerator and switchyard area. In outward appearance all console systems looked nearly alike, but internally each had a slightly different software operating system that was written in-house. That had the drawback of requiring separate software specialists to be knowledgeable in each system.

The system had hardly been given a chance before it was declared inadequate, and a sizeable upgrade to three Xerox-530s took place. That system would then continue to be the control system for nearly a decade, with constant improvements being made in the software methods and application programs. Such features as fast-time plotting, acceptable alarm reporting methods (including voice), and data logging were important additions. Into our console hardware went such innovations as colored CRT monitors and the trackball for quick cursor movement to any part of the screen.
Today such items are standard. In addition, we had the traditional knobs and switches to preserve the flavor of things operators remembered from the manual control panels.

By the fall of 1979, installation of our next accelerator, the Tevatron, was moving along rapidly and the controls hookup was proceeding in tandem. It was well understood that expanding the existing Xerox-530 computers and connecting more MACs was unthinkable, since that system had reached saturation years before. Therefore, a plan was adopted whereby an entirely new system would be brought in, not only to operate the new Tevatron, but systematically to include all the former systems as well. It would be a monumental conversion task, but one we had decided upon and were prepared to undertake. Preliminary conversion experience had already been gained as an entirely new distributed Linac computer system was being assembled, and mostly working, using at least 12 MC68000 microprocessors networked in an SDLC transmission loop. Naturally that same fever mushroomed through every other controls project, as literally hundreds of microprocessors appeared over the whole complex in an exceedingly short time.

Uppermost in our thinking about this next control system was to network all phases so that a single operator in the central control room had command over anything and everything. Two VAX 11/780 computers were acquired to serve as central hosts that would communicate initially with four PDP-11/34A computers at the subsystem level. The subsystems were planned to act as concentrators: one serving the myriad microprocessors in the Tevatron, a second interfaced to the collection of MACs on the former 400 GeV main ring, a third to the new 68000 microprocessor-based Linac system, and a fourth to the 10 GeV Booster stage. Ample expansion possibilities were included; two additional subsystems have since been added, and plans are under way to install more. That some 11/34s have already been replaced with 11/44s and, more recently, with 11/84s is living proof in our industry that budget constraints never allow you to put in a system according to your needs. At the operational control console level an equal amount of expandability has been provided. A major departure in the manner of serving consoles required that each console be supported individually by its own PDP-11 computer. We were determined this time around to ensure, at least in principle, that the entire complex could be controlled from exactly one console if needed. While the initial acquisition stipulated 8 console control computers, that number has grown to 17 over a three-year period, and where it will end borders on speculation.
5. Software design considerations

Software development over the years has experienced the same growth as hardware, though its accomplishments may not be as visible. Initially, minicomputer memories were small, so a single program was about all that worked well. Today, the large number of computers employed in controlling a single accelerator requires an elaborate operating system in order to make sense out of the multitude of tasks running concurrently. It is not out of the ordinary to have more than 25% of the memory space taken up by an executive.

When building a software system from the start, it is convenient to try out several approaches or test different operating systems. Such experimentation was certainly the case at Fermilab. In the early days, two different software control systems were being produced side by side and, before long, a major decision would become necessary to choose the one showing the most promise. These facts are little known because most reports, especially at conferences, tell only glowing success stories. Both systems we were developing had merit, but only one was needed. Most systems fit into one of the two types we were pursuing. One type I choose to call an interactive conversational system, and the other a menu-driven system. The latter presents the operator with a directory (i.e. an index) of all functional programs. A selection is made, and the selected application program in turn operates under directions given it via the console's CRT display in cookbook fashion. The former requires a command language with the ability to group commands into procedure files and execute them. It is easier to learn but slower in execution because of the abundance of query and response messages inherently required. Less sophisticated users can work effectively with command languages, while menu-driven systems tend to demand more skilled operators. Without too much debate we chose the menu-driven version for the main control center because of its fast response time, and migrated a form of our unwanted system to the experimental beam lines.
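As an illustration of the menu-driven style we settled on, the sketch below shows the essential mechanism: the console presents a directory of application programs and simply dispatches to the one selected. The page names and table are hypothetical, not our actual program index; a conversational system would instead parse typed commands and procedure files, with all the query-and-response traffic that entails.

```c
#include <stdio.h>

/* A menu-driven console in miniature: the operator is shown a
 * directory of functional programs and picks one by number.
 * The entries are invented stand-ins. */
static void vacuum_page(void) { puts("(vacuum status page)"); }
static void rf_page(void)     { puts("(RF parameter page)"); }
static void plot_page(void)   { puts("(fast-time plot page)"); }

struct menu_entry { const char *title; void (*run)(void); };

static const struct menu_entry menu[] = {
    { "Vacuum status",   vacuum_page },
    { "RF parameters",   rf_page     },
    { "Fast-time plots", plot_page   },
};
enum { N_MENU = sizeof menu / sizeof menu[0] };

int main(void)
{
    int choice;
    for (int i = 0; i < N_MENU; i++)
        printf("%2d  %s\n", i, menu[i].title);
    printf("select> ");
    if (scanf("%d", &choice) == 1 && choice >= 0 && choice < N_MENU)
        menu[choice].run();   /* selected program takes over the CRT */
    return 0;
}
```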
Many considerations go into designing a software system for accelerator computer control. In reflection, I believe we at Fermilab covered our share, if measured by the number of committee discussions which took place. One reason we had for looking at the conversational command-language approach was to investigate the possibility of slow degradation, i.e. still being able to run the accelerator when parts of the system were down. It was a wonderful concept, but one that did not materialize. When some portion of the central computer system breaks, you go and fix it quickly rather than sending a host of operators to each of the subsystems. What saved us in this regard was the availability of "hot" standbys at both the central and subsystem level computers.
Another software deliberation we had was how much data to collect and how often each subsystem should routinely refresh it. Equally important was the question of whether to ship all the data to a central datapool or to send it selectively as demand arises. Because most users insisted that an immediate response about any point in the accelerator complex was essential, a massive core-resident datapool was refreshed on every beam cycle, and in some cases even more often (a minimal sketch of the idea appears at the end of this section). That is a tremendous amount of data which is never looked at, but it was there post-haste when anyone wanted it.

The question of coupling a computer intimately with the accelerator operation through closed loops is often brought up. Our Linac complex and the main accelerator RF system have functions which lend themselves nicely to closed-loop control, but others programmed in the switchyard area were abandoned after a time. We learned a rule of thumb for closed loops: they should not be used as a way of circumventing things which have occasional big drifts, or components that are not working well. With large complex systems there are always many things not working well or else not clearly understood. Thus far, software closed loops have not been a favorite in our evolving systems. The compact single-board microcomputer control module, now available, could soon change all that.

If there was a software area in our design of the X530-MAC computer system that was given insufficient attention, it was alarm reporting. Not because there was not any, but rather that it was so late in coming. Engineering people can quickly install hardwired alarms that work well at the start, but after a while they fall short of what is needed. Then trying to put a belated computer alarm display system into a well-running control system does not win one many converts.
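Returning to the datapool described above, the fragment below is a minimal sketch of the idea; the device count, record layout, and function names are assumptions for illustration, not our actual tables.

```c
#include <stdio.h>
#include <time.h>

/* A core-resident datapool in miniature: one slot per monitored
 * device, refreshed wholesale every beam cycle so that any console
 * request is answered from memory, never by a field transaction.
 * The size and field names are hypothetical. */
#define N_DEVICES 4096

struct reading { double value; time_t stamp; };

static struct reading datapool[N_DEVICES];

/* Stand-in for the field hardware; a real system would read
 * multiplexed devices over the serial links. */
static double acquire(int dev) { return (double)dev * 0.001; }

/* Called once per beam cycle: refresh every slot whether or not
 * anyone is looking -- the data is simply there on demand. */
static void refresh_datapool(void)
{
    time_t now = time(NULL);
    for (int dev = 0; dev < N_DEVICES; dev++) {
        datapool[dev].value = acquire(dev);
        datapool[dev].stamp = now;
    }
}

int main(void)
{
    refresh_datapool();
    printf("device 100 reads %.3f\n", datapool[100].value);
    return 0;
}
```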
6. Current systems

The extraordinary growth exhibited in present-day control systems (i.e. since the advent of the microprocessor) makes it difficult to pinpoint identifiable trends in these modern systems. I would like to offer two characteristics which appear to be standard. First, such a system is a network comprising many more computers than anticipated; secondly, the software operating system is a commercially well-advertised multitasking one that is guaranteed to surprise you with the throughput it achieves. A third characteristic might be that we tend to congregate around the same computer vendor: I find very few installations without a VAX or a PDP-11 computer.

The move to multitasking operating systems was, in my opinion, a desirable one. It forced a number of diehards, like myself, away from developing such systems in-house and toward putting rightful dependence upon
commercial vendors to supply them. The same is now true of networking systems, which are no longer economically feasible for a research laboratory to produce on its own. High-level programming languages like FORTRAN and PASCAL were slow coming into use in the early stages but are now present everywhere in console applications. Subsystems, too, are tending to use these high-level languages, but there still exists a reluctance to give up traditional assembly language. The SPS at CERN had an interpretative language incorporated into its control system, the idea being that operations personnel and system support groups could develop their own control programs in an easy and straightforward way. Experience showed they achieved considerable success in that regard, yet it proved desirable to augment the operating system as the multicomputer network expanded [8]. Since today's control systems are judged primarily on their speed of response rather than ease of use, I do not see interpretative languages being in demand on forthcoming systems unless they can produce the kind of response times provided by other high-level compiled languages.

Much progress is seen in present-day console workstations, which generally include one or more color video terminals. These devices have demonstrated their effectiveness in giving operators an extensive overview of the accelerator status through color-coded displays. By using different colors for readback, set point, and limit values, one can see at a glance much more about the nature of a device than with a mere number (a sketch of such a coding rule follows below). Standard on most terminals are blinking modes and a means to produce foreground and background video. The facilities provided by terminals have reached the point where the only limit is a human's ability to concoct the formats that will please the majority of operators. The present workstation at Fermilab is outfitted with a primary color terminal and keyboard for interactive alphanumeric control, with a second independent color unit reserved for graphics. To those organizations who can afford a second color graphics unit, I recommend making such a move, as it allows the display of many more appealing system-flow diagrams than a shared unit does. Our additional color display unit at the consoles has allowed programmers to use the device in every application program and in whatever form seems appropriate. Even in traditional curve plotting it is preferred to the Tektronix storage scope unit, simply because one can put more curves on the same page, contrasting them in different colors. Years ago, one argument against color graphics was the lack of a fast, reliable hard copy device. Such units are easily obtainable in today's marketplace and can be cost effective if many color display units share the same copy equipment.
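The color-coding rule of thumb mentioned above amounts to something like the following; the 2% tolerance band and the particular colors are invented choices, not our console's actual convention.

```c
/* Pick a display color for a device from its readback, set point,
 * and hard limits -- the at-a-glance coding described in the text. */
enum color { GREEN, YELLOW, RED };

enum color readback_color(double readback, double setpoint,
                          double lo_limit, double hi_limit)
{
    if (readback < lo_limit || readback > hi_limit)
        return RED;                      /* outside hard limits */
    double band = 0.02 * setpoint;       /* 2% tolerance, arbitrary */
    if (band < 0.0) band = -band;
    if (readback < setpoint - band || readback > setpoint + band)
        return YELLOW;                   /* drifting off its set point */
    return GREEN;                        /* tracking its set point */
}
```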
Touch-sensitive screen panels are commonplace in accelerator console control systems and form a substitute for the thumb-wheels, flip switches, and pushbutton switches of the past. Since they first appeared, and even now, their acceptance has met with mixed reactions. As early as the 1971 Particle Accelerator Conference, SLAC representatives [9] reported with much excitement their plans for introducing these devices. The glass plate was divided into a 10 by 13 matrix, giving access to 130 possible functions on a single display. In addition to acting as switches, the touch panel was praised for providing convenient tree structures that could allow an operator to zero in quickly on any particular accelerator component without resorting to lengthy keyboard instructions. When I heard the proponents elaborate further on their grandiose visions for the touch screens, I became convinced we were slowly reducing our operating crew into the world of magic. Not until installation of our latest control system at Fermilab were consoles first equipped with touch screen displays. From what I have observed thus far, they accomplish their function as switches in a reasonable way, but I would not exactly boast of any other marvelous uses.
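The SLAC arrangement is easy to picture: a touch is quantized to one of the 10 by 13 cells, and the cell indexes whatever function table the current display has installed, which is also how the tree structures work. The screen dimensions below are assumed for illustration, not SLAC's actual geometry.

```c
/* Map a touch on the glass plate to one of 10 x 13 = 130 cells;
 * the current display binds each cell to a function, and selecting
 * one may repaint the screen and rebind the cells (a tree). */
#define COLS 10
#define ROWS 13
#define SCREEN_W 512        /* assumed plate resolution */
#define SCREEN_H 390

int touch_to_cell(int x, int y)
{
    int col = x * COLS / SCREEN_W;
    int row = y * ROWS / SCREEN_H;
    if (col >= COLS) col = COLS - 1;     /* clamp the plate edges */
    if (row >= ROWS) row = ROWS - 1;
    return row * COLS + col;             /* cell index 0..129 */
}
```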
7. Future systems

The evolution in computer control systems has been so dramatic that it is hard to see where it is going in the future. Each year's progress makes it impossible to ascertain where one generation ended and another started. Whenever I ask someone more knowledgeable for clarification, I am left dangling anywhere from the fourth to the seventh generation, depending on whether the subject is network protocols, programming languages, or computer architecture. The profound computer hardware growth has just about erased the distinction between the minicomputer and the microcomputer. What once was considered an ideal job for a minicomputer can be accomplished as well, and sometimes better, by a microcomputer. Rather than clarifying the situation, it is likely that more confusion will arise from the labels "supermini" and "supermicro".

On a more serious note, I do see some definite areas in our business that hold promise for the future, even though they have received attention in the past. The schemes for good alarm reporting, for instance, have not yet crystallized into a form operators can call standard. There exists a certain skepticism that alarm systems are unreliable and generally not to be trusted. Nothing discourages an operator more than a clutter of unimportant alarm states or unclear messages spread over a display screen. Future alarm systems, I predict, will experiment with new and clever ways of reporting and, out of necessity, help bring some credibility back to alarm displays.
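One ingredient such future alarm systems will surely need is suppression of the clutter just described. A minimal sketch, with invented severity levels and messages, might filter a display this way:

```c
#include <stdio.h>

/* Hypothetical alarm severities; an operator-chosen threshold keeps
 * unimportant states off the screen. */
enum severity { INFO, MINOR, MAJOR, CRITICAL };

struct alarm { enum severity sev; const char *msg; };

/* Show only alarms at or above the chosen severity. */
static void report_alarms(const struct alarm *a, int n, enum severity threshold)
{
    for (int i = 0; i < n; i++)
        if (a[i].sev >= threshold)
            printf("** %s\n", a[i].msg);
}

int main(void)
{
    const struct alarm queue[] = {
        { INFO,     "Booster vacuum gauge 12 noisy" },
        { CRITICAL, "Main ring magnet power supply tripped" },
        { MINOR,    "Linac tank 3 temperature high" },
    };
    report_alarms(queue, 3, MAJOR);      /* only the trip is shown */
    return 0;
}
```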
Datalogging, like alarms, leaves a lot to be desired. Much effort goes into the recording and saving of data, but the deluge of information made possible by the multitude of computers can lead to a chaotic situation. The software to log massive numbers of accelerator parameters on some central disk store is simple to write, since the network access mechanism already there for the operational database is so convenient. The intentions may be noble, but the productive usage of logged data is absurdly small in comparison to the amount recorded. Unfortunately, logging systems must provide enough flexibility to do extensive data collection, for fear of losing valuable information, and for that reason they naively pack away much more than required. Programs intended to compress and analyze the mountain of data are forced into doing a cursory examination at best. I believe that in the future we will take the distributed approach to datalogging. One reason for this is the popular personal computer, which is coming in more and more as an adjunct to the central system. Rather than shipping all information laboriously through the network to some central location, logged data will be retained at the lowest subsystem level. From there a network of personal computers can access the data parasitically and do the analysis for the user off-line.

Some changes will occur in the layout of a control console. Basically, the advances that can be made technologically to the multi-color TV units are reaching a peak. However, the display formats still show signs of evolving into friendlier types. Future display views will incorporate icons and windows; both are much in vogue with the majority of PC vendors, and we will not be able to escape them.
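At a subsystem node, the distributed approach argued for above might look like the fragment below: a fixed ring buffer keeps the recent history locally, and a personal computer copies it out parasitically for off-line analysis. The buffer depth and record layout are assumptions for illustration.

```c
#include <time.h>

/* A subsystem-resident log: a ring buffer of recent samples kept at
 * the lowest level instead of being shipped to a central disk. */
#define LOG_DEPTH 8192

struct sample { time_t stamp; int device; double value; };

static struct sample ring[LOG_DEPTH];
static int head;                         /* next slot to overwrite */

void log_sample(int device, double value)
{
    ring[head].stamp  = time(NULL);
    ring[head].device = device;
    ring[head].value  = value;
    head = (head + 1) % LOG_DEPTH;       /* oldest data falls away */
}

/* Parasitic readout: a PC copies the most recent n samples without
 * disturbing the control tasks; analysis happens off-line. */
int copy_log(struct sample *dst, int n)
{
    if (n > LOG_DEPTH) n = LOG_DEPTH;
    for (int i = 0; i < n; i++)
        dst[i] = ring[(head + LOG_DEPTH - n + i) % LOG_DEPTH];
    return n;
}
```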
Finally, the dream is ever present to reduce an operational control console to its very minimum. At most there might be two knobs, half a dozen buttons, an accelerator pedal, some steering mechanism, and several tiny color video displays. The ideal would be to have most of the control reside in push-buttons with such functions as start, stop, abort, pause, and resume. From there an operator might switch to autopilot, where the complicated work is automatically carried out behind the scenes in as many 32-bit microprocessors as it requires. Such ridiculously simplified control is not possible in the foreseeable future, but it would be exciting to try. In order for you to appreciate my concern for simplification, I am proposing a single control console for the future, as illustrated by the peculiar diagram. After all, who can tell how far away we might be from a one-rotary-selector-switch accelerator?

References
[1] R.W. Allison et al., Digital control of the Bevatron-Injector trajectory, Proc. 1966 Linear Accelerator Conf. (October 1966) p. 483.
[2] J.L.W. Churchill, J. Sci. Instr. 42 (August 1965) 551.
[3] J.F. Whalen, R. Roge and A.B. Smith, Nucl. Instr. and Meth. 39 (1966) 185.
[4] D.S. Gemmell, Nucl. Instr. and Meth. 46 (1967) 1.
[5] T.M. Putnam, R.A. Jameson and T.M. Schultheis, IEEE Trans. Nucl. Sci. NS-12 (June 1965) 21.
[6] E.W. Hoffman et al., IEEE Trans. Nucl. Sci. NS-16 (June 1969) 871.
[7] S.R. Smith et al., IEEE Trans. Nucl. Sci. NS-20 (June 1973) 536.
[8] F. Beck, M.C. Crowley-Milling and G. Shering, IEEE Trans. Nucl. Sci. NS-24 (June 1977) 1674.
[9] D. Fryberger and R. Johnson, IEEE Trans. Nucl. Sci. NS-18, no. 3 (June 1971) 414.