Control Engineering Practice 6 (1998) 239—245
Invited Paper
Ethics on the feedback loop

Roland Schinzinger*
Electrical and Computer Engineering Department, University of California, Irvine, USA

Received August 1996; in revised form December 1997
Abstract

The design, manufacture, and supervision of complex systems usually proceed as an iterative activity involving multiple feedback loops. During each iteration, as physical parameters are re-examined for possible changes, the system's societal impacts should receive as much attention as does the system's purely physical performance. © 1998 Elsevier Science Ltd. All rights reserved.

Keywords: Design and feedback; software safety; experiment; responsibility; engineering ethics
1. Introduction

Designs of systems and devices are usually carried out in an iterative manner. Each iteration is an attempt to improve the performance of the product by modifying physical parameters, while at the same time forcing the satisfaction of imposed constraints. Usually, one cannot progress in an uninterrupted manner straight through the many stages involved in designing and manufacturing a product or system. The design phase includes conceptual design, definition of detailed goals and specifications, prototyping and preliminary testing, followed by detailed design and preparation of shop drawings. Manufacturing involves scheduling the manufacture of parts, purchasing materials and components, fabricating parts and subassemblies, and finally assembling and performance-testing the product. Selling is next (unless the product is delivered under contract), and thereafter either the manufacturer's or the customer's engineers perform maintenance, repair and geriatric service, and ultimately recycling or disposal. Each stage presents engineers with a number of options, resulting in many possible ways of proceeding. So it is natural that engineers usually make an initial stab, stop along the way when they hit a snag or think of better solutions, and return to some earlier stage with improvements in mind. This retracing constitutes a feedback operation. Such a feedback path does not necessarily
*Corresponding author. E-mail:
[email protected]
start and end at the same respective stages during subsequent passes through the design and production processes, because the retracing is governed by the latest findings from current results, tempered by the outcome of earlier iterations and experience with similar product designs. All too often the engineer considers only the physical aspects of the product. The impacts of its use on the owner, user, community, and the natural environment are conveniently assumed to be covered by standards (often outdated in rapidly developing technologies), design specifications, or by "other specialists" somewhere in the organization. No wonder important side-effects of products are often not considered until either it is too late or the necessary changes become prohibitively expensive. Fortunately, the opportunity for improving the product's social and environmental impacts can occur during the iterations in the design and production-planning processes. The integration of physical and societal considerations in the iterative process allows engineers to realize their professional obligations more fully. One can label this approach Ethics on the Feedback Loop. The idea of treating engineering ethics problems as if they were design problems was advanced by Caroline Whitbeck (1990) as a way to interest engineering students in ethical decision-making. Michael Rabins (Harris et al., 1995) emphasizes the role of feedback in this connection. It may be appreciated then that engineers, educated and used to applying design iterations, can readily apply this approach to address in a holistic manner not only concerns of purely
physical performance, but also concerns of an ethical nature. The process must, however, take into account that the properties of the materials as actually delivered, the shop procedures as actually carried out, and even the user's application of the final product, may not exactly coincide with what the designer had specified or expected. The need to examine such uncertainties and their effects on a product's actual performance is taken up next. It leads to the view of engineering as an experiment with human subjects (Martin and Schinzinger, 1996, Ch. 3).
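As a purely conceptual sketch of the loop described above, the following Python skeleton treats societal-impact findings and physical-performance findings as inputs to the same iteration. Every name in it (iterate_design, physical_issues, societal_issues, revise) is a hypothetical placeholder rather than part of any real design tool.

```python
# Conceptual sketch only: a design-iteration loop in which every feedback pass
# reviews societal impacts alongside physical performance.  All names here are
# hypothetical placeholders, not part of any real design system.

from typing import Callable, List

Review = Callable[[dict], List[str]]    # a review returns a list of open issues

def iterate_design(design: dict,
                   physical_issues: Review,
                   societal_issues: Review,
                   revise: Callable[[dict, List[str]], dict],
                   max_passes: int = 10) -> dict:
    """Run design iterations until both kinds of review come back clean."""
    for _ in range(max_passes):
        # Ethics on the feedback loop: both sets of findings feed the same loop.
        issues = physical_issues(design) + societal_issues(design)
        if not issues:
            return design                 # accepted for this stage
        design = revise(design, issues)   # return to an earlier stage and rework
    raise RuntimeError("no acceptable design within the allowed iterations")
```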
2. Uncertainties and experimentation

An engineered product (device or system) results from the interaction of human activity, tools of manufacturing, and materials. Each of these is beset by uncertainties. The designer can never know all the pertinent physical laws and properties of materials, even in their ideal states. It is also difficult to foresee all the possible uses to which a customer may subject a product. Finally, the materials and parts used, as well as the manufacturing process, may have hidden defects. Testing may not detect them, just as testing of the final product does not tell how well it will withstand unforeseen stresses, particularly when a test does not anticipate a peculiar defect, or when it does not carry the product to destruction. Moving on to its use, the limitations of the product may not be clear to the original or later owners. Similar gaps may exist in the knowledge of the end-user regarding suitable means of disposal. These are but a few of the many occurrences which can make a product less than useful, if not outright dangerous. The significance of these uncertainties is enhanced when one regards engineering as an experimental activity, even though engineering usually lacks the involvement of control groups in the usual sense. In a restricted sense, however, one may even regard the design process itself as an experiment. Each iteration in the perfection of a design, whether one starts over again from an earlier design on paper, or after a simulation run or a prototype test, is an experiment which uses the preceding design as a "control" design against which one calibrates improvements in efficiency, cost, satisfaction of constraints, and achievement of goals. The physical realization of a design through the manufacture and assembly of parts is likewise in many ways an experiment, as is the selling and commissioning of the final product. Here again, one must demand the close attention of the experimenters (the engineers). They ought to monitor the experiment (the product) throughout its life and terminate the experiment (for instance, by recalling the product) when safety can no longer be assured. As with any experiment involving human sub-
jects, all those who could possibly be affected by the experiment should be contacted and afforded the opportunity to give or withhold their informed consent as participating subjects (Martin and Schinzinger, 1996, p. 84). The parties affected might be workers and testers on the shop floor, shareholders of producer and buyer, operators (say, pilots) and indirect users (airline passengers), or mere bystanders (those living below a flight path). And even after the product has ended its useful life, the health of people living next to recyclers, landfills, and incinerators must be considered. In today’s complex technology, few engineers could be expected to continually keep track of a product’s many actual and possible uses and all the individuals affected, but the engineering-as-experiment paradigm serves as a reminder of what the engineer needs to keep in mind as the product design undergoes yet another iteration following preliminary design reviews and tests. The task of predicting all the ways in which a product may actually be misused by its owner is daunting, and no engineer can be expected to be responsible for all unforeseen applications. Section 8 will take up some special circumstances involving control engineering. The emphasis here is on at least taking the time to imagine possible system failures or misuses, and to provide generic safety and escape measures. What has been said so far will now be illustrated by means of a case study.
3. Case study: A medical electron accelerator

In the 1980s a series of tragic accidents resulted from the use of a new radiation-therapy machine, the Therac-25 medical electron accelerator. (See Jacky, 1989; Leveson and Turner, 1993; Rose, 1994; Peterson, 1995.) The Therac-25 is a dual-mode linear accelerator for therapeutic use. In mode "X" its electron beam is directed at a target at full output level (25 MeV) to produce X-rays for the treatment of deep-seated tumors. Shallow tissue treatments are carried out in the electron mode "E", where the beam is shaped and attenuated to variable energy levels of 5 to 25 MeV. In a third mode, an ordinary light beam and a mirror are used to simulate the treatment beam so the patient can be positioned properly. A turntable is employed to produce the desired treatment modes as follows: for mode X the turntable moves into the path of the electron beam a tungsten target, a cone to produce a uniform treatment field, and an ion chamber to measure the delivered X-ray dose. For mode E, it replaces the above by scanning magnets to shape the beam, and another ion chamber to measure the delivered dose of electrons. The position of the turntable is sensed by three microswitches and reported to the machine's computer. By 1987, Atomic Energy of Canada Ltd. (AECL), the manufacturer of the Therac-25, had sold and installed six
units in Canada and five in the US. Some of the model 25s had apparently been functioning normally since 1983, with hundreds of patients treated, when the first known malfunction occurred in June 1985, resulting in a lawsuit filed five months later. By January 1987, six patients had suffered massive overdoses of radiation during treatment with Therac-25s. Three of these patients died as a consequence of overexposure. Others lingered in pain, and one underwent an otherwise avoidable hip replacement before dying of cancer. The first incident occurred during radiation treatment following a lumpectomy to remove a malignant breast tumor. During the twelfth session, the patient felt intense heat. A dime-sized spot on her breast, along with a somewhat larger exit mark on her back, indicated penetration by an unmodified, strong electron beam. Tim Still, the radiation physicist at the hospital, queried AECL, only to be told that the Therac-25 could not possibly have produced the specified E-mode beam without attenuating and spreading it as required. The oncologist then prescribed continued treatment on the same Therac-25. When the cause of the initial burns was clearly identified as radiation burns due to one or more doses in the 15,000 to 20,000 rad range instead of the usual 200 rad, the patient initiated a lawsuit against the hospital and AECL. Eventually the patient had to have her breast removed, lost control of her shoulder and arm, and was in constant pain. Later she died in an automobile accident. This type of machine malfunction was to occur again, but AECL was unable to replicate the events and delayed warning other Therac-25 users of the machine’s apparently erratic behavior. According to Rose (1994), Dr. Still had discussed his early observations with colleagues in the profession, with the result that AECL had warned him not to spread unproven and potentially libelous information.
4. Software errors

AECL was still a crown corporation of the Canadian government when it collaborated with the French company CGR in the design and manufacture of the earlier 6-MeV linear accelerator Therac-6 and the 20-MeV model Therac-20. Both were based on older CGR machines. Later, AECL developed the Therac-25 on its own. The earlier CGR machines relied on manual setups and hardware interlocks. Some computer control was added to these Theracs to make them more user-friendly, but the hardware was still capable of standing alone. The Therac-25, however, was designed with computer control in mind from the ground up. As Leveson and Turner (1993) write, "AECL took advantage of the computer's ability to control and monitor the hardware and decided not to duplicate all the existing hardware safety mechanisms and interlocks. This approach is becoming more common as companies decide that hardware interlocks and backups are not worth the expense, or they put more faith (perhaps misplaced) on software than on hardware reliability."

As it turned out, the malfunctions of the Therac-25 occurred because of one or more software errors. One arose from race conditions that can accompany the rapid entry and editing of instructions in multitasking operation. When set-up data were entered at the computer terminal, the mode (X or E) had to be specified. Since X was the more common treatment mode, operators could be expected to enter "X" out of habit even when an "E" was called for. The operator could easily make a correction by hitting the up-arrow key and replacing the X with an E in a few keystrokes; AECL had introduced this editing feature in response to operators' complaints that starting the data entry over from the beginning after every detected error was too cumbersome. When such a quick X-to-E correction was made within eight seconds of the full prescription having been entered, the X-ray target was withdrawn properly, but the electron beam had already been set by the computer to the maximum energy level demanded by the earlier X-ray command. The patient was thus subjected to an excessively powerful and concentrated electron beam. The timing was critical, the occurrences rare, and the cause detected only with difficulty by the hospital's medical physicist.
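The hazard just described is, in essence, a data race between the operator's editing task and the beam set-up task. The following Python fragment is a deliberately simplified, hypothetical model of that kind of race, not a reconstruction of AECL's software; the class and function names (TreatmentState, bending_magnet_setup, operator_console) and the timing constants are illustrative assumptions only.

```python
# A simplified, hypothetical model of the kind of data race described above.
# The hazard: editing the mode during the set-up window updates the
# turntable/target state, while the beam energy latched from the first entry
# is never revisited.

import threading
import time

class TreatmentState:
    def __init__(self):
        self.mode = None            # "X" or "E", as last edited at the console
        self.beam_energy = None     # MeV actually latched into the hardware
        self.target_in_beam = None  # True when the tungsten target is in place

state = TreatmentState()

def bending_magnet_setup(entered_mode: str, settle_seconds: float = 0.8):
    """Latches beam energy from the mode seen when the prescription was entered.
    (0.8 s stands in for the roughly eight-second window described above.)"""
    energy = 25.0 if entered_mode == "X" else 10.0
    time.sleep(settle_seconds)      # magnet settling time
    state.beam_energy = energy      # stale if the mode was edited meanwhile

def operator_console():
    state.mode = "X"                # habitual, mistaken entry
    setup = threading.Thread(target=bending_magnet_setup, args=(state.mode,))
    setup.start()
    time.sleep(0.2)                 # operator notices and edits within the window
    state.mode = "E"                # the correction updates the mode only
    state.target_in_beam = (state.mode == "X")   # target withdrawn for mode E
    setup.join()

operator_console()
print(state.mode, state.beam_energy, state.target_in_beam)
# prints: E 25.0 False -- electron mode, target withdrawn, full X-ray-level energy
```

A hardware interlock that independently senses both the turntable position and the beam energy would catch this mismatch no matter how the software arrived at it, which is the point taken up in the next section.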
5. Lack of a mechanical interlock

Another malfunction mode based on a software error arose from the manner in which the turntable position was sensed and digitized. The error was inadvertently introduced when AECL attempted to guard against unintended turntable positions following early reports of radiation overdoses (Leveson and Turner, 1993; Rose, 1994). A counter was introduced to determine, and to report to the computer, the changing position of the turntable in transit; this counter reset itself to zero upon reaching 256, apparently a value the programmer considered sufficiently high. A similar reset problem is now said to worry many businesses and agencies whose immense accounting and planning programs unthinkingly instruct computers to reset time to zero as 1999 ends, rather than going on to the year 2000. Such a mistake will not directly affect a human life, but it does point to the difficulty of spotting errors as
the software is being written and installed. Thus it is always important to provide a safety mechanism that is truly independent of the control computer and its program. The mechanical interlocks used on the Therac-20 would be an example.
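To see how a wraparound of this kind can quietly disable a check, consider the following simplified Python illustration. The variable and function names are hypothetical, and the Therac-25's actual code differed in detail; the sketch shows only the general mechanism.

```python
# A simplified illustration of how a one-byte counter that wraps to zero can
# silently disable a software check.  All names are hypothetical; the
# Therac-25's actual code differed in detail.

def increment(counter: int) -> int:
    """Counter kept in a single byte: wraps to 0 after reaching 255."""
    return (counter + 1) % 256

def turntable_check_required(setup_counter: int) -> bool:
    # Intended rule: a non-zero counter means set-up is still in progress,
    # so the turntable position must be re-verified before the beam fires.
    return setup_counter != 0

counter = 0
for _ in range(256):              # set-up loop increments once per pass
    counter = increment(counter)

# After the 256th increment the counter has wrapped back to 0, so the
# re-verification that should still happen is skipped.
print(counter, turntable_check_required(counter))   # prints: 0 False
```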
6. Safe exit

Let us now look at one more report of a radiation overdose from a Therac-25:

"[A] patient's treatment was to be a 22-MeV treatment of 180 rad over a 10 × 17 cm field, or a total of 6,000 rad over a period of 6½ weeks. After-the-fact simulations of the accident revealed possible doses of 16,500 to 25,000 rad in less than 1 second over an area of about 1 × 1 cm. He died from [horrible] complications of the overdose five months after the accident." (Leveson and Turner, 1993)

Other aspects of this particular incident are noteworthy because the machine's operators had to rely on their own best guesses when abnormalities occurred. The natural tendency would often be to shrug off unusual machine behavior as "just another one of its quirks". But the consequences were serious. For instance, soon after hitting the proper key to begin treatment of the patient mentioned above, the machine stopped and the console displayed the messages "Malfunction 54 (dose input 2)" and "Treatment Pause". There was no explanation to be had of what kind of "Malfunction" this was, not even in the manuals, though a later inquiry to the manufacturer indicated that "dose input 2" meant a wrong dose, either too low or too high. The "Pause" message meant a problem of low priority. The dose monitor showed only 6 units of radiation delivered, when 202 had been specified. The operator was used to erratic behavior of the machine, and since on earlier occasions the only consequences had been inconvenience, she continued the treatment. Soon she was horrified to hear the patient pound on the door of the treatment room. He had received what felt like two severe electric shocks, apparently one after each time the start button had been pushed. This was his ninth session, so he knew something was wrong. Why did he have to get off the table and pound on the door? Because even the simplest of emergency cords or buttons or other "safe exits" were lacking (Martin and Schinzinger, 1996, p. 179). Instead, in the case under discussion, audio and video monitors were provided. As could be expected to happen at times, the audio monitor was broken and the video monitor had been unplugged, but the radiation treatment was conducted anyway. More to the point is the general lack throughout the industry of accurate, reliable instruments that tell an operator the actual radiation dose delivered to the
patient (Cheng and Kubo, 1988; Loyd et al., 1989). An assumed dose based on calculations involving the prescription and presumably correct machine settings, or a reading derived from a dosimeter that is not even exposed to the actually delivered radiation, is not sufficient. Direct-reading, well-calibrated dosimeters are a vital element in the feedback of information from the treatment table to the operator. After all, "[The] dose monitoring system is the last level of protection between the patient and the extremely high dose rate which all accelerators are capable of producing" (Galbraith et al., 1990). As reported by Rose (1994), staff at a hospital in Toronto installed a dose-per-pulse monitor that could measure the radiation delivered by a beam and, in a fraction of a second, shut down the machine. In case of serious mishaps, corrective action can be undertaken that much sooner when proper instrumentation is available. Equally important is the presence of experts at potentially life-threatening treatment or work sites. Three Mile Island, Bhopal, and Chernobyl have shown that good measurements and the presence of on-site experts who can evaluate the data are the sine qua non of safety and the avoidance of disaster (Martin and Schinzinger, 1996, pp. 168-170). Beyond that, a "safe exit" directly accessible to the patient (or, in general, the "ultimate subject of the experiment") must be provided.
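As a conceptual sketch of such instrumentation-based feedback, the fragment below outlines a dose-per-pulse watchdog that trips the beam as soon as a single pulse exceeds its limit. The interface names (read_dose_per_pulse, trip_beam_off) and the numerical limit are assumptions for illustration, not the API or settings of any real monitor.

```python
# Conceptual sketch of an independent dose-per-pulse watchdog: read the actually
# delivered dose on every pulse and trip a shutdown on the first bad pulse.
# The hardware interfaces below are hypothetical placeholders.

MAX_DOSE_PER_PULSE_CGY = 0.05     # assumed per-pulse safety limit, for illustration

def read_dose_per_pulse() -> float:
    """Placeholder for a direct-reading dosimeter exposed to the treatment beam."""
    raise NotImplementedError

def trip_beam_off() -> None:
    """Placeholder for a shutdown path independent of the control computer."""
    raise NotImplementedError

def watchdog(pulses_expected: int) -> float:
    """Accumulate the measured dose; shut the beam off on the first bad pulse."""
    delivered = 0.0
    for _ in range(pulses_expected):
        dose = read_dose_per_pulse()
        if dose > MAX_DOSE_PER_PULSE_CGY:
            trip_beam_off()               # act within a fraction of a second
            raise RuntimeError(f"overdose pulse of {dose} cGy; beam tripped")
        delivered += dose
    return delivered                      # feedback to the operator's display
```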
7. The regulatory agency

It is appropriate to introduce here another line of defense, one for early prevention, even though it may appear to be remote from the engineer and the user: the regulatory agency. When the early Therac-25 accidents occurred, it was (in the United States) up to the manufacturer to report malfunctions of radiotherapy equipment to the US Food and Drug Administration (FDA). The user was not obliged to do so. Those users who did call the FDA, if only to learn what experience other cancer centers might have had with the Therac-25, could not find out anything, because AECL had not yet reported. Industry engineers, fearful of giving away trade secrets or of inviting lawsuits, are too often reluctant to share information on malfunctions. Recall procedures are in force in the health-equipment industry, but time lags often hinder effective implementation. This state of affairs should remind us that as responsible engineers we should assist the regulatory process and improve it, instead of labeling it a nuisance and trying to impede it. At the same time, it must be recognized that regulations per se often do not keep up with changes in technology and that adherence to regulations may also lead to merely minimal compliance. What regulations — and laws in
general — can do is to give responsible engineers the support they need to correct or stop projects with clear safety problems that have been left unaddressed.
8. The control engineer

Human ingenuity has created control systems of great variety for centuries. At an earlier time such systems would automatically position windmills to face the wind, cause patterns to be woven into fabrics, or govern the speed of steam engines. Now computer technology makes it possible to control the operations of very large systems such as complete electric generating stations and the networks connecting them, chemical process plants, robotic manufacture, and jumbo airplanes. The growth of complexity places ever greater responsibilities on all engineers for the failure-free operation of their systems. This is no small task, because as systems grow in size they almost invite failures; Charles Perrow (1984) labeled such failures "normal accidents". They can happen when even one small equipment error is propagated throughout the system by the myriad interconnections among its many components, or when one of the myriad instructions of the control computer's software is the wrong one under unforeseen circumstances. Add operator error, and one can appreciate the difficulties of keeping accidents from happening. Very much to their credit, control engineers have a good track record of safety. Nevertheless, it will still be claimed that many of the frightening disasters such as those at Chernobyl or Bhopal could have been avoided by the further introduction of automation and less reliance on fallible human operators. Such arguments detract from the real reasons that these accidents turned into disasters: lack of foresight to prepare for accidents and to allow for safe exits of those exposed to danger in such situations. In Bhopal even the local authorities had not seen fit to institute evacuation plans, despite the high density of population near the plant and the repeated public warnings by a concerned local journalist, Rajkumar Keswani, before the catastrophe (Tempest, 1984). The plea for more automation also overlooks the fact that human beings are still involved in the design, manufacture, and surveillance of such complex systems. What happened with the Therac-25 is not much different from occurrences elsewhere. For example, consider the following typical occurrences in selected areas:

Instrumentation: Faulty, inaccurate, or misleading, causing operators to disregard readings.

Control computer: Software errors, data inflow rate too high, or failure of electronic circuits.
Design of equipment: Unforeseen ambient temperatures or corrosion.

Instead of adding fixes upon fixes to systems so that they supposedly cannot fail, it is better to have on hand capable operators who can handle an emergency, backed up by built-in overrides, and to have safe exits for those who might get hurt. It is the engineer's and the plant manager's ethical obligation to see that such measures are in place. The Therac cases illustrate these points. It could be argued that proper training of operators would suffice, but it is often observed that after good classes for the first crew, later newcomers are given but cursory training, frequently just by members of an earlier crew. And how often are operating manuals really updated and kept in place, even at the urging of the manufacturer? Or, even if they are, how clear are the instructions? Has the damage been done by the time the right place in the manual has been found? What about the decision lag which so frequently besets operators faced with shutting down a system, when doing so unnecessarily may bring blame? Such operational questions do not usually spring to mind during the design and implementation phases, but they ought to, especially when the consequences can be life-threatening. A safe product or system is ultimately one that users and bystanders can safely jettison or escape from. Engineers responsible for safety and reliability should similarly be reminded that even if the theoretical probability of system failure is low, one must still allow for the possibility of a failure that could lead to loss of life, livelihood, and investment. (The engineer's or the engineering firm's reputation is also at risk — a matter of prudent self-interest that encourages adequate insurance coverage but does not replace all personal responsibility.) Failure mitigation introduced during design usually costs much less than later retrofits. In the implementation phase there is also the need to alert local authorities to possible malfunctions and their effects, especially when poisonous chemicals are involved. Fire crews need to know what they will encounter and how to douse fires; the police need to be prepared with evacuation plans; nearby hospitals need to know the treatment protocols.
9. Ethics in the workplace

Engineers want to act professionally, and they know what that means: stay competent, deliver quality work, be honest. Upon giving further thought to their responsibilities, reading up on — or taking a course in — engineering ethics, or better yet, just engaging colleagues in conversation on the topic, they may add to the list: promote the public good, do not let one's product harm people or the environment. It is not easy to fulfill these obligations as an employed engineer. A supposedly efficient division of labor has led to narrow tasks, and what
has not been thought of by the task assigners falls between the cracks or is written off as something that can be postponed. Worse yet, if a safety problem originates in a different department, or if it could be readily remedied there, it is very difficult to obtain quick resolutions via official, interdepartmental channels. Direct, personal links among engineers work much better. Given the specificity of today’s engineering tasks, engineers and their managers may want to consider other attributes necessitated by the very complication of the modern work environment. Such strengths as integrity, foresightedness, autonomous ethical decision-making, and moral courage are essential to the character of the responsible engineer. Good managers avoid fragmentation of the workplace, but where rivalries between departments or agencies are great, it persists. The story of the Challenger disaster, for instance, tells how the several engineering groups and their counterparts at NASA sites felt differently about risks in general, and about proceeding with the shuttle’s launch under problematic conditions in particular (Feynman, 1988). Engineers have to muster moral courage if they want to overcome such barriers and stand up for what is the correct thing to do: for example, to protect the right of the owner or operator of a product not to be harmed by its use, or at least to issue timely warnings of potential hazards to those most directly affected (including the captain and crew of a ship or space shuttle). The ability to do that is the true mark of professionalism and a test of a professional’s integrity. The concept of integrity is important to good engineering in several contexts. The design process must exhibit integrity, in that it must recognize that no single function can be governed well without consideration of all other functions. The product itself must exhibit integrity, in that it must function well with respect to several attributes, such as efficiency, economy, safety, and reliability, while at the same time being environmentally benign and aesthetically pleasing. It would also be difficult to associate the concept of integrity with a product for killing or maiming humans, such as land mines, or a solvent to be exported knowingly for use in preparing illicit cocaine. Finally, integrity should be recognized as a main attribute of character — the character of the engineer. Integrity implies that the engineer as a matter of habit feels accountable for her or his work, and therefore exhibits the usual attributes of ethical conduct. Another important attribute of character is foresight, the ability and the effort to look ahead. It quite naturally leads to the exercise of caution, an indispensable ingredient of responsible experimentation. Foresight should encompass the whole range of technological activities, from design to recycling and the effects of external influences. It also means that it is not right for engineers to disregard any problematic design feature or manufacturing process, even if they do not
themselves control it. Hoping that there is some other engineer down the line of production who has been designated to check for errors is not sufficient, because even if there is, the error may still be overlooked again. It is necessary to personally bring such cases to the attention of colleagues and superiors. They, in turn, must be receptive to such reports of concern. If a recall is necessary — from the design office, the shop floor, or the customer's premises — it had best occur as early as possible! An ethical corporate climate helps, of course. But it is particularly important not to fall into the trap of thinking that an organization could rely on conveniently legalistic compliance strategies. These appear to be much favored by lawyers and executives, who can then lay sole blame for organizational failures on individuals who have supposedly acted contrary to the organization's rules, whether these actually promote ethical behavior or not. Specific compliance rules are suitable only in very structured settings such as purchasing and contracting. Generally, a philosophically based ethics strategy is more effective for the many "experimentally" based, open-ended functions. Such a strategy is based on autonomous ethical decision-making, not on mere observance of laws and regulations. This strategy is also the best way to impress on engineers that responsibility for success or failure is mostly not divisible, either on the job or elsewhere in life. There is another problem that employed engineers face, and it has to do with the fact that managers and engineers may interpret information coming by way of the feedback loop differently. This was pointed out at the IFAC Congress by panelist Mike Martin (1996), who gave the Challenger case as an example. O-rings that were supposed to seal segments of the booster rockets had shown signs of erosion after prior launches at low temperatures. A redesign of the rocket was already underway, but so far a recall had not been issued. The engineers asked that the spaceship not be launched at the very low temperatures expected at the launch site. Management (also engineers, but wearing their "management hats") interpreted the experiences with prior launches differently: no launches had failed, and as long as failure could not be forecast with certainty, the planned launch should proceed. The engineers were not prepared for such a response, nor were they uniformly firm in standing up to management pressure. So the "experiment" was carried out, apparently without the commander and the crew of the Challenger being notified of the situation — an example of "experimentation without consent of the subjects involved".
10. Conclusion

The title Ethics on the Feedback Loop had the purpose of drawing attention to the possibility of acting on ethical
convictions, not only by truth-telling after a problem has arisen, but every time one examines a design, a production process, or a sales venture. Engineers constantly draw on their own, or the organization's, experience. This very activity demonstrates the existence of a feedback loop in learning. Feedback should be used not only to improve strictly technical matters, but also to learn about their social implications. Ethics and strength of character are critical in putting this learning to use. Several texts discuss ethical decision-making in the engineering milieu, e.g., Harris et al. (1995), Unger (1994), and Martin and Schinzinger (1996). Here there is just enough space to provide a summary through a sampling of pithy expressions:

"Ask not only can it be done, but also should it be done." (Wujek, 1996)

"The world needs engineers with the moral courage to speak the truth."

"Character counts!" (Michael Josephson of the Joseph and Edna Josephson Institute of Ethics, Marina del Rey, CA, USA)

"Ethics is not for wimps." (Michael Josephson)

Acknowledgements

Michael J. Rabins of Texas A&M University invited the author to address the subject of engineering ethics at the 13th World Congress of the International Federation of Automatic Control in San Francisco, 1996. M.G. Rodd, editor of Control Engineering Practice, suggested a publishable version. This paper was begun while the author was still at Meiji Gakuin University (MGU) in Japan as a visiting professor and director of an Education Abroad Program of the University of California on MGU's Totsuka campus. The reviewers of the paper made helpful suggestions. The author thanks all of the above for their help.
References

Cheng, P., Kubo, H., 1988. Unexpectedly large dose rate dependent output from a linear accelerator. Med. Phys. 15(5), 766-767.
Feynman, R.P., as told to Ralph Leighton, 1988. What Do You Care What Other People Think? W. W. Norton and Co., New York.
Galbraith, D.M., Martell, E.S., Fueurstake, T., Norrlinger, B., Schwendener, H., Rawlinson, J.A., 1990. A dose-per-pulse monitor for a dual-mode medical accelerator. Med. Phys. 17(3), 470-473, May/June.
Harris, C.E. Jr., Pritchard, M.S., Rabins, M.J., 1995. Engineering Ethics. Wadsworth Publishing Co., Belmont, CA.
Jacky, J., 1989. Programmed for disaster. The Sciences 29(5), 22-27, Sep/Oct.
Leveson, N.G., Turner, C., 1993. An investigation of the Therac-25 accidents. Computer (IEEE), July 1993, pp. 18-41.
Loyd, M., Chow, H., Laxton, J., Rosen, I., Lane, R., 1989. Dose delivery error detection by a computer-controlled linear accelerator. Med. Phys. 16(1), 137-139, Jan/Feb.
Martin, M.W., 1996. Integrating engineering ethics and business ethics. Presented on the panel Ethics on the Feedback Loop at the 13th IFAC World Congress, San Francisco, 1996.
Martin, M.W., Schinzinger, R., 1996. Ethics in Engineering, 3rd ed. McGraw-Hill Book Co., New York.
Perrow, C., 1984. Normal Accidents: Living with High-Risk Technologies. Basic Books, New York.
Peterson, I., 1995. Fatal Defect: Chasing Killer Computer Bugs. Times Books (Random House), New York.
Rose, B.W., 1994. Fatal dose. Saturday Night, Toronto, Canada, June 1994, pp. 24+. Also in Social Issues Resource Series, vol. 4, no. 28.
Tempest, R., 1984. India plant safety report had warned of gas leak. Los Angeles Times, Dec. 11, p. 1.
Unger, S.H., 1994. Controlling Technology: Ethics and the Responsible Engineer, 2nd ed. John Wiley and Sons, New York.
Whitbeck, C., 1990. Ethics as Design: Doing Justice to Moral Problems. Texas A&M Center for Biotechnology Policy and Ethics, 1692-1. See also quotes in V. Weil (1992), Engineering Ethics in Education, p. 3, Center for the Study of Ethics in the Professions, Illinois Inst. of Tech., Chicago.
Wujek, J.H., 1996. Panelist on the topic Ethics on the Feedback Loop at the 13th IFAC World Congress, San Francisco, 1996.