Social Science & Medicine 98 (2013) 371–378
‘Wicked’ ethics: Compliance work and the practice of ethics in HIV research

Carol A. Heimer a, b

a Department of Sociology, Northwestern University, 1810 Chicago Ave., Evanston, IL 60208, USA
b American Bar Foundation, 750 N. Lake Shore Dr., Chicago, IL 60611, USA
Article history: Available online 20 December 2012

Abstract

Using ethnographic material collected between 2003 and 2007 in five HIV clinics in the US, South Africa, Uganda, and Thailand, this article examines “official ethics” and “ethics on the ground.” It compares the ethical conundrums clinic staff and researchers confront in their daily work as HIV researchers with the dilemmas officially identified as ethical issues by bioethicists and people responsible for ethics reviews and compliance with ethics regulations. The tangled relation between ethical problems and solutions invites a comparison to Rittel and Webber’s “wicked problems.” Official ethics’ attempts to produce universal solutions often make ethics problems even more wickedly intractable. Ethics on the ground is in part a reaction to this intractability.

© 2012 Elsevier Ltd. All rights reserved.

Keywords: Ethics regulation; Medical research; Globalization; Compliance; HIV/AIDS; Uganda; South Africa; Thailand
Introduction

From the start, HIV presented ethical challenges to scientists and caregivers, politicians and religious leaders, patients and families. The disease is infectious and essentially always fatal. Does that mean infected people should be quarantined? Should drug testing and approval processes be streamlined to take account of the disease’s severity? Should testing and treatment of pregnant women be required to prevent transmission to their babies? Should international treaties adjust patent protections to permit poorer countries to produce or purchase inexpensive HIV drugs for their citizens (as occurred in the Doha Declaration modifying TRIPS)? Because of its association with sexual activity, homosexuality, and drug use, HIV has been stigmatized. Should confidentiality therefore be especially carefully guarded to encourage testing and treatment? Or should health workers be permitted to disclose someone’s infection so others can protect themselves? The ferment around HIV led to substantial ethical and legal innovation. Voluntary counseling and testing protocols were created, drug research approval processes were adjusted, patients and affected communities were incorporated into research planning, and international research, treatment, drug procurement, and
funding programs were established. But as scholars and practitioners know well, implementation inevitably creates a gap between the designed system of legal or ethical regulation and what people actually experience. (This gap between “law on the books” and “law in action” animates much sociolegal research.) This article examines the gap between official ethics (“ethics on the books”) and ethics on the ground (“ethics in action”) in HIV clinics. What I find is broadly consistent with other studies of the gap between rules and their implementation (see, e.g., Lipsky, 1980, and subsequent research on street-level bureaucracy). My argument goes beyond the usual gap argument, though. I find that despite its many accomplishments, ethics regulation often makes ethical problems more intractable. This tangled relation between ethical problems and solutions invites a comparison to “wicked problems” (Coyne, 2005; Rittel & Webber, 1973). In labeling problems “wicked” (the contrast being “tame”), Rittel and Webber were reacting against attempts to tackle policy problems with the rational methods of the natural sciences and engineering. Social policy problems cannot be solved that way, they argued, because our descriptions of problems strongly shape possible solutions and attempted resolutions inevitably lead to unintended consequences. Ethics problems share these characteristics of wicked problems. Official ethics’ attempts to produce universal solutions often make ethics problems more complicated. Official ethics defines problems narrowly and then diverts scarce attention and resources to compliance work, making dissent difficult. Ethics on the ground is, in part, a reaction to this “wickedness.” I outline, and offer evidence
for, a two-step process by which official ethics modifies our understanding of what is important, substituting bureaucratic ethics for professional ethics or personal morality. After introducing the larger study from which the data are drawn, I offer descriptions of official ethics and ethics on the ground. Next I tease out the core differences between ethics on the ground and official ethics and analyze the relationship between the two systems, because ethics on the ground is in part a response to official ethics. The article also fleshes out the comparison between official ethics and wicked problems and explains how official ethics becomes “wicked.”
Background: the project, the sites, and the data

This article draws on ethnographic data and interviews gathered for a project on the legalization of medicine. With clinical guidelines, research protocols, and the other trappings of evidence-based medicine, HIV clinics are good places to study legalism. I was interested from the outset in issues about the gap between rules on the books and rules in action, particularly when rules are transported from rich countries to poorer ones. Ethics regulation is a particularly important element of the increased legalism in medicine and the global diffusion of legalism. Many formal rules about the ethical conduct of research are not indigenous to the clinics. Instead, they originate in the global North, particularly in US government agencies, and are transported to clinics as part of the package of requirements of research sponsorship. Because I was interested in variations in the resources available to comply with various rules, including ethics-related mandates, I designed the research to include HIV clinics in countries that differed in wealth and levels of development. The ethnographic research was conducted in two HIV clinics in the US and one each in South Africa, Thailand, and Uganda.

The project underwent ethics reviews at the American Bar Foundation and Northwestern University. It also received ethics approval from the Ugandan National Council for Science and Technology and from ethics boards associated with the clinics where I gathered data.1 I used a collective oral consent procedure, with provisions for individuals to opt out, that enabled me to observe clinic staff as they worked; people gave individual, written consent for the 114 formal interviews. (Most of the formal interviews were with experts not affiliated with the clinics and are not used in this article.) Interviews were audio recorded and transcribed. Fieldnotes and interview transcripts were coded using Atlas.ti.

This article focuses especially on ethical regulation of clinical research. All five clinics in the study were engaged in both treatment and research, though, and as readers will see, the boundary between the two is blurry. All of the clinics had hospital and university affiliations. All but the Thai clinic received American government funds for treatment (as well as receiving funds from local sources), and all five clinics received some of their research funding (directly or indirectly) from the US National Institutes of Health (NIH), necessitating adherence to American government rules about the ethical conduct of research. Because some research results would be used in drug approval processes, the clinics were also attentive to the rules of the US FDA. In each field site, one or two members of my team conducted the bulk of the research while others visited the site for brief periods.2
Notes:
1. Because I pledged not to identify the clinics where I collected data, I cannot name the specific local bodies that approved the project.
2. The fieldwork was conducted by Carol Heimer, JuLeigh Petty, Lynn Gazley, Rebecca Culyba, Enid Wamani, and Dusita Pheungsamran. For consistency of authorial voice, I have used “I” throughout this article, though “we” would be more correct in discussions of data collection.
The fieldwork in the American clinics was of longer duration (just short of two years in US1; thirteen months in US2) but was less intensive (we were not in the field every day). We spent four months doing intensive fieldwork in Thailand, Uganda, and South Africa, with multiple two-week visits before and after. We began fieldwork in US1 in September of 2003 and last revisited our sites in Uganda, Thailand, and South Africa in June–August of 2007. We shadowed staff as they conducted the study visits of clinical trials; examined patients coming for treatment; made phone calls; went over records with research monitors, site visitors, and accreditors; and attended endless meetings. We gathered copies of forms (for recording data, reporting serious adverse events, referring patients, etc.) and policies used in clinic work (standard operating procedures, clinic guidelines, etc.). We talked with and observed staff in a variety of positions at all levels: physicians, principal investigators, nurses, administrators, social workers and counselors, receptionists, and data processors. The information we gathered allowed us to observe the operation of both “official ethics” and “ethics on the ground,” the interdependent systems of ethics described below.

Describing “official ethics” and “ethics on the ground”

A system of ethics generally has several components: (1) a set of more or less formal principles, (2) practices of citing such core principles, (3) a translation of principles into statutes, regulations, guidance, policies, and standard operating procedures, and (4) methods for policing and enforcing adherence to the rules. Although both official ethics and ethics on the ground have these components, they are easier to identify in official ethics. This is partly because ties to research administration make official ethics both more formal and more uniform than ethics on the ground.

The elements of official ethics

Accounts of the history of bioethics appear in the writings of bioethicists and social scientists (see especially Evans, 2011; Halpern, 2004; Stark, 2012), as well as in materials disseminated by research administrators. Even in abbreviated historical treatments it is de rigueur to cite the Nuremberg Code, the Belmont Report, or the Helsinki Declaration and to list key ethical principles such as autonomy, beneficence, non-maleficence, and justice. Such foundational documents are cited repeatedly and ritualistically in written and oral discussions of ethics, with copies often handed out in training sessions (as occurred in one clinic I studied) and referenced or reproduced on websites. More importantly, the core principles are translated into policy by governments and government agencies, health care organizations, and NGOs. Especially consequential is the US federal regulation on research involving human subjects (45 CFR part 46), often referred to as the “Common Rule” because it must be adhered to by all US government agencies conducting human subjects research and by anyone receiving American government funding for such research. These rules now apply widely beyond US borders, including to all of the clinics I studied. The Office for Human Research Protections (OHRP) in the US Department of Health and Human Services is charged with monitoring and enforcement, but much of this work is delegated to research institutions that, by law, must have Institutional Review Boards (IRBs).
Policing and enforcement have become more complex over time: local IRB policies are carefully crafted to conform to OHRP “guidance”; training programs and certifications exist for researchers and their staffs and for IRB workers and administrators; and the human subjects protection programs of research centers and universities are now accredited. Because they are
deeply worried that their research programs might be suspended for ethics violations, a point elaborated below, research institutions are careful to demonstrate compliance. Many new positions have been created for compliance workers – in the research administration offices of major research hospitals and universities, but also in dispersed sites such as hospital clinics and individual research projects. The attentiveness of IRBs to OHRP invites comparison with the mutual orientation that occurs in other legal and regulatory systems, where definitions of compliance are worked out in the back and forth between courts, regulatory agencies, and regulated organizations (see, e.g., Edelman, Krieger, Eliason, Albiston, & Mellema, 2011). Ethics regulation is somewhat unusual, though, because it is a prospective “licensing” system. This means enforcement occurs through pre-reviews of research projects rather than through retrospective court cases. In both prospective and retrospective regulatory systems, though, once the working definition of rules is settled, organizations focus on demonstrating compliance. The result is systems focused on form rather than content. In this kind of ethics regulation, it is hardly surprising that a huge proportion of OHRP and IRB business is about adjusting the wording of consent forms.

Official ethics in the clinics

Official ethics is ubiquitous in the lives of researchers and shows up in ethics training programs for research staff at all levels. In the US, research staff typically are required to take computerized tests, with completion verified by project administrators and the local IRB. When requirements change, staff members must undergo a fresh round of training and testing, as occurred in US1 during my fieldwork. In Uganda, I attended ethics training alongside new research staff. This training did not include the computerized test modules used in US1, but covered many of the same topics and cited the same touchstones of ethics scandals, commissions and reports, and principles of research ethics. Looking only at the content of these training sessions, one would not have been able to guess they occurred in countries that differed so profoundly in wealth and cultural heritage. Likewise, submissions to IRBs were quite similar across settings. Researchers were required to discuss risks faced by participants and how they would mitigate risks, any benefits offered to participants, and arrangements for informing potential participants and seeking their consent. Although the key points remain the same, with mandated topics, wording, signatures, dates, and so forth, the ritual of seeking consent is enacted more elaborately in the clinics outside the US (Heimer, 2012), and in Uganda was often spread over several days with multiple readings of the consent form. A signed consent form for each research participant was carefully stored in the project files, along with the annual approvals from the IRB verifying that the research met regulatory and legal requirements. These documents act as a shield for the organization should questions be raised. When research monitors came to inspect a research team’s work, they always checked consent forms and other regulatory documents. Each of the HIV clinics I studied had several internal staff positions dedicated to tracking regulatory tasks and associated documentation and demonstrating compliance with ethics rules.
These workers also tracked deadlines for annual reviews of the clinic’s projects on a spreadsheet or calendar and sent reminders to research staff. The adoption of practices, forms, and structures that closely mimic those used in highly reputable peer organizations is part of the bid for legitimacy that drives institutionalization (DiMaggio & Powell, 1991; Dobbin, Simmons, & Garrett, 2007). Ethics compliance work is now fully institutionalized in reputable research enterprises.
But although official ethics has induced uniformity across settings, this uniformity is accomplished with considerable awkwardness and at great cost. Because requirements were initially formulated for US sites, they often do not mesh smoothly with the customs of other countries. Verification of age provides a good example. In clinical research, “adult” has been operationalized as “at least 18 years.” In Uganda, where social (though not legal) adulthood arrives earlier than in the US, this means women who already have several children can be barred from participation in clinical research on mother-to-child transmission of HIV because they do not meet the US definition of “adult.” In a triumph of form over content, biological age trumps social adulthood even though social adulthood is more easily verified than biological age, as this excerpt from my field notes shows:

This question [on a recruitment form] asks “Is the mother 18 years or older?” How do they know if the woman is 18, [the trainer] asks. Often women don’t know their exact birthdate, and not even the year of their birth. [The counselors] talk about other ways of assessing age. If the woman is gravida 5 and says she is 20, someone says, then she is really not likely to be under 18. If she doesn’t know her age, then they try to establish age with reference to historical events, such as when Museveni came to power [in 1986]. [The trainer] contrasts the situation here with the US, where age is usually easy to establish since everyone has [drivers’] licenses, ID cards, birth certificates, etc. Here there is likely to be more ambiguity about age, and it’s important for “regulatory” to know about those ambiguities and how they were resolved. So, [the trainer] emphasizes, be sure to document. Not everyone can show an ID. You can’t just say the participant doesn’t know how old she is; instead you have to add that you estimated her age by historical references.

Researchers in all five clinics spoke frequently (and often unhappily) about the burden of compliance. But compliance was far more difficult to achieve in poor countries than in rich ones, as one quality assurance worker in the Ugandan site explained: “If people don’t have any place to sit while they do their work, they are less likely to fill out forms correctly or to check them. If there is not a clock, they can’t document the time if they don’t themselves own watches. If there is not a locking cabinet, they can’t store their work securely.” Where workers are thin on the ground and resources (computers, paper, file cabinets and storage space, electricity) scarce, spending lavishly on demonstrating compliance can only be justified when the flow of resources to the clinic truly depends on it. Having experienced unusually frequent monitoring visits and research monitors’ skepticism about the capacities of clinics “in the boonies,” one staff member in Thailand explained, clinic workers in poorer countries learn to do exactly what research protocols, good clinical practice, and the rules for ethical conduct of research require. As a proportion of available resources, compliance was far costlier in poor countries than in richer ones. To be absolutely clear, though, it is demonstrating compliance rather than conducting research ethically that is so costly. Official ethics emphasizes demonstrations of compliance with the standards for the ethical practice of research – the form, often literally, rather than the content.
Official ethics is thus a universalizing system (Fox & Swazey, 2005) that focuses on what will appear on the radar screen of research administrators and the research ethics bureaucracy. Ethics on the ground is much more about what is local and particular and is off the radar screen.

The elements of ethics on the ground

Unlike official ethics, ethics on the ground is typically not marked by official statements of principles and may not even be explicitly
identified as ethics. Without these markers, how can we tell that what we are seeing is truly ethics? Because uniformity in language is a way of marking turf, we expect more uniformity where people are making claims on resources and establishing routinized practices – in official systems, in short. And indeed when we look at ethnographic studies of ethics in medical settings (see Bosk, 1999, for a useful list), those observed do not usually mark their ethical responses by labeling them explicitly as such. Instead we see people troubled by a situation, circling back to the problem in conversation with colleagues, consulting others about what they did in similar circumstances – just what I observed in the HIV clinics. Citations of autonomy, beneficence, and non-maleficence are replaced with debates about what is fair or unfair, moral or immoral, right or just plain wrong. The emphasis is more on “doing ethics” than discussing ethics (Molyneux & Geissler, 2008) and entails considerable boundary work (Wainwright, Williams, Michael, Farsides, & Cribb, 2006).

In addition to citing fairness or moral appropriateness, clinic workers also pointed to moral exemplars. In US1, for instance, the site principal investigator was known for taking patient welfare especially seriously; when the interests of a study and the needs of a research participant conflicted, staff members sometimes asked each other what the PI would do. This doctor’s impeccable scientific credentials made his concern for patient welfare especially noteworthy. Staff also cited clinic practices as precedent, particularly when gaps in official rules left them uncertain about how to proceed or when actions they felt to be ethically correct clearly violated official rules. Although the principles of official ethics are transformed into policies, statutes, rules, and standard operating procedures by working through official channels, the more inchoate moral sentiments of ethics on the ground get transformed into decisions and courses of action in discussions among colleagues. These on-the-spot decisions about particular instances often cumulate over time into routinized but not fully codified ways of doing things. The policing and enforcement of the informal norms of ethics on the ground are quite different from the policing and enforcement of official ethics. There were no records kept, no schedules of submissions to official bodies, no visits by monitors asking to inspect forms. Nor did people worry about threats to the organization, except when they felt compelled by ethics on the ground to do things that clearly did not comport with the law (for instance, when “extra” drugs returned by patients were illegally dispensed to fill gaps or to ensure that a person with a needle-stick injury received prophylaxis immediately). In essence, enforcement occurred informally, perhaps especially through the gossiping of colleagues morally offended by a peer’s actions.

Ethics on the ground in the clinics

Because ethics on the ground develops to fill the gaps opened up by official ethics, it tends to be less fully routinized than official ethics. Procedures for recruiting and retaining research subjects and for protecting confidentiality, two foci of official ethics, offer useful lenses for comparing the two systems of ethics. In thinking about subject recruitment and retention, official ethics starts with research projects. Study protocols specify who can and cannot be research subjects in a study.
These rules of inclusion and exclusion focus on scientific questions (the medical conditions, drugs, or therapies being studied) and matters of safety (pregnant women are excluded if a drug might damage fetuses). Official ethics supplements study protocols by alerting researchers to such important issues as coercion. In contrast, ethics on the ground starts with patients and asks which studies would be good for a particular patient or group of patients rather than which patient or group would be good for the study.
In all five clinics, researchers worried about finding ways to meet the needs of existing patients – not the long-term needs of future patients, but the short-term needs of patients already attending the clinic. Rather than a search process (of finding patients for studies), then, ethics on the ground led clinics to engage in a complicated matching process (of linking existing patients to current or future studies or treatment programs) that took the needs of patients as seriously as the needs of studies. Would a new study ensure continuity in drug supply for a patient just completing a previous study? Would a new study at least provide the laboratory tests needed to track the progress of HIV and adjust medications? If a regimen was no longer “working,” would enrolling the patient in an adherence or drug resistance study help clinicians determine what had gone wrong and which drugs might preserve the patient’s health? In the long run, the practice of looking for ways to meet patients’ needs once a study had ended became somewhat routinized – but in different ways in the four clinics with a lot of post-trial patients. (Issues about how to manage post-trial patients had not yet arisen in the South African clinic’s relatively young research program.) The practices of US1, which had mainly insured patients, continued in the vein described above. In US2, with mostly indigent patients, staff members also strategized about how to get free drugs through the state-level AIDS drug assistance program (ADAP) and drug company philanthropic programs, thinking carefully about sequencing of requests and which patients to funnel to which programs. The Thai clinic created a follow-up observational study, though the inclusion criteria (only people who had already been in studies at this clinic) betrayed its true purpose.

In Uganda, concerns about patient welfare led to significant modifications in the clinic agenda. In one meeting, staff discussed how to assess need for infant formula. Need is high and errors in assessing need can result in a baby being hospitalized for malnutrition. But because supplies are so restricted, a universal program is not possible. Because breastfeeding is the norm, asking a father for money for formula is tantamount to disclosing HIV infection. Mothers, particularly in non-marital or extra-marital unions, fear that fathers will then abandon them and the children. “This is a big issue,” staff conclude. “Ethically, [we] cannot have a baby starving with the mother coming in week to week for study visits.” In another meeting, the senior management team discussed how to balance [care] “programs” and research. “Our business is mainly research,” one staff member asserted. The organization’s leader retorted: “We can’t do research unless we also have care.” Discussing the deaths in one clinical trial, he added: “Those who died didn’t die from the research but from the disease – because we didn’t then have the capacity to treat them.” In these discussions, no mention was made of the principles and policies of official ethics, which do not speak to these contingencies in any case. Admittedly, though, an element of self-interest pervades these worries about research subjects’ welfare. Good research subjects are valuable commodities. At every research meeting, staff strategized about how to enroll appropriate research subjects. This is not simply about finding people with the right physical characteristics.
Staff members especially value “professional research subjects,” a phrase signaling both serial research participation and a serious approach to being a research subject. Professional research subjects show up regularly for appointments, report honestly and in detail on their behavior and symptoms, and adhere to the protocol as faithfully as possible. People who are treated well make more reliable research subjects, one US1 research nurse explained. “You don’t give up on a study patient until they are in the ground,” she declared, meaning simultaneously that researchers endeavor to keep subjects enrolled in (mutually beneficial) studies, to track research subjects as they cycle on and off studies, and to provide
them with good care. The staff I studied, like research coordinators in contract research organizations (Fisher, 2006), felt they owed research subjects much more than official ethics would suggest they were obligated to offer. But they also felt they owed something to people who were not especially good research subjects, including the Ugandan mother whose case spurred the discussion about whether a clear policy on dispensing formula was possible.

Ethics on the ground and official ethics also have different perspectives on the practical implementation of requirements for confidentiality. The US attentiveness to confidentiality has resulted in a host of rules and practices designed to protect health-related information. Especially influential are the HIPAA (Health Insurance Portability and Accountability Act of 1996) Privacy Rule and rules about confidentiality of HIV testing and treatment, some emanating from the US, others enacted locally. Like the Malawian HIV counselors interviewed by Angotti (2010) and the Ugandan health workers studied by Whyte, Whyte, and Kyaddondo (2010), the clinic workers I talked with encountered situations where violating confidentiality seemed the lesser of two evils. What should they do about the many cases where HIV-infected patients had not disclosed to sexual partners and were not consistently practicing safe sex? In US1, for instance, one 37-year-old Latino man was having unprotected sex with his 17-year-old girlfriend, whom he intended to marry. When the clinic worker urged him to tell her that he had HIV, the patient retorted: “You are denying my dream of getting married and having children.” American clinics have no clear policy about how to handle this conundrum. This is the sort of patient “you want to hit or hide from,” the staff member commented.

In South Africa, staff valiantly tried to hammer out provisional policies, but the mind-boggling variety of scenarios staff actually encountered made policymaking difficult. Could a mother be informed that her non-disclosing but very ill adult child had HIV and that she needed to use universal precautions in cleaning his sores? What should a counselor do when a frightened patient asked her to create a false paper trail because his employer insisted on seeing his test result? Could a pediatrician insist that a widowed father disclose the HIV status of his child to his girlfriend who cared for the child? Could staff disclose to a father if the mother refused to tell him that their child had HIV? Could a husband be informed of the results of an HIV test on his unconscious wife? South African law does allow some bending of the rules on confidentiality for needle-stick injuries and for people endangered by a non-disclosing, HIV-infected sexual partner. But entirely reasonable legal procedures can prove impractically cumbersome and time consuming in clinic settings. HIV can easily be transmitted before staff can complete the legal procedure. Patients were sometimes too ill, embarrassed, fearful, stubborn, or selfish to do their part. That left staff trapped between their own very real fears of getting themselves or the clinic into trouble and their equally real fears for the health and safety of innocent third parties. “It upsets [me] to have to teach about confidentiality – not that confidentiality isn’t important,” the psychologist confessed. “It makes [me] angry to have to put people at risk because others won’t disclose,” she added, before apologizing for her emotional outburst.
Discussing provisions for “silent tests” in cases of needle-stick injury, the counselors attending a training session laughed nervously, unsure whether the hospital’s routine fully conformed to the law. As these examples show, many practices of ethics on the ground are off the radar screen both because they are evolving, situationally specific adjustments and because they have an uneasy relationship with official ethics. People try to follow the dictates of official ethics because they know they – and the clinic – can get into trouble if they do not. But if adherence to the dictates of official
ethics is necessary for the welfare of the clinic, it is not ethically sufficient for the consciences of caregivers. And that ever-present gap between necessary and sufficient is what makes official ethics “wicked” and ethics on the ground so important.

Comparing official ethics and ethics on the ground

What, then, are the core differences between these two approaches to ethics? Official ethics and ethics on the ground differ along four main dimensions. They differ, first, in how they conceive the social relationships of clinical research and medical care and who is the focus of ethical attention. Second, they have divergent understandings of where ethics comes from and, third, of who has ethical agency. Finally, they differ in conceptions of how ethical obligations are to be met.

Who is the focus of ethical attention?

Ethics on the ground sees people as having multiple roles. Most basically, research subjects are understood to be patients as well as research subjects because it is HIV that brought them to the clinic. Moreover, research subjects are assumed to have partners, family members, and other important ties, and to face constraints associated with employment. Likewise, ethics on the ground may not conceive researcher and caregiver roles as fully distinct, and indeed often sees research as a way of caring for patients (Easter, Henderson, Davis, Churchill, & King, 2006; Hedgecoe, 2006). In short, ethics on the ground conceives social roles as overlapping and relationships as multi-stranded. In contrast, official ethics abstracts out this complexity. Caregiving and research are seen as largely distinct activities that may conflict, with enthusiastic researchers compromising patient welfare and compassionate caregivers compromising the integrity of scientific investigations. Indeed, some research practices such as double-blind procedures for randomizing research subjects to the arms of a study are designed, in part, to protect against biases introduced by the mixing of roles. But rather than seeing research and caregiving as conflicting activities, ethics on the ground suggests researchers are better scientists and patients are better research subjects if this complexity is acknowledged. “This is not an assembly line – give me your data and get out the door,” said one US1 research nurse, explaining how making the environment comfortable encouraged people to be “good patients” and “good research subjects.”

We should not overstate the uniformity of ethics on the ground, though. Although many clinic workers thought being mindful of potential conflicts would enable them to guard against the exploitation of patients, US2 staff did not agree. As a matter of clinic policy, each US2 research subject was assigned a primary care physician not involved in the research project. The clinic director “tell[s] the patients that the researchers will have some loyalty to the study, and that they need a primary care provider to be the person who looks out for their own best interest.” Official ethics usually focuses on individuals as the entities needing protection, because it is individuals who are research participants. In the clinics I studied, though, ethics on the ground expanded its focus to include the social circle surrounding the research subject. This expanded understanding of who is the proper subject of ethical concern may be especially likely in HIV care because staff worry about transmission to sexual partners and children.
In both South Africa and Uganda, clinic staff also argued that treatment was more likely to succeed if the whole family received care. People would then be more likely to disclose that they had HIV and to have “treatment buddies” to support them. Moreover, by caring for parents, clinic staff could increase the odds
that there would be someone alive to care for the HIV-infected youngsters. But this welcoming stance created fresh problems in environments of scarcity. In both Uganda and South Africa, staff struggled with how to define “family.” As they crafted decision rules about whom to include and how to proceed as resources became available, one Ugandan doctor stressed that “‘family members’ must be kept within reason – it can’t include every last cousin!” Once the doors are opened to thinking of people in the context of their relationships, it is hard to establish firm boundaries. This was especially the case in Uganda and South Africa, where medical services were scarce and demand was very high. If patients’ and research subjects’ families were included, surely it was not appropriate to exclude the families of staff members. And as staff members acknowledged that good health depended on more than skilled medical care and reliable access to drugs, clinic boundaries in South Africa and Uganda expanded (at least temporarily) to include feeding programs, income generation programs, and psychosocial counseling.

Official ethics, working through government agencies and research administration offices in hospitals and research institutions, in fact seems as much focused on protecting organizations as on protecting research subjects or patients. In one telling exchange in a US1 research training session, research administrators and researchers clashed over how to deal with expenses not covered by either the research project or some other payer (such as an insurer). The research administrators, who had opened the meeting by talking about the increased intensity of regulation and the huge penalties for noncompliance, adopted the line of official ethics. From their perspective, the key issue was the organizational responsibility to inform patients of such costs before the consent form was signed. For researchers, adopting the perspective of ethics on the ground, the issue was instead that many patients could not afford to absorb the expense of costly medical supplies and procedures. Official ethics made it nearly impossible for the research administrators to see what the researchers were worried about.

Universities depend on research dollars, which are now a substantial proportion of their budgets (Bledsoe et al., 2007). Ethics violations threaten the flow of funds. When a research subject is seriously harmed or dies, the ensuing investigation can shut down a university’s entire research program, as occurred with the deaths of Ellen Roche and Jesse Gelsinger (Steinbrook, 2002). Rare though they are, these few suspensions of university research programs have made universities exceedingly concerned about ensuring full compliance with human subjects regulations, whether individual researchers believe these rules focus on ethically significant matters or not. Elliott worries about whose interests are served when bioethicists occupy positions in hospitals, pharmaceutical companies, and regulatory organizations: “If you wear the team colors long enough, you feel like part of the team” (2007, p. 43).

Where does ethics come from?

Given its origin in responses to perceived lapses in professional self-regulation, it may not be surprising that official ethics is a top-down regulatory enterprise. If researchers and the medical establishment cannot be trusted to adhere to high ethical standards, then governments may feel they must create statutes, regulations, and guidance to control researchers.
Although some room has been left for local discretion, the sphere of discretion has decreased over time as the apparatus for licensing researchers and projects and for monitoring compliance has grown more elaborate. My distinction maps roughly onto Kleinman’s (1995, p. 45) distinction between ethics – “the codified body of abstract knowledge held by experts about ‘the good’” – and morality – “commitments of social participants in a local world about what is at stake in everyday experience” – though I place more emphasis than he does on the bureaucratization of ethics.

In contrast to official ethics, ethics on the ground is a more indigenous, local form of knowledge (Christakis, 1992). Although it is necessarily in dialog with official ethics, ethics on the ground is also in dialog with other sources of ethical and moral thinking. Official ethics draws on “ethical experts,” so identified by their positions on commissions and in agencies responsible for monitoring compliance with human subjects regulations. In the HIV clinics I studied, occupants of official positions handed down decisions about compliance with ethics regulations. Ethics on the ground was more likely to cite religious leaders (a pastor was often cited in the South African clinic), people regarded as moral exemplars in the clinics, and others who were not part of the regulatory hierarchy. In ethics on the ground, decisions about appropriate courses of action were made collectively in informal discussion and in regularly scheduled clinic meetings. Although “ethical issues” were not an official agenda item, talk about ethics grew from discussions of other agenda items such as adverse events (untoward events associated with medical products) or the recruitment of research subjects. Rather than being imposed from outside, then, ethics on the ground grew from local moral codes and talk among people who did not conceive of themselves as having any particular ethics expertise. With nearly everyone monitoring themselves and their colleagues, though, there may be little time left for an alternative agenda.

Official ethics has diffused around the globe, following the trail of research funding. Like other global rules, official ethics does not simply fill a vacuum, but is instead mixed with or layered on top of pre-existing indigenous rules and norms (Bartley, 2011). Echoing this perspective, one Thai researcher ruefully noted that Americans forget Thailand conducted ethics reviews long before the imposition of the American IRB system. “Given the emphasis on US laws,” one Ugandan commented, “people might start to wonder if we have any laws here.” Like the human rights ideas Merry (2006) studied, official ethics has not been fully indigenized. It often fails to acknowledge the pre-existing layers, de-legitimating those layers and leaving the work of translation and harmonization to others.

Who has ethical agency?

Related to where ethics comes from is the question of who has ethical agency. Whose moral reasoning can be the basis for action? Who can take that action? Official ethics has brought an expanding staff of specialists to administer its rules and a growing distinction between those who are ethics experts and those who are not. These ethical specialists include trained bioethicists, who work in universities, write scholarly essays on bioethics, do ethics consults in medical settings, and serve on official boards, commissions, and committees. Most of the work of ethics administration is not done by trained bioethicists, though, but instead by certified IRB professionals or certified IRB managers (Heimer & Petty, 2010). This cadre of bioethicists, ethics consultants, human subjects research staff, regulatory affairs specialists on research projects, and accreditation and certification staff is central to the work of official ethics.
They create mandated practices to which ethics on the ground must respond and adjust. Crucial to both systems of ethics, though, is how ethical agency varies with whether a person is a high-ranked professional or a lower-ranked one. Doctors and nurses, to take two key professions, have very different relations to ethics, as Chambliss (1996) points out in his analysis of why nursing is a wonderful profession but a horrible job. Ethical dilemmas, he argues, are a luxury reserved for decision-makers. Nurses and other subordinate workers instead have ethical problems. Nurses are
simultaneously professionals trained to attend to the welfare of their patients, subordinates required to follow the directives of doctors and other high-status staff, and employees obliged to adhere to their employers’ rules. Because an increasing proportion of medical professionals are employees of large medical organizations, more medical staff now have ethical problems. Both ethics on the ground and official ethics confront the effects of the hierarchy of medical professions and the bureaucracy of medical organizations. Yet hierarchy and bureaucracy can lend authority to official ethics, especially when high-status physicians accept that authority. In the clinics I studied, though, physicians were often quite ambivalent about official ethics. Finally, the specialization of ethics work can lead to ethics “deskilling.” When there are official spokesmen for ethics, people learn to doubt their own ethical judgments and those of other ordinary people (Elliott, 2007). Admittedly, official ethics has attempted to correct for this by including non-experts on ethics review panels. Yet because these non-experts are often chosen for convenience, they may not adequately represent the views of people in the medical ward or research project.

How are ethical obligations met?

In the realm of official ethics, many ethical obligations are met legalistically and procedurally. From the perspective of the US1 research administrators cited above, if research subjects have been informed of expenses they will incur and still sign consent forms, then there is no ethical issue. Ethical obligations are met by learning the principles, mastering the rules, and following correct procedures. To be safe, scrupulous organizations will augment, upgrade, and retrain their research ethics administrative staff and seek accreditation of their human subjects protection programs. At lower levels in the institution, ethical obligations are met by avoiding vulnerable populations, by designing consent forms meticulously, and by filing appropriate forms. Ultra-careful regulatory specialists (like one Ugandan staff member) may urge that treatment programs adopt the stricter policies of research, a move that may seem to others an unethical misuse of scarce resources. In contrast, the ethical problems faced by individual workers cannot be solved with proceduralism. Official ethics offers little help with irreducible personal obligations to be caring professionals. Ethics on the ground often counsels a different course of action, sometimes in (quiet) violation of the rules. When ethics becomes the property of organizations, it is ethics on the ground that accommodates people’s concerns about meeting their personal and professional ethical obligations.

Conclusion: why might ethics be “wicked”?

I have depicted official ethics and ethics on the ground as largely separate systems co-existing in the same space. But this overstates their independence. Considering the relation between these two systems of ethics returns us to the sense in which ethics can be “wicked.” Rather than sensitizing people to the full panoply of ethical questions, official ethics focuses attention on some questions and deflects attention from others. The compliance work associated with official ethics in fact consumes considerable time and attention, leaving less for ethics on the ground. Rather than helping us develop moral perception (Andre, 2007), official ethics puts blinders on our eyes.
Official ethics especially focuses on individuals rather than groups and ignores major structural inequalities. It is not exactly that systemic problems are completely overlooked, though. They are in fact sometimes present in codes of ethics. For
instance, the AMA code of ethics states that “A physician shall support access to medical care for all people” (American Medical Association, 2012). But principles not translated into rules and checklists are ignored because no demonstration of compliance is required. Statements advocating universal access thus have no real consequence. The resulting thin conception of ethics seems to assume that ethical problems in research, like those in medicine, are “located at the level of the individual doctor-patient [or researcher-research subject] relationship and consist of the inappropriate values operating within that relationship” (Bosk, 1999, p. 55). In addition to putting blinders on our eyes, then, the embrace of official ethics (and bioethics) by universities and the medical establishment has helped defang other critics. Yet bioethicists are not always on the side of official ethics. Their trenchant critiques of the failings of bioethics (Fox & Swazey, 2005) point to the dangers of “wearing the team colors” (Elliott, 2007), failing to listen (Andre, 2007), neglecting inequalities (Benatar, 2002; Keenan, 2005; Tausig, Selgelid, Subedi, & Subedi, 2006), and ignoring common moral wisdom (Churchill & Schenck, 2005). Still, as long as the checklists and resources remain entirely in the hands of official ethics, research ethics is likely to remain thin.

Official ethics’ channeling of attention and resources is especially “wicked” where ethics rules have been exported to sites where compliance is difficult. The diversion of resources and attention from other ethical issues, and indeed from healthcare, to ethics compliance is far more consequential in poorer countries than in places with more ample resources. Insisting that an IRB be staffed as it would be in the US and meet on a similar schedule can mean a hospital has one less obstetrician available for complicated maternity cases. Staff shortages associated with compliance work came up repeatedly in the Ugandan fieldwork. At the same time, though, the pushback in poor countries may be stronger because medical organizations are less able to insulate themselves from ethical problems that arrive on their doorsteps. Where medical care is scarce and need is great because of high rates of HIV infection, the division between healthcare providers and researchers, on the one hand, and patients and research subjects, on the other, is obscured (Fassin, 2008). When everyone has a close friend or relative who is affected, checklists to remind people to attend to structural inequalities may be unnecessary. Official ethics may siphon off resources in these situations, but it is less likely to succeed in putting blinders on people’s ethical eyes.

So how did this happen? What encouraged the growth of official ethics at the expense of indigenous ethical systems? Gaps between official “law on the books” and “law in action,” of which this is a variant, always raise questions about what rules mean, what actions will count as compliance, and how these matters will be negotiated. Sometimes the meaning of rules gets settled through court cases or other forms of contestation. Relatively minor threats to organizations (for instance, a threat to suspend research funding pending investigation of ethics violations) can lead to vast overreactions and the creation of extra positions and elaborate bureaucracies. Organizational rules are spelled out and checked carefully against government regulations and agency guidance.
In short, organizations create positions and departments to ensure compliance and stave off threats to key resources. And within the new compliance bureaucracies, the interests of newly created occupations, charged with doing compliance work, are married to the interests of the organizations. A small threat is magnified. In essence, via a two-step process, a compliance bureaucracy modifies understandings of what is important. First, it increases the sense of urgency by mandating a series of actions on a rigid timetable. Second, it makes violations consequential for the organization. Most violations in ethics compliance are violations of form rather than substance – missed deadlines or minor errors in documentation. But because organizational consequences can flow
from violations, such violations become important whether they are about the substance of ethics or not. With these two shifts – making things urgent and making them consequential for organizations – official ethics enlarges the gap between ethics on the books and ethics on the ground, reshapes people’s understanding of what is important, and ultimately redefines what counts as ethical. Official ethics substitutes bureaucratic ethics for other systems of ethics – for professional ethics and other collective moral approaches that encourage responsiveness to the situation (Selznick, 1992). It also displaces personal morality. What is lost is the distinction between law and morality and the critical edge morality has traditionally brought to law (Heimer, 2010). With the growth of official ethics, any distinction between law and ethics becomes wickedly difficult to make.

Acknowledgments

The data used in this article were collected for “Clinic-Level Law: The ‘Legalization’ of Medicine in AIDS Treatment and Research,” a larger project generously supported by the American Bar Foundation and the National Science Foundation (Grant No. SES-0319560). I am deeply indebted to Jaimie Morse and Arthur Stinchcombe for careful readings and sage advice. I also thank the seminar audiences at New York University (Wagner Graduate School of Public Service) and the University of California at Berkeley (Sociology Department) and five anonymous reviewers for Social Science & Medicine for exceptionally helpful feedback.

References

American Medical Association. (2012). Principles of medical ethics. Available at http://www.ama-assn.org/ama/pub/physician-resources/medical-ethics/code-medical-ethics.page (accessed 04.03.12).
Andre, J. (2007). Learning to listen: second-order moral perception and the work of bioethics. In L. A. Eckenwiler, & F. G. Cohn (Eds.), The ethics of bioethics (pp. 220–228). Baltimore: Johns Hopkins University Press.
Angotti, N. (2010). Working outside of the box: how HIV counselors in Sub-Saharan Africa adapt to western HIV testing norms. Social Science & Medicine, 71, 986–993.
Bartley, T. (2011). Transnational governance as the layering of rules: intersections of public and private standards. Theoretical Inquiries in Law, 12, 517–542.
Benatar, S. R. (2002). Reflections and recommendations on research ethics in developing countries. Social Science & Medicine, 54, 1131–1141.
Bledsoe, C. H., Sherin, B., Galinsky, A. G., Headley, N. M., Heimer, C. A., Kjeldgaard, E., et al. (2007). Regulating creativity: research and survival in the IRB iron cage. Northwestern University Law Review, 101, 593–641.
Bosk, C. L. (1999). Professional ethicist available: logical, secular, friendly. Daedalus, 128(4), 47–68.
Chambliss, D. F. (1996). Beyond caring. Chicago: University of Chicago Press.
Christakis, N. A. (1992). Ethics are local: engaging cross-cultural variation in the ethics for clinical research. Social Science & Medicine, 35, 1079–1091.
Churchill, L. R., & Schenck, D. (2005). One cheer for bioethics: engaging the moral experiences of patients and practitioners beyond the big decisions. Cambridge Quarterly of Healthcare Ethics, 14, 389–403.
Coyne, R. (2005). Wicked problems revisited. Design Studies, 26, 5–17.
DiMaggio, P. J., & Powell, W. W. (Eds.). (1991). The new institutionalism in organizational analysis. Chicago: University of Chicago Press.
Dobbin, F., Simmons, B., & Garrett, G. (2007). The global diffusion of public policies: social construction, coercion, competition, or learning? Annual Review of Sociology, 33, 449–472.
Easter, M. M., Henderson, G. E., Davis, A. M., Churchill, L. R., & King, N. M. P. (2006). The many meanings of care in clinical research. Sociology of Health & Illness, 28, 695–712.
Edelman, L. B., Krieger, L. H., Eliason, S. R., Albiston, C. R., & Mellema, V. (2011). When organizations rule: judicial deference to institutionalized employment structures. American Journal of Sociology, 117, 888–954.
Elliott, C. (2007). The tyranny of expertise. In L. A. Eckenwiler, & F. G. Cohn (Eds.), The ethics of bioethics (pp. 43–46). Baltimore: Johns Hopkins University Press.
Evans, J. H. (2011). The history and future of bioethics. New York: Oxford University Press.
Fassin, D. (2008). The elementary forms of care: an empirical approach to ethics in a South African hospital. Social Science & Medicine, 67, 262–270.
Fisher, J. A. (2006). Co-ordinating ‘ethical’ clinical trials: the role of research coordinators in the contract research industry. Sociology of Health & Illness, 28, 678–694.
Fox, R. C., & Swazey, J. P. (2005). Examining American bioethics: its problems and prospects. Cambridge Quarterly of Healthcare Ethics, 14, 361–373.
Halpern, S. A. (2004). Lesser harms. Chicago: University of Chicago Press.
Hedgecoe, A. M. (2006). It’s money that matters: the financial context of ethical decision-making in modern biomedicine. Sociology of Health & Illness, 28, 768–784.
Heimer, C. A. (2010). The unstable alliance of law and morality. In S. Hitlin, & S. Vaisey (Eds.), Handbook of the sociology of morality (pp. 179–202). New York: Springer.
Heimer, C. A. (2012). Inert facts and the illusion of knowledge: strategic uses of ignorance in HIV clinics. Economy & Society, 41, 17–41.
Heimer, C. A., & Petty, J. (2010). Bureaucratic ethics: IRBs and the legal regulation of human subjects research. Annual Review of Law & Social Science, 6, 601–626.
Keenan, J. F. (2005). Developments in bioethics from the perspective of HIV/AIDS. Cambridge Quarterly of Healthcare Ethics, 14, 416–423.
Kleinman, A. (1995). Writing at the margin. Berkeley: University of California Press.
Lipsky, M. (1980). Street-level bureaucracy. New York: Russell Sage.
Merry, S. E. (2006). Transnational human rights and local activism: mapping the middle. American Anthropologist, 108, 38–51.
Molyneux, S., & Geissler, P. W. (2008). Ethics and the ethnography of medical research in Africa. Social Science & Medicine, 67, 685–695.
Rittel, H., & Webber, M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155–169.
Selznick, P. (1992). The moral commonwealth. Berkeley: University of California Press.
Stark, L. (2012). Behind closed doors. Chicago: University of Chicago Press.
Steinbrook, R. (2002). Protecting research subjects: the crisis at Johns Hopkins. New England Journal of Medicine, 346, 716–720.
Tausig, M., Selgelid, M. J., Subedi, S., & Subedi, J. (2006). Taking sociology seriously: a new approach to the bioethical problems of infectious disease. Sociology of Health & Illness, 28, 838–849.
Wainwright, S. P., Williams, C., Michael, M., Farsides, B., & Cribb, A. (2006). Ethical boundary-work in the embryonic stem cell laboratory. Sociology of Health & Illness, 28, 732–748.
Whyte, S. R., Whyte, M. A., & Kyaddondo, D. (2010). Health workers entangled: confidentiality and certification. In H. Dilger, & U. Luig (Eds.), Morality, hope and grief (pp. 80–101). New York: Berghahn.