Review

The ethics of secondary data analysis: Considering the application of Belmont principles to the sharing of neuroimaging data

Beth Brakewood a,⁎, Russell A. Poldrack b

a Imaging Research Center, University of Texas, 100 E. 24th Street, Austin, TX 78712, USA
b Department of Psychology and Neurobiology and Imaging Research Center, University of Texas, Austin, TX 78712, USA

⁎ Corresponding author. Fax: +1 512 475 8000. E-mail addresses: [email protected] (B. Brakewood), [email protected] (R.A. Poldrack).

NeuroImage 82 (2013) 671–676. http://dx.doi.org/10.1016/j.neuroimage.2013.02.040

Article history: Accepted 14 February 2013; available online 4 March 2013.

Abstract

The sharing of data is essential to increasing the speed of scientific discovery and maximizing the value of public investment in scientific research. However, the sharing of human neuroimaging data poses unique ethical concerns. We outline how data sharing relates to the Belmont principles of respect-for-persons, justice, and beneficence. Whereas regulators of human subjects research often view data sharing solely in terms of potential risks to subjects, we argue that the principles of human subject research require an analysis of both risks and benefits, and that such an analysis suggests that researchers may have a positive duty to share data in order to maximize the contribution that individual participants have made.

© 2013 Elsevier Inc. All rights reserved.

Contents

Introduction
The 2011 ANPRM and the principles of human subject research
  The research/treatment divide and the responsibility of researchers
  Principle 1: Respect for persons
  Principle 2: Beneficence
  Principle 3: Justice
Applying the Belmont principles
  Application 1: Informed consent
    Potential solutions for evaluating consent for secondary analysis
  Application 2: Risk reduction
  Application 3: Subject selection and generalization
Data, biobanks, and genomics research
Data sharing as good ethical practice
Conflict of interest
References

Introduction

The sharing of data has become important across many different fields of science, and in some cases (such as molecular biology and genomics) has radically transformed the pace of scientific progress. It has long been hoped that the sharing of neuroimaging data could provide a similar benefit to the field of cognitive neuroscience (cf. Poline and Poldrack, 2012; Van Horn and Gazzaniga, 2012).

However, the sharing of human neuroimaging data poses potential ethical issues that are not encountered when sharing data from non-human samples. Researchers have unique responsibilities to ensure that subjects are protected, including by following the principles of respect for persons, justice, and beneficence. Researchers also have a responsibility to use data to produce “knowledge, products, and procedures to improve human health” (National Institutes of Health, 2003). The core argument of this article is that sharing data for secondary data analysis aligns with the Belmont principles that underlie the protection of human subjects. In this paper we will outline some of the potential ethical issues that can arise in the context of secondary analysis of human subject data.

We will provide some context for currently proposed changes to regulations that researchers must follow, then lay out some basic principles that underlie human subject protection, and discuss how each of these interacts with data sharing and secondary data analysis. To uphold the public trust, researchers have an obligation to share research data with other scientists; human subject ethics do not conflict with this obligation but rather provide added support for it. In this paper, we will focus on the sharing of data from nonclinical studies, which are those in which subjects are individuals who are not receiving treatment from any personnel associated with the research or from that institution. That is, these research subjects are not also patients of the researcher. Data collected in the context of treatment are subject to more restrictions than data collected solely for research, such as the U.S. Health Insurance Portability and Accountability Act (HIPAA) (42 USC § 1320d-2) or the U.S. Patient Safety and Quality Improvement Act of 2005 (Pub.L. 109–41, 119 Stat. 424–434), as well as FDA regulations (21 C.F.R. 50, 56, 312 and 812).

The 2011 ANPRM and the principles of human subject research

In 2011, the US Office of Human Research Protections (OHRP) released an Advance Notice of Proposed Rulemaking (ANPRM) outlining proposed changes in regulations on human subject research (OHRP, 2011a). The ANPRM was created to solicit opinions and information about potential changes in human subject regulations from the public at large, including IRB members, research administrators, and investigators. One of the areas of change is how the secondary use of biospecimens will be regulated; in general, the ANPRM treats the use of existing data and the use of existing biospecimens as one issue, implying that the same regulatory controls would apply to both. OHRP has described the reforms to biospecimen research as ones that “would require written consent for research use of biospecimens, even those that have been stripped of identifiers” (OHRP, 2011b). Because the ANPRM links the use of previously collected data and the use of previously collected biospecimens, this should be of concern for any researcher who uses shared data. The official comment period for the ANPRM has closed, but the next step for OHRP will be releasing a draft of the potential revised rules in the form of a Notice of Proposed Rulemaking, which will again seek comments from the community of researchers that will be affected. If and when this happens, it will provide another opportunity for researchers to ensure that a distinction is made between biospecimens and research data, and to ensure that OHRP considers the ethical arguments in favor of increased data sharing and a decreased administrative burden for doing so. This paper relies on a set of fundamental concepts underlying human subject research, which were laid out in the Ethical Principles and Guidelines for the Protection of Human Subjects of Research (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978), better known as the “Belmont Report”. While the Belmont Report was specifically developed and used in the US, the principles it espouses are important ethical guidelines that reach beyond national boundaries.
The Belmont Report outlined three fundamental ethical principles for human subject research: justice (the equitable distribution of the benefits and burdens of research), respect for persons (people should be able to make autonomous decisions, and people with limited autonomy should be protected), and beneficence (the obligation not only to “do no harm” but to actively maximize benefits and minimize harms to subjects) (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978). The tension between the doctrine of informed consent for participation (from the principle of respect for persons) and the need to minimize risk and increase the benefit of research (from the principle of beneficence), as well as the need for scientific validity and generalizability of results (from the principles of justice and beneficence), is at the heart of the changes requested in OHRP's 2011 ANPRM.

The principle of respect for persons, as applied to informed consent, would seem to require that subjects be asked to consent to the future use of their data and that, without specific information about what that future use will entail, providing truly “informed” consent is not possible. However, the principles of beneficence and justice may lean towards a different conclusion; if researchers do not use the data they are given to extract the most knowledge possible, they risk requiring individuals to bear the burdens of research without providing the maximum benefit. There is a natural tension between the three principles. Each should be explored and then balanced with the others when examining how they should be applied to a specific topic or question. After first discussing the concept of the research/treatment divide, we introduce each of these principles and discuss their application in the context of data sharing and secondary data analysis.

The research/treatment divide and the responsibility of researchers

The research/treatment divide is also known as the “therapeutic misperception” or “therapeutic misconception.” In a clinical setting, the subjects of research often have an established relationship with the institution or even the investigative team that is collecting the data. Without a clear delineation, it can be difficult for subjects to appreciate that the time and information they are providing to researchers will not benefit their specific care. Ethically, the subjects enrolled in clinical studies are more vulnerable to blurring the lines between what is research and what is treatment. These subjects may also have a hard time understanding that there is no or only very minimal benefit to participating in an imaging study (Kirschen et al., 2006). The concept of the research/treatment divide is more apparent in the collection of data from patients or in a clinical setting, but the same principles apply in the nonclinical setting. In a nonclinical setting, it can still be difficult for subjects to set aside the expectation of treatment (for example, expecting brain abnormalities to be detected as part of research imaging) (Kirschen et al., 2006). This divide can become important in understanding subjects' perceptions of what happens to the information they provide. Subjects may inaccurately believe that fMRI scans become part of their permanent medical record, available for physician review either at the time of participation or later. Subjects may also inappropriately believe that, because health care data are generally protected at a very high standard, the research data will be protected and shared only at that level. Studies should be designed to take the therapeutic misperception into account both at the time of initial data collection and at the time of secondary data analysis; researchers conducting secondary analyses should be confident that the research subjects allowed the type of analysis being conducted. While not something that is specifically outlined in the Belmont report, the concept of the research/treatment divide encompasses the special relationship that develops between even a non-medical or non-clinical researcher and the participant. When a person agrees to participate in a research study, he or she forms a “fiduciary relationship” with the researcher.
A fiduciary relationship means that there is an unequal balance in knowledge or training between two parties, requiring one party to put faith in the other to act in their best interest. In Grimes v. Kennedy Krieger Institute (2001), the highest state court of Maryland described this “special relationship” as existing between research subjects and researchers in a case where the researchers were not physicians and the research was expressly nontherapeutic. In that case, public health researchers measured the effectiveness of different levels of lead abatement in housing. The researchers collected children's blood samples to test for lead levels. Parents sued for a variety of causes, including that the researchers had become aware of a hazardous condition during the analysis of the samples and did not report that condition to the parents.

The court stated, “the very nature of nontherapeutic scientific research on human subjects can, and normally will, create a special relationship out of which duties arise.” While the court found various sources for these duties, what sets Grimes apart is that a court found that researchers had a positive duty of care for subjects and that violating the standard of care could result in legal liability as “research malpractice” (Jansson, 2003). This 2001 court case seems to imply that the courts believe that the research/treatment divide is not as clear to the average research subject as it appears when discussed in the Belmont report. The same issue arises in the related ethical question of how researchers should report incidental findings in fMRI studies, and conflicting opinions in the literature demonstrate the murkiness of this divide in the everyday reality of the research endeavor (Hadskis et al., 2008; Kirschen et al., 2006). While the existence and persistence of the therapeutic misconception are due in some part to characteristics of the participants, researchers can mitigate some of its effects through a well-designed recruitment and consent process, as well as through the consent forms themselves (Hadskis et al., 2008). For studies that may involve the secondary use of data sets, many have proposed that some of the issues of respect for persons may be addressed by allowing subjects to consent to specific future uses of their data (e.g., Master et al., 2012; McGuire et al., 2011; OHRP, 2011b) — for example, allowing use in basic cognitive research but not in research on mental diseases, or allowing use of their data in research on Alzheimer's and age-related conditions but not in research on addiction. This tactic of a tiered consent process should be carefully examined to determine how this type of specificity might increase the therapeutic misconception. For example, a subject who consents to use of his or her nonclinical data in a study on aging may mistakenly believe that the scans and data will be interpreted by someone who can warn them of future age-related illnesses. In general, researchers should be aware of the possibility of the therapeutic misconception and of their positive duty for the care of the participants enrolled in their studies. If the therapeutic misconception is not addressed at the time of the original data collection, these ethical issues will follow the data even through to deidentified secondary data analysis. This is similar to the legal idiom that disallows the use of information that is “fruit of the poisonous tree.”

Principle 1: Respect for persons

The Belmont report states that respect for persons is a principle that has two “ethical convictions”: people should be able to make autonomous decisions, and people with limited autonomy should be protected (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978). Others have argued that a broader application of the principle of respect for persons should also include recognizing the inherent dignity of people, recognizing that autonomy is only one aspect of the principle (Beach et al., 2005). Autonomous decision-making means that a subject needs to have the ability to think about his or her choice to participate or not, and the ability to actually act on that decision. Subjects must have sufficient information in a format they understand, and they must have the ability to choose.
If there is something that limits a person's autonomy (e.g., age or cognitive capacity), then he or she may need special protections. This article is concerned primarily with individuals who do not have limited autonomy, but the principles discussed here may also be applied when a guardian or surrogate decision maker is providing permission for someone to participate in research.

Principle 2: Beneficence

Beneficence, in the Belmont report, is a positive obligation not only to avoid injuring a participant (not limited to physical injury), but also to maximize benefits and minimize necessary harms (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978). In other arenas, some of these duties may be categorized as nonmaleficence, but the Belmont report includes them under the principle of beneficence.

In clinical practice, sometimes subjects must be harmed to receive a benefit — for example, when chemotherapy is necessary to combat a cancer. Similarly, research may require a harm to subjects, whether the “small” harms of boredom and frustration while spending time answering uninteresting questionnaires, greater harms such as the psychological distress of being asked to relive a traumatic experience as part of a study on methods to treat post-traumatic stress disorder, or the potential harm of being exposed to ionizing radiation in a PET study. It is the responsibility of an ethical investigator to ensure that subjects receive the maximum benefit from participation while those harms are minimized.

Principle 3: Justice

Justice, as a Belmont principle, refers to the concept of distributive justice — a balance of benefit and burden (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978). The risks borne by a research population should not result in data, technology, or advances from which they cannot benefit. The Belmont report authors give the example of 19th and 20th century patients on public wards being the subjects of medical research while the benefits of the resulting advances in medicine were enjoyed only by private patients. In imaging studies, the concept of justice is carried out most clearly in the selection of subjects for participation. Groups should not be selected simply because they are the easiest to procure as subjects, but because the selection of those groups is important to the question being studied.

Applying the Belmont principles

The Belmont report defines three ways to apply the core principles — obtaining informed consent, balancing risk and benefit, and selecting subjects appropriately. Each of these applications can, consistent with the three principles defined above (respect for persons, justice, and beneficence), support the sharing of imaging data for secondary analysis.

Application 1: Informed consent

In order for a subject to participate in a study, the researcher has an affirmative duty to ensure that the individual has the ability to voluntarily agree to participate (legal or mental capacity, in a situation free of coercion or undue influence), understands what the research entails (in terms of the purpose, procedures, risks, potential benefits, etc.), and actually indicates agreement to participate (consent). In an imaging study with direct subject contact, this application is fairly clear and well established. In a secondary data analysis study, it is more nuanced. It may not be possible to collect informed consent for participation in specific secondary research studies at the imaging session; those studies may not yet be developed or even conceived of at the time of data collection. Respect for persons requires that people's decisions be respected, including ensuring that individuals actually can make those decisions. If data are shared irrespective of a participant's ability to actually agree or disagree to that sharing, this ethical principle may be violated. This type of problem has recently been the focus of media coverage of genetics research done on the Havasupai tribe, in which researchers used blood samples collected for diabetes research for genetic research on schizophrenia and other questions; the tribe settled with the research institution for $700,000 and recovered the samples (Harmon, 2010).
In another situation related to genetic research, a large public outcry led to the State of Texas being forced to destroy blood samples collected from newborns for disease screening, after it was revealed that the database of information created from those samples was being used for research without the consent or even knowledge of the parents (Fikac, 2009). In both of these cases, data collected for an initial purpose were reanalyzed for another purpose without the consent of the individuals involved.

Both court cases relied on the fact that this secondary use was inappropriate because the researchers did not respect the wishes of the research participants. If we stop evaluating respect for persons and the need for informed consent at the claim that a person is incapable of providing consent for something unless they know in detail what might be done, however, we may do research subjects a disservice. In both the Havasupai and the Texas cases, the insult to the respect-for-persons principle was that research subjects were misinformed about the potential other uses of their samples. Truly respecting the autonomy of an individual means that we can respect a research subject's choice to allow or disallow later, currently unforeseeable analysis of their data. However, in contrast to a common modern interpretation of the principle (Lysaught, 2004), the idea of respect for persons is not simply respect for individual autonomy; it is also about treating people with dignity, and, for researchers, that means treating the information that subjects provide with dignity and respect (Beach et al., 2005). Sharing data ensures that each individual subject's data are viewed as a valuable resource to help advance knowledge in as many ways as possible. The failure to share data is akin to treating the data as a disposable commodity after using it in the original study. The principle of respect for persons, when seen as a combination of respect for autonomy and treating people with dignity, can actually encourage the sharing of data for multiple analyses.

Potential solutions for evaluating consent for secondary analysis

The current regulatory scheme in the United States already allows some studies to forgo full informed consent, including studies that are classified as “exempt” and those that meet the waiver requirements at paragraph (d) of the informed consent regulations (Protection of Human Subjects, 45 C.F.R. pt. 46, 2009). Those requirements are that the study involves no more than minimal risk, that the waiver or alteration of the elements of consent will not adversely affect the rights and welfare of the subjects, that the research could not practicably be carried out without the waiver or alteration, and that, whenever appropriate, the subjects will be provided with additional pertinent information after participation. While secondary data analysis studies may not require a full waiver of the informed consent process, these requirements provide a standard for evaluating the option of giving subjects the opportunity to consent to “future use” of their data without necessarily providing specific information about those future studies. The challenges in such a waiver depend largely on which IRB is considering the request. IRBs may differ in how they see the risks of secondary data analysis studies (i.e., whether the risk of a breach of confidentiality from participation is greater than that of everyday life for these subjects) and in whether a waiver or alteration of consent would adversely affect the rights of subjects (i.e., whether subjects have a right to determine how data derived from their participation should be used in the future). Strong data security and data release protections, and strict anonymization of shared data, can help reduce the risk of a breach of confidentiality. New technical solutions will need to be developed over time as imaging research acquires the ability to identify individuals from the images themselves.
Anonymization and controlled access to datasets need to be considered to ensure appropriate security. Providing subjects with some information about the potential uses of their data and allowing them to opt in to or out of all data sharing are some of the ways to address this issue. Separate from the ethical issues for human subjects, it may not be practical to restrict data by “type” of research, because all of the future topics and potential uses will not be known at the time the subject makes a choice. This tactic would also create bureaucratic and administrative difficulties in building a system that allows this fine level of control by researchers and participants; this type of burden may make sharing data impractical for investigators who already feel burdened by administrative tasks (Kehagi et al., 2011).

There are ongoing discussions regarding the circumstances under which unanticipated future analysis of data collected for a different research purpose should be permitted, and whether consent requirements should vary based on the likelihood of identifying a research subject, as in the ANPRM (OHRP, 2011b). Some IRBs may also differ on the appropriateness of providing information after participation. As mentioned previously, providing information about the results or initial findings of a study requires balancing the concept of respect for persons against the need to prevent the therapeutic misconception. The literature on the ethics of genomics research and biobanks reflects an ongoing and unsettled debate about the appropriateness of returning research results and information about incidental findings, with some commentators suggesting that the neuroimaging field's discussions on these topics may actually provide a good model for the genomics community (Wolf, 2012). There are also several options for what type of consent may be most appropriate for data stored with the intent of sharing them with a variety of researchers for a variety of future analyses. While this article argues that a “broad” consent does not conflict with the Belmont principle of respect for persons (as opposed to the more limited respect-for-autonomy principle), other authorities may reasonably disagree (Petrini, 2009), and there does not appear to be consensus regarding the appropriateness of broad consent (Master et al., 2012). Other forms of consent, such as delegating consent to a third party (e.g., a charitable trust, an IRB, or another mediator), using communal consent, or a tiered consent process, have been suggested for studies that utilize biobanks (Master et al., 2012). Some initial work has been done to examine participant attitudes towards, and the operationalization of, different types of consent in genomics research (McGuire et al., 2011). In that study, participants were randomized to a traditional consent (in which data sharing was required for participation), a binary consent (in which participants could opt out of data sharing but still continue participation), or a tiered consent (in which participants could choose to release information publicly, release it only to a restricted database, or not release it at all). Most participants supported public data release, with a significant portion preferring release only to a controlled-access database. In addition, participants' decisions about data sharing were significantly associated with the consent type, even after all participants were debriefed and allowed to alter their decision.

Application 2: Risk reduction

The risks the participant takes on are defined by both the probability of harm and the magnitude of that harm. A benefit, however, is not about probability, but is rather “something of positive value related to health or welfare” (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978). A study must produce a benefit, if not to the participant then to society at large (usually through a valuable contribution to a field of study), in order to be ethically conducted. The benefits and risks of a study must be balanced; beneficence requires not only that the study be designed to protect individual participants from risks, but also that it be scientifically sound and designed to produce a benefit for the field.
In gathering imaging data, minimizing the risks to the participants is handled through careful attention to safety procedures (including screening), protecting confidentiality (including through strong data security), and ensuring that all researchers have the appropriate training to conduct study activities. In secondary data analysis, the main risk to participants that must be minimized is that of a breach of confidentiality. The most important way to minimize this risk is ensuring that data are not identifiable — or at least have a low likelihood of being re-identified. This includes removing all identifying information prior to sharing the data. Table 1 lists the 18 unique identifiers defined in HIPAA; each of these should be removed from the data prior to sharing. It also includes defacing images with tools like MRI Defacer or Quickshear (Schimke and Hale, 2012). In some cases (e.g., family studies or studies of geographically limited populations), this would still not be sufficient to protect against re-identification, which would require other administrative remedies (i.e., releasing data under a strict data security plan) or mathematical methods (e.g., transforming the data using generalization and suppression).

Table 1
The 18 unique identifiers defined in HIPAA.
Name
Geographic subdivisions smaller than a state
All elements of dates (except year) for dates directly related to an individual, including birth date, admission date, discharge date, and date of death; and all ages over 89 and all elements of dates (including year) indicative of such age (except that such ages and elements may be aggregated into a single category of age 90 or older)
Telephone numbers
Fax numbers
Electronic mail addresses
Social security numbers
Medical record numbers
Health plan beneficiary numbers
Account numbers
Certificate/license numbers
Vehicle identifiers and serial numbers, including license plate numbers
Device identifiers and serial numbers
Web Universal Resource Locators (URLs)
Internet Protocol (IP) address numbers
Biometric identifiers, including finger and voice prints
Full face photographic images and any comparable images
Any other unique identifying number, characteristic, or code (excluding a random identifier code for the subject that is not related to or derived from any existing identifier)
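As a concrete illustration of the kind of identifier removal described above, the sketch below uses the pydicom library (not a tool discussed in this article) to blank a small set of header fields in a DICOM file and to aggregate ages over 89 into a single category, as Table 1 suggests. This is a minimal sketch under stated assumptions: the field list, file names, and helper functions are illustrative only and do not constitute a complete or validated de-identification pipeline, and defacing of the image data itself (e.g., with MRI Defacer or Quickshear) would still be needed.

```python
# Minimal illustration of scrubbing a few HIPAA-style identifiers from a DICOM
# header with pydicom. The tag list is an example, not an exhaustive standard,
# and real de-identification would also require defacing the image data itself.
import pydicom

IDENTIFYING_KEYWORDS = [
    "PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
    "PatientTelephoneNumbers", "ReferringPhysicianName", "InstitutionName",
]

def generalize_age(age_years: int) -> str:
    """Aggregate ages over 89 into a single '90 or older' category (see Table 1)."""
    return "90 or older" if age_years >= 90 else str(age_years)

def blank_identifiers(in_path: str, out_path: str) -> None:
    """Blank directly identifying header fields and drop private vendor tags."""
    ds = pydicom.dcmread(in_path)
    for keyword in IDENTIFYING_KEYWORDS:
        if keyword in ds:                      # only touch elements that exist
            ds.data_element(keyword).value = ""
    ds.remove_private_tags()                   # private tags often hide identifiers
    ds.save_as(out_path)

if __name__ == "__main__":
    # Hypothetical file names used purely for illustration.
    blank_identifiers("sub01_raw.dcm", "sub01_deid.dcm")
    print(generalize_age(93))  # -> "90 or older"
```

In practice, shared neuroimaging datasets are often distributed as defaced NIfTI files with curated metadata rather than raw DICOM, but the underlying principle is the same: every Table 1 identifier should be stripped or generalized before release.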

In secondary data analysis, the principle of beneficence is applied primarily through data security measures. Some of the greatest risks from secondary data analysis are risks to the privacy and confidentiality of the information. Beneficence requires that the person compiling the information verify that there are strong procedures to confirm that the data are used in studies that are scientifically sound and in ways that protect privacy and confidentiality. The principle of beneficence can also be read to create a duty to maximize benefit by using the data to examine multiple research questions; in fact, the Belmont report specifically states that beneficence is an obligation that includes “maximiz[ing] possible benefits” (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978). Under this definition, beneficence could be interpreted to require the sharing of data for secondary analysis in order to maximize the benefit of an individual subject's participation, but only if rigorous data security protections are in place to prevent breaches of confidentiality or violations of privacy and thus minimize possible harms.

Application 3: Subject selection and generalization

The final section of the Belmont report deals with the application of the three basic ethical principles to the selection of subjects. Selection of subjects ties most directly to the principle of justice — ensuring that individuals or groups are used fairly in research. This means that individuals are not excluded from research that might be beneficial to them and that they are not exposed to the risks of research simply because of their membership in a particular class that is not relevant to the scientific information sought. It also means that some groups of individuals should be excluded based on the “ability of members of that class to bear burdens and on the appropriateness of placing further burdens on already burdened persons” (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978). In primary imaging research, selection of subjects is relatively straightforward. Care should be taken that classes of subjects such as children or prisoners are exposed to the risks of research only in specific cases where those classes of persons will benefit from the findings of the research.

Individuals in other classes, like student subject pools, should be used with care, with the understanding that these selection criteria may affect the scientific validity of findings as well as the ethical considerations of justice. Here again, secondary data analysis can have a positive impact on this characteristic of research studies. Allowing researchers to pool information from subjects and populations to which they might not have ready access increases not only the generalizability of findings but also the likelihood that individual and social justice mandates are met. If researchers cannot use existing, de-identified data sources to evaluate hypotheses, they will necessarily be limited in their subject selection. Denying the use of existing data may actually reduce the ability to spread the benefit of knowledge by decreasing access for the larger population to whom it may be generalized. Further, it may require that more individuals be exposed to the (admittedly minimal) risks of MRI research procedures, thus decreasing the benefit and increasing the societal burden overall. Participants should be chosen fairly in order to assure that the findings from research conducted on one class of subjects will apply to other individuals in that class and that no class is inappropriately subject to the risks.

Data, biobanks, and genomics research

It should be noted that the ANPRM and other commentary from federal agencies primarily discuss secondary data analysis in the context of analysis of biological specimens collected for research or non-research purposes. This discussion has largely focused on genetic/genomic data, since biospecimens are often treated as though each sample contains the potential to provide such data. In this article we have used information from the genomics literature to discuss some of the options and attitudes towards informed consent and secondary data analysis. There are parallels in some of the common issues between the fields of genomics and imaging research. Genetic and neuroimaging information are also similar in that both provide probabilistic rather than deterministic information about individuals – for example, information about the probability that an individual will develop a disease based on genetic makeup, or the probability that an individual is lying based on a neuroimaging scan. In another example, some commentators have suggested that genomics researchers should consider imaging researchers' policies regarding the return of research results and incidental findings (Wolf, 2012). One of the major fears for subjects involved in genetic studies is that information about them may be misused (Clayton, 2003), and IRBs, regulators, and courts are concerned about dignitary and economic harms from the release of this information (Clayton, 2005). These concerns are not yet as strongly held by these communities with regard to neuroimaging data, but researchers are already considering a not-too-distant future in which neuroimaging data could provide a “fingerprint” for individuals (Poline and Poldrack, 2012). However, it is our belief that the application of ethical principles to secondary data analysis of nonclinical neuroimaging data is different from the use of clinically collected biospecimens, and that these two areas of research should not be combined without a thorough examination and explanation of the differences between them.
Many of the informational risks involved in the collection of biospecimens can be remedied more completely when collecting neuroimaging data than they can with genomic data. For example, genomic data may have familial implications, can have an impact on reproductive decision-making, and can have group and community implications beyond those of the individual subjects and their families (Rodriguez and McGuire, 2012). The fact that genetic data provide connections to a research subject's family members creates even greater informational risks, including a higher possibility of connecting a sample with an individual based on those familial connections (Gymrek et al., 2013). While both fields deal with possible privacy concerns about the reidentifiability of information, current techniques like removing some metadata and defacing images limit the possibility of individual identification (Schimke and Hale, 2012).

The fact that neuroimaging data do not provide familial information also limits the possibility of reidentification using methods that are available in genetics research. This ability to deidentify data may be challenged in the future, but currently, as Schimke and Hale (2012) put it, “the privacy issue is one that can be easily remedied.”

Data sharing as good ethical practice

Ultimately, the regulatory system in the US relies primarily on ethical behavior from principal investigators and rigorous analysis of ethical principles by institutional review boards. Principal investigators who choose to share data with other researchers have a responsibility to determine that the data will be used in accordance with the principles above. More importantly, these same principles may include a positive duty to actively seek out opportunities to share data in order to maximize the contribution that individual participants have made. The administrative and operational burdens of any regulatory changes must be considered in light of these two responsibilities of principal investigators and, by extension, the IRBs that review their work. The ethical principles of justice, beneficence, and respect for persons can form the foundation of an analysis of the ethics of any part of the human subject research enterprise. Understanding and evaluating these principles as they relate to secondary data analysis of nonclinical imaging data can provide a foundation for discussion and deliberation among researchers, administrators, and regulators concerned with protecting the rights of human participants involved in expanding the pool of knowledge about human brain function.

Conflict of interest

The authors declare no conflict of interest.

References

Beach, M.C., Sugarman, J., Johnson, R.L., Arbelaez, J.J., Duggan, P.S., Cooper, L.A., 2005. Do patients treated with dignity report higher satisfaction, adherence, and receipt of preventive care? Ann. Fam. Med. 3, 331–338. http://dx.doi.org/10.1370/afm.328.
Clayton, E.W., 2003. Ethical, legal, and social implications of genomic medicine. N. Engl. J. Med. 349, 562–569. http://dx.doi.org/10.1056/NEJMra012577.
Clayton, E.W., 2005. Informed consent and biobanks. J. Law Med. Ethics 33, 15–21. http://dx.doi.org/10.1111/j.1748-720X.2005.tb00206.x.
Fikac, P., 2009. State to destroy newborns' blood samples. Houston Chronicle, 22 December 2009, online at http://www.chron.com/news/houston-texas/article/State-to-destroynewborns-blood-samples-1599212.php (last accessed 27 July 2012).
Grimes v. Kennedy Krieger Institute, 782 A.2d 807, 2001.
Gymrek, M., McGuire, A.L., Golan, D., Halperin, E., Erlich, Y., 2013. Identifying personal genomes by surname inference. Science 339, 321–324. http://dx.doi.org/10.1126/science.1229566.

Hadskis, M., Kenny, N., Downie, J., Schmidt, M., D'Arcy, R., 2008. The therapeutic misconception: a threat to valid parental consent for pediatric neuroimaging research. Account. Res. 15, 133–151. http://dx.doi.org/10.1080/08989620801946917.
Harmon, A., 2010. Indian Tribe Wins Fight to Limit Research of Its DNA. New York Times, 21 April 2010, p. A1, online at http://www.nytimes.com/2010/04/22/us/22dna.html (last accessed 27 July 2012).
Health Insurance Portability and Accountability Act of 1996, 42 USC § 1320d-2, 1996.
Jansson, R., 2003. Researcher liability for negligence in human subject research: informed consent and researcher malpractice actions. Wash. Law Rev. 18, 229–263.
Kehagi, A.A., Tairyan, K., Frederico, C., Glover, G.H., Illes, J., 2011. More education, less administration: reflections of neuroimagers' attitudes to ethics through the qualitative looking glass. Sci. Eng. Ethics 1–14. http://dx.doi.org/10.1007/s11948-011-9282-2.
Kirschen, M.P., Jaworska, A., Illes, J., 2006. Subjects' expectations in neuroimaging research. J. Magn. Reson. Imaging 23, 205–209. http://dx.doi.org/10.1002/jmri.20499.
Lysaught, M.T., 2004. Respect: or, how respect for persons became respect for autonomy. J. Med. Philos. 29, 665–680. http://dx.doi.org/10.1080/03605310490883028.
Master, Z., Nelson, E., Murdoch, B., Caulfield, T., 2012. Biobanks, consent and claims of consensus. Nat. Methods 9, 885–888. http://dx.doi.org/10.1038/nmeth.2142.
McGuire, A., Oliver, J., Slashinski, Graves, J., Wang, T., Kelly, P.A., Fisher, W., Lau, C., Goss, J., Okcu, M., Treadwell-Deering, D., Goldman, A., Noebels, J., Hilsenbeck, S., 2011. To share or not to share: a randomized trial of consent for data sharing in genome research. Genet. Med. 13, 948–955. http://dx.doi.org/10.1097/GIM.0b013e3182227589.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978. Appendix: Protection of human subjects: Belmont Report — Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Federal Register 44:76. U.S. Department of Health, Education, and Welfare, Washington, D.C., pp. 23192–23197 (18 Apr 1979; retrieved from the Office of Human Research Protections website on January 15, 2012, http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html).
National Institutes of Health, 2003. Final NIH Statement on Sharing Research Data (NOT-OD-03-032). Retrieved from the NIH Office of Extramural Research website on July 3, 2012, http://grants.nih.gov/grants/guide/notice-files/NOT-OD-03-032.html.
Office of Human Research Protections, 2011a. Human subjects research protections: enhancing protections for research subjects and reducing burden, delay, and ambiguity for investigators; advance notice of proposed rulemaking. Fed. Regist. 76 (143), 44512 (26 July).
Office of Human Research Protections, 2011b. Regulatory changes in ANPRM: comparison of existing rules with some of the changes being considered. http://www.hhs.gov/ohrp/humansubjects/anprmchangetable.html.
Patient Safety and Quality Improvement Act of 2005, Pub.L. 109–41, 119 Stat. 424–434, 2005.
Petrini, C., 2009. “Broad” consent, exceptions to consent and the question of using samples for research purposes different from the initial collection purpose. Soc. Sci. Med. 70, 217–220. http://dx.doi.org/10.1016/j.socscimed.2009.10.004.
Poline, J.B., Poldrack, R.A., 2012. Frontiers in brain imaging methods grand challenge. Front. Neurosci. 6, 96. http://dx.doi.org/10.3389/fnins.2012.00096.
Protection of Human Subjects, 45 C.F.R. pt. 46, 2009.
Rodriguez, L.L., McGuire, A., 2012. Data sharing in genomic research: participant attitudes and ethical issues. Webinar presented by Public Responsibility in Medicine and Research on May 24, 2012. Archive will be available for purchase at http://www.primr.org/Conferences.aspx?id=13083.
Schimke, N., Hale, J., 2012. Neuroimage data sets: rethinking privacy policies. Proceedings of the 3rd USENIX Conference on Health Security and Privacy. USENIX Association, pp. 301–308 (August; PMid:23045386).
Van Horn, J.D., Gazzaniga, M.S., 2012. Why share data? Lessons learned from the fMRIDC. NeuroImage. Available online 13 November 2012. http://dx.doi.org/10.1016/j.neuroimage.2012.11.010.
Wolf, S., 2012. The past, present, and future of the debate over return of research results and incidental findings. Genet. Med. 14, 355–357. http://dx.doi.org/10.1038/gim.2012.26.