Understanding and Ameliorating Aversion to Pragmatic Healthcare Experiments
Randomized controlled trials (RCTs), also known as A/B tests, are the “gold standard” for evaluating drugs and are increasingly used to evaluate business products, government programs, global aid, and behavioral interventions in a wide variety of domains, including health. Pragmatic RCTs (pRCTs), in which interventions are tested in real-world settings, are increasingly common, as are novel designs such as cluster-randomized, cross-over, and adaptive trials.

Although pRCTs can raise important ethical issues, our previous research—20 preregistered experiments with 9,783 participants (Meyer et al., PNAS 2019; Heck et al., PNAS 2020) and an additional 13 preregistered experiments with 5,408 participants (in preparation)—suggests that both laypersons and clinicians sometimes judge a pRCT comparing two unobjectionable interventions (A and B), neither known to be superior, as less appropriate than simply implementing either A or B for everyone. This puzzling “A/B Effect” threatens efforts to improve health outcomes through the sort of pRCTs that the NBER Roybal Center for Behavior Change in Health (and many other NIH Institutes and Centers) supports.

We conceptualized, and were inspired to empirically investigate, the A/B Effect after several important pragmatic trials testing behavioral interventions or outcomes generated controversy among the media, members of the general public, ethicists and other academics, advocacy organizations, and government officials and lawmakers. Critical trials may go unconducted if health system stakeholders are averse to them, or anticipate that patients or employees will be. And health systems may be tight-lipped about the trials they do field, which undermines both dissemination and their obligation to be transparent with patients and other stakeholders.
Yet despite the impact of this work to date, critical questions remain about the causes and boundary conditions of experiment aversion, and about how to ameliorate it. In this study, we will first develop this evidence base in bioethics research. Using the survey-experiment methods we have developed, in which participants evaluate vignettes describing decisions either to implement policies or to conduct a pRCT comparing those policies, we will measure how much specific features of field experiments—(lack of) consent, notice, and ethics review, and traditional parallel versus novel pRCT designs—contribute to people’s approval or disapproval of them (Aim 1a). We will also experimentally test the effects of communicating about pRCTs in different ways: avoiding potentially aversive language, emphasizing the presence of equipoise, and making explicit the inferior alternatives to pRCTs, such as implementing untested policies (Aim 1b). In Aims 1a-b, we will compare the responses of laypeople (online workers), healthcare providers, and a nationally representative sample of the U.S. public. We will then build capacity in bioethics by developing a set of publicly accessible materials, customized to maximize transparency while reducing the A/B Effect to the greatest extent possible, for trialists and institutions to use when communicating about pragmatic healthcare experiments (Aim 2).
Supported by the National Institute on Aging grant #P30AG034532