When does the placebo effect have an impact on network meta-analysis results?

Introduction

Many definitions have been proposed for placebo, from ‘a medicine given more to please than to benefit’ in the Shorter Oxford English Dictionary of 1811, to ‘something that is intended to act through a psychological mechanism’,1 to more recent definitions, for example, ‘the effect of the simulation of treatment that occurs due to a participant’s belief or expectation that a treatment is effective’2 and ‘beneficial effects that are attributable to the brain–mind responses to the context in which a treatment is delivered rather than to the specific actions of the drug’.3 Different definitions reflect the time period in which they were proposed but also the scientific field within which placebo is studied. Research in neuroscience, psychology and medicine continues to investigate the mechanisms of placebo and its practical implications.4–7 In epidemiology, increased interest in placebo partly arises from concerns that large placebo effects may mask true clinical effects and bias results.2 Such concerns have led to a wave of research, alternative study designs8–13 and statistical methods,14–21 focused on assessing and controlling placebo effects.

Evidence synthesis techniques have also contributed to understanding placebo effects. In 1955, Henry Beecher collected 15 studies examining different diseases and found that 35% of all 1082 patients were satisfactorily relieved by a placebo.22 In his research article ‘The powerful placebo’, adopting in essence an evidence synthesis perspective, Beecher recognised placebo as a clinically important factor, making the 35% an often-cited figure in favour of the argument that placebo can be an important medical treatment. Almost half a century later, Hróbjartsson and Gøtzsche questioned the significance of the placebo effect, asking ‘Is the placebo powerless?’ in a research article in which they meta-analysed 114 randomised trials and found little evidence that placebos have powerful clinical effects.23 Since then, a plethora of pairwise meta-analyses, meta-regressions and network meta-analyses (NMA) have been conducted to investigate, among other questions, the debated rise of placebo response rates24–29 and the influence of patient characteristics and several study-specific factors on placebo responses,30 31 such as the probability of receiving placebo32–36 and the type of placebo.37–40

The classic paradigm has been that placebo-controlled randomised trials focus on estimating the treatment effect, that is, the relative effect of treatment compared with placebo. However, the magnitude of the placebo effect itself may also be of interest in some instances and has lately received attention.2 It is worth noting that placebo effects are not expected to be equally impactful across medical fields. Although Hróbjartsson and Gøtzsche concluded that placebo is, in general, ‘powerless’, they did find a significant effect of placebo relative to no treatment in studies with continuous subjective outcomes and in studies involving the treatment of pain.23 In this paper, we aim to shed light on when the placebo effect is likely to bias pairwise and NMA treatment effects, and to propose instruments from the evidence synthesis methodological toolkit that can be used to estimate placebo effects.

Definitions

Let us focus on figure 1 panel A Study 1 to introduce the definitions to be used throughout the paper. The three included treatments Placebo, Treatment A and No treatment are denoted as P, A and N, respectively. We define placebo response as the response that would be observed for each participant if assigned to placebo. Placebo response consists of both a possible placebo effect $\pi$ as well as other possible non-specific effects $f$. These non-specific effects include the natural course of the disease or other mechanisms that lead to improvement such as the Hawthorne effect, the effect of responding to being observed and assessed.2 41 Treatment response, on the other hand, is defined as the response that would be observed for each participant if assigned to treatment (here treatment A). It consists of three components: placebo effect, non-specific effects and the true relative treatment effect between A and P (in the remainder to be called treatment effect and denoted as $\delta_A$). Responses to treatments P, A and N for study i are denoted as $R_{iP}$, $R_{iA}$ and $R_{iN}$, respectively.

Figure 1

Schematic representation of placebo response and treatment response, decomposed into placebo effect, non-specific effects and treatment effect under different assumptions. $\pi$, placebo effect; $f$, non-specific effects; $\delta_A$, relative treatment effect between A and P; $\delta_B$, relative treatment effect between B and P.

In a two-arm placebo-controlled trial comparing treatment A with placebo, it is not possible to isolate the placebo effect from non-specific effects. What is often investigated is the placebo response, which, however, is a combined effect that includes the placebo effect and additional non-specific effects. To elucidate the placebo effect, one would need to subtract any non-specific effects from the observed placebo response. A no-treatment control arm serves this purpose (third arm in figure 1 panel A Study 1); the idea is that, due to randomisation, the non-specific effects will be the same across no-treatment control, placebo and active treatment, and thus the placebo effect can be estimated by comparing the observed responses in the placebo arm and the no-treatment control arm.2 42
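As a minimal sketch under the notation introduced above (non-specific effects $f_i$ equal across the arms of study i, and additivity, as in figure 1 panel A), the three randomised arms identify the placebo effect and the treatment effect through simple contrasts:

\begin{align*}
E[R_{iN}] &= f_i, \\
E[R_{iP}] &= f_i + \pi, \\
E[R_{iA}] &= f_i + \pi + \delta_A,
\end{align*}

so that $E[R_{iP} - R_{iN}] = \pi$ isolates the placebo effect and $E[R_{iA} - R_{iP}] = \delta_A$ isolates the treatment effect.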

Miller and Rosenstein note that progress in understanding and estimating the placebo effect has been hampered by a lack of conceptual clarity, some of which has been due to confusion of the placebo effect with the placebo response.15 42 Notably, the apparent discrepancy between the conclusions of Beecher on the one hand and Hróbjartsson and Gøtzsche on the other boils down to the definitions of placebo response and placebo effect.22 23 While Beecher measured the placebo response, Hróbjartsson and Gøtzsche used studies with a no-treatment control arm to measure the placebo effect, isolating it from other non-specific effects.

When should the placebo effect be of concern for evidence synthesis?

The example used in Definitions makes a number of assumptions. In this section, we elaborate on what is implicitly assumed about the placebo effect in meta-analysis and how departures from these assumptions affect the unbiased estimation of direct and indirect treatment effects. Figure 1 serves as a guide to the scenarios one may encounter in practice in systematic reviews of interventions, and table 1 gives the mathematical formulation of the respective models.

Table 1

Models for placebo response and treatment response, decomposed into placebo effect, non-specific effects and treatment effect under different assumptions. $\pi$: placebo effect; $f$: non-specific effects; $\delta$: treatment effect. Responses to treatments P, A and N for study i are denoted as $R_{iP}$, $R_{iA}$ and $R_{iN}$, respectively. The random errors $\epsilon_{iP}$, $\epsilon_{iA}$ and $\epsilon_{iN}$ are assumed to be normally distributed with expectation 0. Panel numbering refers to the panels of figure 1.

Placebo effects equal across and within studies and additivity holds

The first assumption we make in the example used in Definitions (figure 1 panel A Study 1) is that non-specific effects are equal across all treatment arms. This will be assumed to be true in the remainder of this paper. Second, it was assumed that placebo effects are equal across treatment arms within a study. Third, it was assumed that additivity holds, meaning that, in expectation, the response that would be observed for a treatment is equal to the response that would be observed for placebo plus the treatment effect. Equivalently, additivity means that the amounts of non-specific effects, placebo effect and treatment effect are independent and do not act synergistically or antagonistically. We differentiate between the additivity assumption and the assumption of equal placebo effects within and/or across studies.

The model for figure 1 panel A Study 1 is then given in table 1. The difference between treatment response and placebo response, $R_{iA} - R_{iP}$, provides an unbiased estimate of the treatment effect $\delta_A$, which is estimated from individual studies and pairwise meta-analyses.43 Having another study examining treatment B versus placebo (figure 1 panel A Study 2) leads to a fourth assumption, that placebo effects are equal across studies evaluating different treatments. In such a situation, it follows that estimates of both the direct treatment effects $\delta_A$ and $\delta_B$ as well as the indirect treatment effect $\delta_{AB} = \delta_A - \delta_B$ are unbiased.
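As a hedged reconstruction of the corresponding table 1 entries (our indexing; a common placebo effect $\pi$ across arms and studies):

\begin{align*}
\text{Study 1: } & R_{1P} = f_1 + \pi + \epsilon_{1P}, \qquad R_{1A} = f_1 + \pi + \delta_A + \epsilon_{1A}, \\
\text{Study 2: } & R_{2P} = f_2 + \pi + \epsilon_{2P}, \qquad R_{2B} = f_2 + \pi + \delta_B + \epsilon_{2B},
\end{align*}

so that $E[R_{1A} - R_{1P}] = \delta_A$, $E[R_{2B} - R_{2P}] = \delta_B$, and the indirect contrast $E[(R_{1A} - R_{1P}) - (R_{2B} - R_{2P})] = \delta_A - \delta_B = \delta_{AB}$.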

Placebo effects equal within studies, unequal across studies and additivity holds

In this situation, the placebo effect may differ across studies. For example, the placebo effect may be larger in a three-arm study including placebo and two active treatments than in a two-arm study, as participants know that they are more likely to receive an active treatment. Some studies have indeed found an association between treatment effect and the number of treatment arms in the study (ie, the probability of receiving placebo).32–36 Other study-specific factors, such as informed consent,44 participant–staff contact45 and type of placebo,46 47 may also differentiate placebo effects across studies.

However, such a differentiation is taken into account in random-effects NMA and does not per se bias pairwise and NMA treatment effects.43 Consider, for example, figure 1 panel E, which illustrates one treatment A versus placebo study and one treatment B versus placebo study. The indirect relative treatment effect between A and B would then be an unbiased estimate of the true relative effect $\delta_{AB}$, as the study-specific placebo effects $\pi_1$ and $\pi_2$ cancel out.
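A brief sketch of why the cancellation occurs (same notation as above, now with study-specific placebo effects $\pi_1$ and $\pi_2$): each placebo effect appears in both arms of its own study, so

\[
E[R_{1A} - R_{1P}] = (f_1 + \pi_1 + \delta_A) - (f_1 + \pi_1) = \delta_A, \qquad E[R_{2B} - R_{2P}] = \delta_B,
\]

and the indirect estimate of $\delta_{AB} = \delta_A - \delta_B$ is unaffected by how much $\pi_1$ and $\pi_2$ differ.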

Placebo effects unequal within and across studies and additivity holds

Not all study-specific characteristics affecting placebo effects would leave NMA treatment effects unbiased. Consider, for example, figure 1 panel F: placebo effects may differ both within and across studies, biasing the estimation of $\delta_A$ and $\delta_B$. This can be the result of unmasking, as patients may suspect that they are receiving the active treatment due to the occurrence of adverse events, altering their expectations and potentially biasing the estimation of the treatment effect.48 To mitigate this possibility, active controls that would cause the same adverse events as the treatments have been proposed, but have been deemed impractical in clinical trial settings.2 More generally, any compromise in the blinding of participants and/or assessors could lead to unmasking and consequently differentiate placebo effects within a study.

The model for figure 1 panel F, given in table 1, allows for different placebo effects across and within studies and implies that the treatment effect for study i is overestimated if $\pi_{iA} > \pi_{iP}$. Including biased study treatment effects in pairwise meta-analysis or NMA will lead to biased direct and indirect treatment effects. Depending on the weight such biased studies receive in the meta-analysis, the results may be invalid.
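To make the direction of the bias explicit, a sketch of the panel F model in our notation, with arm- and study-specific placebo effects $\pi_{iP}$ and $\pi_{iA}$:

\[
E[R_{iA} - R_{iP}] = (f_i + \pi_{iA} + \delta_A) - (f_i + \pi_{iP}) = \delta_A + (\pi_{iA} - \pi_{iP}),
\]

so the study-level estimate is biased upwards whenever $\pi_{iA} > \pi_{iP}$ (eg, after unmasking in the active arm) and downwards whenever $\pi_{iA} < \pi_{iP}$.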

Placebo effects unequal within studies, equal across studies and additivity holds

In figure 1 panel B, placebo effects are differentiated within studies but are equal across studies, meaning that $\pi_{iP} = \pi_P$ for the placebo arm of any study i. Similarly, placebo effects for other treatments are assumed to be equal across studies, $\pi_{iA} = \pi_A$ for any study i including treatment A. The model for figure 1 panel B is a special case of that of figure 1 panel F (table 1). In particular, the indirect treatment effect $\delta_{AB}$ is biased by $\pi_A - \pi_B$, and thus such a situation would also produce biased pairwise meta-analysis and NMA results. The situation depicted in figure 1 panel B is, however, unlikely to occur in practice.
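A sketch of how this bias term arises (our notation; arm-specific but study-constant placebo effects $\pi_P$, $\pi_A$ and $\pi_B$):

\begin{align*}
E[R_{iA} - R_{iP}] &= \delta_A + (\pi_A - \pi_P), \\
E[R_{jB} - R_{jP}] &= \delta_B + (\pi_B - \pi_P), \\
\text{indirect A versus B: } & \delta_{AB} + (\pi_A - \pi_B),
\end{align*}

so both the direct and the indirect estimates carry bias unless the arm-specific placebo effects happen to be equal.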

Violation of additivity assumption

The assumption of additivity made in figure 1 panels A, B, E and F has been a point of controversy in the literature2 as it may be unrealistic in several instances. Violation of the additivity assumption could happen if, for example, the placebo effect interacts with non-specific effects. In such a case, placebo could act either synergistically or antagonistically, for example, with natural healing of the body. However, such a violation would not always bias treatment effects. If the interaction between $\pi$ and $f$ is equal within and across studies (figure 1 panel C) or even unequal across but equal within studies (figure 1 panel G), similar arguments as before can be made to show that direct and indirect treatment effects would be unbiased. On the other hand, unequal interactions within studies (figure 1 panels D and H) would result in biased direct and indirect treatment effects, rendering pairwise meta-analysis and NMA inappropriate tools for estimation. As with figure 1 panels B and F, figure 1 panel D can be considered as a special case of figure 1 panel H.
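As an illustrative sketch (the interaction symbol $\gamma$ is ours, not necessarily that of table 1), the panel C/G model can be written as

\[
R_{iP} = f_i + \pi + \gamma_i + \epsilon_{iP}, \qquad R_{iA} = f_i + \pi + \gamma_i + \delta_A + \epsilon_{iA},
\]

where $\gamma_i$ captures the interaction between the placebo effect and non-specific effects in study i. Because $\gamma_i$ is common to both arms, it cancels in $R_{iA} - R_{iP}$, leaving $\delta_A$ unbiased; in panels D and H, arm-specific interactions $\gamma_{iP} \neq \gamma_{iA}$ no longer cancel and the bias $\gamma_{iA} - \gamma_{iP}$ remains.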

Estimating placebo effects

The inclusion of a ‘second, untreated’ (no-treatment) control arm was suggested by Ernst and Resch as a way of disentangling the placebo effect from non-specific effects in placebo-controlled trials.8 Such a no-treatment control arm serves as a control for placebo in the same way that placebo serves as a control for the active treatment. A series of concerns has been expressed regarding the inclusion of a no-treatment control arm, such as the unavoidable compromises in blinding, which may alter participants’ expectations about the level of benefit they can anticipate. Other study designs have been suggested to overcome such concerns, such as assuring participants that they are on a ‘waiting list’ to receive the active treatment. Alternative study designs include withholding5 49 or manipulating10 50 the information that participants receive about their chances of receiving treatment, rendering the estimation of placebo effects less prone to bias but also raising ethical concerns.51 52

Provided that a no-treatment control arm is included in a network of interventions, component NMA (CNMA) can be used to estimate the incremental placebo effect $\pi$ on top of treatment effects. Such a use of CNMA highlights the role of evidence synthesis and its methodological instruments in investigating the placebo effect, but it is possible only under certain network structures and assumptions. For a description of CNMA, interested readers can refer to the studies by Welton et al, Rücker et al and Tsokani et al.53–55

In situations like those in figure 1 panels A and E, CNMA can be used to estimate $\pi$. CNMA estimates for components can be interpreted as incremental treatment effects. Taking, for example, the odds ratio (OR) as effect measure, for a component C the component effect is an incremental OR (iOR), defined as the OR of treatment X+C versus X for any treatment X.56 57 If additivity does not hold, but placebo effects and interaction effects are assumed to be equal within studies (figure 1 panels C and G), $\pi$ can still be estimated using CNMA with interactions. In all other scenarios (figure 1 panels B, D, F or H), CNMA (with or without interactions) is not an appropriate instrument for estimating treatment and placebo effects.
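To make the mechanics of additive CNMA concrete, the following minimal sketch (not the authors' code; all numbers are hypothetical) estimates the incremental placebo effect from aggregated direct comparisons in a toy network with a no-treatment arm N, placebo P and one active treatment A. It uses the weighted least-squares formulation of the fixed-effect additive model and assumes independent comparisons and additivity of component effects.

```python
# Minimal sketch of a fixed-effect additive CNMA fitted by weighted least squares.
# Hypothetical data; assumes additivity and independent direct comparisons.
import numpy as np

# Hypothetical direct estimates on the log-odds-ratio scale and their variances
# for the comparisons P vs N, A vs N and A vs P (in that order).
y = np.array([0.20, 0.55, 0.35])   # hypothetical direct log-ORs
v = np.array([0.04, 0.05, 0.03])   # hypothetical variances

# Combination (design) matrix: columns are the components
# [incremental placebo effect pi, specific effect of A on top of placebo delta_A].
B = np.array([
    [1, 0],   # P vs N adds only the placebo component
    [1, 1],   # A vs N adds the placebo component plus the specific effect of A
    [0, 1],   # A vs P adds only the specific effect of A
])

W = np.diag(1.0 / v)                                   # inverse-variance weights
theta_hat = np.linalg.solve(B.T @ W @ B, B.T @ W @ y)  # (B'WB)^{-1} B'Wy
se = np.sqrt(np.diag(np.linalg.inv(B.T @ W @ B)))      # standard errors

pi_hat, delta_A_hat = theta_hat
print(f"incremental placebo effect pi (log-OR): {pi_hat:.3f} (SE {se[0]:.3f})")
print(f"specific effect of A vs placebo (log-OR): {delta_A_hat:.3f} (SE {se[1]:.3f})")
# exp(pi_hat) is the incremental OR (iOR) of adding the placebo component to no treatment.
```

In practice, CNMA is fitted with dedicated software that also handles multi-arm studies, correlated comparisons and random effects (see the CNMA references above); the sketch only illustrates how the combination matrix encodes the assumption that each treatment response adds a specific effect on top of the incremental placebo component.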

Conclusions

In this paper, we showed how different assumptions about placebo effects impact the validity of pairwise and NMA results. In summary, in the situations depicted in figure 1 panels A and E, pairwise meta-analysis and NMA would produce unbiased estimates of treatment effects. When a no-treatment arm is included in the network, CNMA could also be employed to produce unbiased estimates of placebo effects. CNMA with interactions can be used in the situations depicted in figure 1 panels C and G to estimate both treatment and placebo effects. In the remaining cases, pairwise meta-analysis, NMA and CNMA results would not be valid and evidence synthesis is precluded. In our example of psychotherapy studies in depression, we hypothesised that placebo effects are equal within and across studies. However, this might well not hold: in an open psychotherapy study, which is the typical design of psychotherapy studies, explanations of treatments, and consequently expectations, might differ between active and control treatments, or even between studies for the same control treatment. Thus, the situation depicted in figure 1 panel F might be a more realistic assumption for this specific example. If data on blinding and adverse events are available, sensitivity analysis could also give hints about potential differentiations of the placebo effect.

The placebo effect might also be intertwined with the tendency of patients to please the investigators by reporting improvements that have not occurred.59 In the original analysis by Michopoulos et al, the funnel plot comparing active psychotherapies with various control conditions was highly asymmetric (online supplemental appendix F in ref 39), showing that small studies were associated with larger treatment effects. A potential explanation is an association between small studies and compromised blinding of assessors, which in turn could lead to larger placebo effects in active treatments compared with control treatments in small studies. A further indication is the non-negligible meta-regression coefficient of 0.86 (95% CI −0.01 to 1.75) between blinding of assessors and the NMA OR.

In line with such a possible mechanism, Holper and Hengartner argued that the rise in placebo effects could be explained by small-study effects.40 Inclusion criteria and baseline risk, though, can also contribute to this phenomenon. As the debate over the rise of placebo24–29 has mostly been based on placebo responses, it would be interesting to investigate placebo effects over time using CNMA in networks of interventions that include a no-treatment control arm and a substantial number of studies, in order to examine temporal trends.

It might also be of interest to investigate the impact of potential bias due to imbalance in placebo effects on NMA treatment effects. To do so, one can use influence analysis, originally developed to quantify the influence of a direct treatment effect on NMA treatment effects.60 Using this instrument, the relationship between the magnitude of the within-study imbalance in placebo effects, $\pi_{iA} - \pi_{iP}$, and the NMA results can be shown. It is, however, restricted in that the imbalance of placebo effects can be investigated in only one direct comparison at a time. In online supplemental appendix 1, we give an example of the potential use of influence analysis for examining the potential impact of imbalances between placebo effects. For a more thorough analysis, a simulation study investigating several scenarios where deviations from the assumptions occur would be more informative.
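As a brief sketch of how such an imbalance propagates (standard fixed-effect NMA algebra, not taken from the cited appendix): network estimates are linear combinations of the direct estimates, $\hat{\delta}^{\text{NMA}} = H\,\hat{\delta}^{\text{dir}}$, where $H$ is the hat matrix of the network. If the direct A versus P estimate carries a bias $\omega = \pi_{iA} - \pi_{iP}$ while all other direct estimates are unbiased, the NMA estimate of any comparison $c$ is shifted by $h_{c,\,AP}\,\omega$, that is, by the bias scaled by the influence (hat-matrix weight) of the A versus P comparison on comparison $c$.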

In summary, factors that equally alter the placebo effect within a study might be of interest for estimating the placebo effect, while factors that alter the placebo effect within and across studies are important for properly estimating both placebo and treatment effects. By simultaneously investigating factors that may alter placebo effects across or within studies, NMA can shed light on their importance for producing unbiased estimates.
