Trade-Offs between Vaccine Effectiveness and Vaccine Safety: Personal versus Policy Decisions

2.1 Study Setting and Participants

Recruitment for the survey occurred in two countries (Indonesia and Vietnam) between April and May 2022 as part of a larger study. These two countries were chosen because, at the time of survey conceptualization, one (Vietnam: 10 average daily cases) had a low number of COVID-19 cases while the other (Indonesia: 5176 average daily cases) had a high number [24]. However, both countries experienced a spike in daily cases at the beginning of 2022, followed by a decline during the survey administration period (Vietnam: 18,931 average daily cases; Indonesia: 691 average daily cases) [24].

The survey instrument was first developed in English and then translated into Vietnamese and Indonesian. A web-enabled survey was administered by a market research vendor. Participants in each country were recruited to be representative in terms of age, sex, income, and geographic location. To be eligible for the study, panelists had to be at least 21 years of age and be able to read Vietnamese or Indonesian. All respondents provided informed consent. All activities were approved by the National University of Singapore Institutional Review Board (NUS-IRB-2021-401).

Based on Johnson and Orme’s suggestion [25, 26], the minimum required sample size was 250 per country. To ensure adequate representation across age and income groups, and to sufficiently capture the expected heterogeneity of the population, the survey was administered online to approximately 500 individuals in each country.
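Johnson and Orme's widely cited rule of thumb puts the minimum sample size at n ≥ 500c/(t·a), where c is the largest number of levels on any attribute, t the number of choice tasks per respondent, and a the number of alternatives per task. The exact parameter choices behind the authors' figure of 250 are not stated, so the values below are purely illustrative:

```python
import math

def orme_min_sample(levels_max, tasks, alternatives):
    """Johnson & Orme rule of thumb: n >= 500 * c / (t * a),
    where c is the largest number of levels on any attribute,
    t the number of choice tasks per respondent, and
    a the number of alternatives per task."""
    return math.ceil(500 * levels_max / (tasks * alternatives))

# Illustrative values: 4 levels max, 3 tasks per respondent,
# 3 alternatives per task (Vaccine A, Vaccine B, No Vaccine)
print(orme_min_sample(4, 3, 3))  # → 223
```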

2.2 Discrete Choice Experiment Survey Development

DCEs are a survey research method used to elicit individuals’ preferences for healthcare goods and services [27]. Individuals are asked to select their preferred alternative (e.g., vaccine) from two or more alternatives in a series of choice tasks. The alternatives are defined by a list of selected attributes (e.g., vaccine efficacy, vaccine safety, etc.) and vary from each other by the levels of these attributes.

The design of the experiment allows analysis of the relationship between at least one independent variable, i.e., the attributes that can be precisely manipulated, and the dependent variable, the choice behavior that can be precisely measured. The design of the study also allows for the control of further variables, such as information about the decision makers, the decision-making process, and assumptions about the decision context. This allows researchers to quantify how individuals trade off different levels of attributes and how important each attribute is relative to other attributes [28].

The three vaccine-related attributes used in this study were (1) effectiveness of the vaccine in reducing infection rate (50%, 70%, 90%, 99%); (2) effectiveness of the vaccine in reducing hospitalizations among those infected (50%, 70%, 90%, 99%); and (3) risk of death from vaccine-related serious adverse events (1, 10, 50, 200 out of 1 million). The effectiveness in reducing infection rate and safety attributes were selected as they were the most prevalent concerns observed in the extant literature and pretest interviews [2,3,4]. Effectiveness in reducing hospitalizations was included to reflect the evolving focus of governments from reducing the spread of infection to reducing the number of serious cases that required hospitalizations [29, 30]. The highest levels of effectiveness attributes were selected to account for the maximum potential vaccine effectiveness, while the range of death from serious adverse events was selected to encompass the observed real-world data [31,32,33]. The lowest levels for effectiveness attributes and the highest level for the vaccine-related serious adverse event attribute were determined based on findings from the pretest interviews, particularly the bidding game used to assess whether individuals were making trade-offs between the attributes.
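With three attributes of four levels each, the candidate design space contains 4 × 4 × 4 = 64 possible vaccine profiles; a fractional factorial design selects a small, statistically efficient subset of these. A minimal sketch of the enumeration (attribute names are paraphrased from the text):

```python
from itertools import product

infection_eff = ["50%", "70%", "90%", "99%"]   # reduction in infection rate
hospital_eff = ["50%", "70%", "90%", "99%"]    # reduction in hospitalizations
sae_death = ["1", "10", "50", "200"]           # deaths per 1 million vaccinated

# Full factorial: every combination of the attribute levels
profiles = list(product(infection_eff, hospital_eff, sae_death))
print(len(profiles))  # → 64 candidate vaccine profiles
```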

The survey instrument was first developed in English and then professionally translated into the respective languages by a translation company (see the electronic supplementary material [ESM] for the English survey instrument). The translated versions were reviewed by native-speaking team members to ensure accuracy and appropriateness. The survey instruments were pretested with 10 eligible participants from each country who were quota sampled based on age, sex, and income using convenience sampling. Native-speaking interviewers followed a ‘think-aloud protocol’ [34], where participants were encouraged to verbalize their thoughts while answering the questions. The aim of these interviews was to evaluate the understandability and appropriateness of the attributes and levels and whether participants could answer the questions as a policymaker. Based on the feedback from the pretest interviews, revisions were primarily made to improve translational and syntactical aspects in order to enhance the understandability of the survey in the relevant languages. In addition, overlapping questions related to participants’ experience with the COVID-19 pandemic were removed to shorten the survey instrument.

To design the choice tasks, we created an experimental design using optimal D-efficiency measures in SAS 9.4 software (SAS Institute, Inc., Cary, NC, USA) [35]. The fractional factorial design resulted in 12 choice tasks, which were then divided into four blocks of three tasks each. Participants were randomly assigned to one of the blocks. Before the DCE choice tasks, two attention-test questions were asked to determine whether individuals paid attention to the survey and if they sufficiently understood the attributes. The first attention-test question provided information on the effectiveness rates of two vaccines and asked participants to identify the vaccine with the higher effectiveness in reducing the infection rate. The second question presented two vaccines similar to the DCE tasks, but with one vaccine superior to the other across all attributes (i.e., dominant-pair test). Participants who answered these questions incorrectly were provided with an explanation to improve clarity and understanding.
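The blocking step described above can be sketched as follows. This shows only the structure (12 tasks split into 4 blocks of 3, one block randomly assigned per participant); the actual assignment of tasks to blocks was produced by the D-efficient design in SAS, not by the naive split used here:

```python
import random

tasks = list(range(1, 13))  # the 12 choice tasks from the fractional design

# Naive illustrative blocking: 4 blocks of 3 consecutive tasks
blocks = [tasks[i:i + 3] for i in range(0, 12, 3)]

random.seed(7)  # fixed seed so the sketch is reproducible
assigned_block = random.choice(blocks)  # one block per participant
print(assigned_block)
```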

Before the broader distribution of the survey, we conducted a soft launch with 50 participants in each country. This step allowed us to evaluate various aspects of the survey, including the functionality of the survey platform and participant response patterns. Specifically, we investigated whether the number of participants who failed the attention-test questions was reasonable and whether individuals’ choices were dominated by any single attribute (indicating a lack of trade-offs between attributes) or by an alternative (e.g., always choosing Vaccine A). We proceeded with the final launch as no concerns arose during the soft launch phase.

Participants were randomly assigned to one of two versions of the DCE. In the first version, individuals were asked to assume the role of a policymaker and were asked if they would approve the free distribution of one of two vaccine alternatives for use throughout the country (‘policy decision’ henceforth). They were also given the choice of a ‘Do not approve either vaccine’ option. In the second version, participants were presented with the same vaccine alternatives but were asked to assume that they had not yet been vaccinated (if they were already vaccinated) and were asked to choose a vaccine for themselves (‘personal decision’ henceforth). They were presented with the options of ‘I would get vaccinated with Vaccine A/B’ and an option of ‘I would not get vaccinated with either vaccine’. Henceforth, not choosing a vaccine in a choice task will be labeled as the ‘No Vaccine’ option. Figure 1 presents an example choice task.

Fig. 1 Sample DCE choice task. DCE discrete choice experiment, COVID-19 coronavirus disease 2019

2.3 Statistical Analysis

The attribute levels were effects coded. We also included an alternative specific constant (ASC) for choosing the ‘No Vaccine’ option, indicating the utility associated with choosing this option. Since there were two versions of the survey, interaction effects were created between the attribute levels and a dummy variable indicating the ‘personal decision’ version. We ran separate models for each country.
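Under effects coding, a four-level attribute is represented by three columns, with the omitted reference level coded −1 in every column so that the coded columns sum to zero across levels. A minimal sketch (the choice of reference level here is an assumption for illustration):

```python
def effects_code(level, levels):
    """Effects-code a categorical level into len(levels)-1 columns.
    The reference (last) level is coded -1 in every column, so each
    coded column sums to zero across the full set of levels."""
    if level == levels[-1]:
        return [-1] * (len(levels) - 1)
    return [1 if level == ref else 0 for ref in levels[:-1]]

levels = ["50%", "70%", "90%", "99%"]
print(effects_code("70%", levels))  # → [0, 1, 0]
print(effects_code("99%", levels))  # → [-1, -1, -1]
```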

To analyze the data, we used a mixed logit model, which allows for heterogeneous preferences. Initially, all attributes were assumed to be random and normally distributed. The attributes without significant standard deviations (SDs) in the random parameters were then assumed to be non-random. We used 1000 Halton draws for the final model estimations.
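Halton draws are low-discrepancy quasi-random points used to simulate the random coefficients in a mixed logit model; uniform Halton values are mapped to the assumed normal distribution via the inverse CDF. A self-contained sketch of the one-dimensional case (a full estimation would use one base per random coefficient and drop initial draws):

```python
from statistics import NormalDist

def halton(index, base=2):
    """Radical-inverse (van der Corput) value of `index` in `base`;
    successive indices give a one-dimensional Halton sequence in (0, 1)."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# 1000 uniform Halton draws mapped to N(0, 1) via the inverse CDF
draws = [NormalDist().inv_cdf(halton(i)) for i in range(1, 1001)]
```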

We also allowed for preference heterogeneity by creating interaction effects between preference weights and attitudes towards COVID-19 beliefs (believing that the COVID-19 pandemic was a hoax) and individual characteristics (education, living with vulnerable individuals [i.e., at least 65 years of age, with chronic health conditions or poor health], having worked in essential services, having close friends and/or family who died because of COVID-19). All interaction effects were initially included in the model; those that were not statistically significant were excluded from the final model.

We calculated the relative importance of vaccine attributes and the ‘No Vaccine’ option by taking the difference between the best and worst levels of an attribute and scaling this difference by the sum of all attribute/covariate differences [36]. These calculations were conducted at the individual level, utilizing individual-specific coefficients. For policy decisions, we computed the average of the individual-level relative attribute importance among respondents assigned to the policy version. For personal decisions, we calculated the average of the individual-level relative attribute importance among respondents assigned to this version. These calculations were carried out separately for each country. This method accounts for varying interaction effects within the models.
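The scaling described above amounts to dividing each attribute's part-worth range (best level minus worst level) by the sum of all such ranges. A minimal sketch using hypothetical coefficients for one respondent (the paper's calculation also includes the 'No Vaccine' constant in the denominator, omitted here for brevity):

```python
# Hypothetical effects-coded part-worths for one respondent;
# each list holds the coefficients for that attribute's four levels.
coefs = {
    "infection_effectiveness": [-0.8, -0.1, 0.3, 0.6],
    "hospitalization_effectiveness": [-0.6, 0.0, 0.2, 0.4],
    "sae_death_risk": [0.5, 0.2, -0.2, -0.5],
}

# Range (best minus worst level) per attribute, scaled by the total
ranges = {k: max(v) - min(v) for k, v in coefs.items()}
total = sum(ranges.values())
importance = {k: r / total for k, r in ranges.items()}
```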

We also calculated the probability of approving (policy decision) or choosing (personal decision) a vaccine compared with not choosing one. Similar to the process for calculating relative attribute importance, these probabilities were estimated at the individual level and were subsequently averaged across policy and personal decisions. Furthermore, we calculated how the probabilities of approving/choosing a vaccine changed in response to variations in vaccine attributes.
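The probability of choosing each alternative follows the standard logit formula, exponentiating each alternative's utility and normalizing. A sketch with hypothetical utilities (the third entry stands in for the 'No Vaccine' constant):

```python
import math

def choice_probs(utilities):
    """Multinomial logit choice probabilities from deterministic utilities."""
    exps = [math.exp(v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities for Vaccine A, Vaccine B, and 'No Vaccine'
probs = choice_probs([1.0, 0.5, -0.2])
p_any_vaccine = probs[0] + probs[1]  # probability of approving/choosing a vaccine
```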
