Dissemination of Registered COVID-19 Clinical Trials (DIRECCT): a cross-sectional study

Summary of results

Examining all trials registered as completed during the first 18 months of the COVID-19 pandemic yielded a 32.8% cumulative probability of reporting at 12 months, meaning just over two-thirds of trials failed to meet the WHO’s non-pandemic standard for first dissemination of trial results. The median time from trial completion to results searches was 250 days (range 46–561 days). Despite a rise in the use and popularity of preprints, especially early in the pandemic, the most common dissemination route was publication in a journal article alone. Clinical trial registries were, by comparison, rarely used for rapid dissemination. The overall reporting rate was robust to a number of sensitivity analyses; however, trials marked as completed on a registry, in addition to having passed their listed completion date, had a notably higher reporting rate. Reporting was most rapid during the first 6 months of the pandemic compared with the two subsequent 6-month periods. Ivermectin showed notably different reporting patterns from the other top interventions (i.e., hydroxychloroquine, convalescent plasma, azithromycin, and stem cells).
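A cumulative probability of reporting of this kind is typically obtained with a Kaplan-Meier-style estimator, treating trials whose results had not been found by the search date as right-censored. The sketch below illustrates that calculation; the cohort data are purely illustrative and are not drawn from the DIRECCT dataset, and the exact estimator used in the study may differ.

```python
# Minimal sketch (illustrative, not the study's actual code or data):
# cumulative probability of reporting at a horizon, estimated as
# 1 - S(horizon) from a Kaplan-Meier product-limit survival estimate,
# with unreported trials right-censored at the results-search date.

def cumulative_reporting(times_events, horizon):
    """times_events: list of (days_from_completion, reported_bool) pairs.

    Returns 1 - S(horizon): the estimated cumulative probability that a
    trial has reported results within `horizon` days of completion."""
    survival = 1.0
    at_risk = len(times_events)
    for t, reported in sorted(times_events):
        if t > horizon:
            break
        if reported:
            # Reporting event: multiply in the conditional survival factor.
            survival *= (at_risk - 1) / at_risk
        # Both events and censorings leave the risk set.
        at_risk -= 1
    return 1.0 - survival

# Hypothetical cohort: (days to report, True) or (days to censoring, False).
cohort = [(90, True), (150, False), (200, True), (250, False),
          (300, False), (340, True), (400, False), (500, True)]
print(round(cumulative_reporting(cohort, 365), 3))  # → 0.514
```

Censoring is what distinguishes this from a raw reporting proportion: trials searched only a few months after completion have not had a full year in which to report, and simply dividing reported trials by all trials would understate the 12-month probability.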

Findings in context

This study builds on our interim findings and on studies tracking COVID-19 trials from other groups [12, 24,25,26]. Accelerated reporting of trial results persisted in this expanded population, with 13.5% of studies reporting within 100 days of completion. As would be expected, additional time to report increased overall trial results availability from 14% to 24%. While our preliminary findings showed a slight preference for preprints at the start of the pandemic, by the start of our searches on 15 August 2021, journal publications were the most common dissemination route. Still, the rise in the use of preprints remains substantial and notable, with 57% of reported trials in our population having a preprint available. However, the majority of preprints in our cohort had not been converted into journal articles (55%, 111/202). Other research has shown concordance in reporting characteristics among COVID-19 preprints that do convert to journal articles [27,28,29].

While the raw reporting rate of 24% is low, results dissemination of completed trials does appear to have been accelerated during the COVID-19 pandemic, compared both to prior pandemics and to standard practice. Jones and colleagues examined reporting of trials for Ebola, H1N1, and Zika virus, with time from completion ranging from ~18 to ~72 months [2]. Only Ebola saw a journal publication rate exceeding 20% within a year of completion; the journal reporting rate for COVID-19 exceeded 20% within 300 days, and for any dissemination route in under 200 days. The delayed reporting found by Jones and colleagues is consistent with other findings from the H1N1 pandemic [30, 31]. As in our COVID-19 analysis, dissemination on registries was substantially lower throughout these pandemics. Only five of 333 (1.5%) trials met the non-emergency WHO standard of having results on a registry within 12 months and in a journal within 24 months; while 32.8% of COVID-19 trials had disseminated results within 12 months, only 7.2% reported on the registry, even when restricting the population to the registries most likely to contain results. Given the low use of registries for rapid dissemination during the COVID-19 pandemic to date, compliance with this standard has not improved. In contrast with our findings, Jones and colleagues’ analysis did not find a noticeable change in overall reporting for trials in a completed status.

As in our interim findings, reporting of COVID-19 clinical trials appeared accelerated compared to standard practice. Other large studies examining the time to dissemination for clinical trials in non-pandemic settings show rates of dissemination within the first year far below the 32.8% seen in our findings [32, 33]. Even legally mandated reporting to ClinicalTrials.gov under US law results in just 41% of trials being reported within a year of primary completion [34]. These non-pandemic analyses, however, typically cover only journal articles and registry results; the rise of preprints may affect future analyses of time-to-publication should preprints continue to be used in non-COVID-19 contexts. However, even having one-quarter of trials published in journals at 1 year would represent an improvement over recently documented practice [35,36,37].

In our assessment of common interventions, trials containing arms assessing convalescent plasma, hydroxychloroquine, and azithromycin showed reporting patterns similar to trials examining all interventions outside the top five most common. Stem cells also followed the same general trend, though with slightly slower reporting. However, trials with an ivermectin treatment arm showed persistently more rapid reporting. This is notable given the serious concerns raised around both fraud and overall trial quality within ivermectin COVID-19 research [38, 39]. Also notable is the relatively low reporting rate of stem cell trials. Ivermectin and hydroxychloroquine, including its use in combination with azithromycin, were the focus of intense attention, debate, and controversy during the pandemic [40,41,42,43]. While receiving less attention, convalescent plasma also garnered serious consideration as a potential treatment, including an emergency approval from the US Food and Drug Administration, before it was shown to be largely ineffective [44, 45]. However, stem cells were never elevated to similar levels of public, political, and media attention despite high apparent interest from the research community. That this mismatch translated into the lowest, and slowest, reporting trends is a notable finding worthy of additional investigation.

Strengths and limitations

This analysis presents a thorough overview of dissemination of clinical trial results during the COVID-19 pandemic. We are not aware of any other analysis that comprehensively examines the link between registration and publication of COVID-19 clinical trials across all ICTRP primary and data provider registries. We made efforts to limit duplication in our dataset through extensive checks for cross-registrations. Given that 13% of the trials in our final sample had multiple registry entries, failure to take this step would likely have affected our conclusions. Our detailed documentation of these links between registrations and results across multiple dissemination routes could be a boon to future research examining COVID-19 clinical trials. As this is, to our knowledge, the largest comprehensive assessment of the reporting of COVID-19 clinical trials to date, our curated, open dataset can aid in making future metaresearch on the pandemic more efficient and complete.

We included all registered trials, not only randomized controlled trials, in this analysis as a reflection of the full scope of the COVID-19 research landscape. Other major COVID-landscape projects tended to focus on randomized trials, as they aimed to support evidence synthesis efforts [25, 46, 47]. Non-randomized studies, such as early research on hydroxychloroquine [48], were influential to the course of the pandemic despite their design limitations. A sensitivity analysis examining only late-phase, large, randomized studies yielded a reporting rate nearly identical to the overall rate (23% vs. 24%). These smaller, early-phase and non-randomized trials, though perhaps less influential for evidence synthesis and medical guidelines, represent the majority of our sample (64%, 1045/1643), and collectively enrolled thousands of participants at substantial overall cost; they thus carry the same moral imperative to share timely results and avoid research waste.

While we could only search roughly two-thirds of our sample in duplicate and could not conduct outreach to investigators, due to resource constraints, our comprehensive search strategy ensured all trials underwent a thorough process for results discovery. Registries, COVID-19-specific study databases, and numerous bibliographic databases were searched using both automated and manual methods. In our efforts to be as inclusive as possible, we included non-English-language results if we could reasonably translate or otherwise validate the connection to a given registration, though we recognize that the study team was not necessarily well positioned to locate results outside of their native languages and may have missed some results due to this limitation. Publications in non-English languages should still ideally include reference to the trial registration ID in the abstract and full text, which can help mitigate these discovery issues. Searchers were also encouraged to flag trials for adjudication and duplicate coding when faced with any doubts or questions.

This study aimed to examine the rapid dissemination of trial results under pandemic conditions, leading to a shorter time from completion to results searches than is typical for similar studies of trial non-publication. This approach allowed for feedback on pandemic trial reporting trends faster than typical retrospective analyses, which usually occur years later. However, some studies crucial to the pandemic response, but with very long follow-up times, such as adaptive trials and vaccine trials, were not included in our population, as they remain ongoing with only interim results potentially available. We hope future research will build on our dataset of COVID-19 registration and publication through expanded and updated searches, to further understand how dissemination practices may have influenced clinical decision-making during the entirety of the COVID-19 pandemic.

The main limitation of this work was that poor data quality on clinical trial registries may have influenced our findings. Given existing concerns about the reliability of trial information across multiple registries [49], we made efforts to use the more recent and complete data from across multiple registries when possible. We also attempted to examine the impact of data quality and found that using more recent data did not improve reporting statistics. However, registry entries with more accurate upkeep, in the form of proactive updates to the trial status, did show markedly increased dissemination: the overall reporting rate nearly doubled (24% vs. 42%) when trials were limited to those that had proactively updated their trial to a “completed” status, in addition to having met their completion date.

Poor registry data could impact this analysis in a number of ways. First, the status of trials may be incorrect, resulting in the misclassification of ongoing, completed, terminated, and withdrawn trials. Trials that terminate early with partial enrollment are still expected to update their registrations and indicate whether they were (1) withdrawn prior to enrolling participants, in which case no results could exist, or (2) terminated early after enrolling some participants. Terminated trials are still expected to report in some form, though reporting rates of these trials are known to be low [49,50,51,52]. Next, completion dates could be incorrect, leading to imprecision in reporting timelines and the potential for misclassification of “ongoing” studies. Our study showed such misspecification of completion dates on the registry: 71 trials, 18% of all results located, had results published on or before the registered completion date. Refreshing our completion date data 10 months later did not make an appreciable difference to the overall reporting rate or trends, suggesting that registry data quality does not improve with increased time from trial completion. Lastly, proper maintenance of registry records is likely a positive predictor of trial reporting, which could be investigated in future research. While each of these mechanisms may play a role, better data on which trials actually occurred and when they completed would lead to more precise estimates of publication bias. We hope our open data can provide a starting point to further examine the impact of registry data quality on the validity of analyses of publication bias.

Implications for policy and practice

Despite recommendations for accelerated reporting during public health emergencies, overall reporting remained low, with most trials failing to meet even the non-pandemic 12-month standard for results dissemination on a clinical trial registry. The slight increase in reporting compared to standard practice, especially early in the pandemic, should not obscure the fact that more than two-thirds of all pandemic-relevant trials did not publish results within 12 months of the study end date on the registry. This is despite the rise in preprints to aid faster dissemination [50], the availability of registries to rapidly host results [51], and efforts by many journals and publishers to fast-track review of COVID-19 research [52]. Whether this lack of reporting is due to publication bias, a high number of aborted studies, or poor registration data, it remains cause for concern.

Clinical trial registries cease to represent the current clinical trial landscape when they fail to present timely, accurate, and complete data. Evidence synthesis and research planning [9, 11] rely on registries to provide information on planned, ongoing, and completed trials. Neglecting registry data reduces the accuracy and efficiency of this work and threatens the quality of the resulting clinical guidelines and medical decision-making. COVID-19 was a unique global phenomenon and dominated the focus of new research. Unfortunately, it appears that in the rush to initiate new studies, many failed to start, ended early, or had difficulty with enrollment, and investigators simply abandoned their trials and registry entries [25]. As the high proportion of results discordant with registered completion dates shows, even when studies unambiguously did occur, registries could not necessarily be counted on as accurate reflections of reality.

Similarly disappointing is that registries remain substantially underutilized as a rapid dissemination platform. Registries like ClinicalTrials.gov and the EUCTR have standard reporting formats that allow for the publication of results in parallel to preprint and journal publication. While the results have to meet some quality standards, there is no peer review and no lead time for writing and formatting manuscripts, which should allow for more rapid dissemination. With new minimum standards for registry-hosted results under consultation at the ICTRP [53], registries will need to invest in encouraging and facilitating reporting, while researchers and their institutions should consider reporting to registries a routine aspect of results dissemination, especially during public health emergencies. Journal editors could also make registry maintenance and the posting of summary results a condition of publication, in much the same way they require prospective registration [54], and be more explicit that the publication of summary results on a registry does not count as prior publication, so as to encourage the use of registries as a complementary dissemination route.

While faster dissemination, via preprints or registries, does raise concerns about unvetted or low-quality results entering the public domain, it also allows high-impact results to be adopted into care more quickly [55]. Evidence has shown that COVID-19 preprints that convert to publications are typically concordant in their main characteristics [27, 28, 56], while those that remain unpublished tend to have more issues [29]. “Hot” topics like COVID-19 also likely draw more intense scrutiny during the pre-publication review process, leading to public discussion around controversial or low-quality preprinted results [57]. The quality of results posted to ClinicalTrials.gov has consistently been shown to be high when compared to journal publications for the same study [58,59,60,61].

Our results show that many COVID-19 studies remain unpublished or have unclear registry data that obscures their true status: stakeholders involved in clinical trials, including researchers, funders, registries, research institutions, ethics committees, and regulators, need to work together to facilitate timely publication and to ensure that registered data reflects a trial’s true status. Better coordination of emergency research among stakeholders can help to reduce the number of trials terminated early due to false starts or failure to recruit [62, 63]. However, given low reporting rates and high uncertainty about the status of unreported trials, evidence synthesis efforts around COVID-19 treatments should routinely check for publication bias and make additional efforts to confirm the status of registered trials with investigators.

Governments and international bodies like the WHO should refine their guidance and laws around when and where results should be published, especially in public health emergencies. This will provide clear criteria that stakeholders should aim to achieve and that can be tracked and audited. Individual registries and coordinating bodies like the ICTRP should improve standards and processes for routine follow-up with trial sponsors to ensure data is updated and results clearly posted on, or clearly linked to, the registry. These efforts will reduce confusion and burden for future research planning, evidence synthesis, and metaresearch efforts. Aiming to improve these standards now will aid in ensuring that the knowledge infrastructure around future public health emergencies is better managed.
