Proxies of Trustworthiness: A Novel Framework to Support the Performance of Trust in Human Health Research

Using five exemplar proxies of trustworthiness, we have suggested that a values-based approach is useful in understanding the operations of the HHR ecosystem. What remains elusive, however, is a means to evaluate and assess the worth of these proxies of trustworthiness, especially when it cannot be guaranteed that they will indeed elicit trust. Moreover, we must contemplate that with new developments and threats in the HHR ecosystem, new proxies of trustworthiness might be required. For example, the conduct of HHR during a public health emergency where time is of the essence might mean that “conventional” approaches are not fit-for-purpose; in such cases, researchers and other decision-makers need clear guidance on how to think through the ethical issues at stake and decide how best to proceed (Nuffield Council 2020) and how best to communicate with publics (Lowe et al. 2022). In what follows, a values-based framework is proposed to assist in these tasks.

Why is a Values-Based Framework Appropriate?

Frameworks can help us identify, methodically, issues at stake in a particular context (Xafis et al. 2019). They are practical tools to consider complex scenarios and—importantly—do not prescribe a particular “answer” but instead offer a flexible, pragmatic approach that promotes action (ter Meulen 2016). Examples of successful frameworks to date include Dawson’s work on public health ethics (Dawson 2010) and the SHAPES Working Group on Ethical Decision-Making in Big Data (Xafis et al. 2019).

But why do we need a values-based framework? It has been argued that proxies of trustworthiness are the operational tools used to perform trustworthiness. However, each of these tools is a proxy of trustworthiness precisely because there is no guarantee that the particular mechanism—be it consent, anonymization, public engagement, etc.—will necessarily elicit trust, i.e., that the performances of trustworthiness and trust will align. Yet the values underpinning any given proxy of trustworthiness remain constant, and these values are also likely to underpin performances of trust. Thus, in seeking to align trust and trustworthiness, a focus on aligning values holds considerable promise.

The identification of underlying values also answers the question—what does it mean to perform trustworthiness? In essence, to perform trustworthiness is to give effect to the values at stake in any given proxy (cf. Potter 2002 and Kelly 2018 who advocate a virtues-based approach to understanding and demonstrating trustworthiness). The performance of trustworthiness will be optimal when there is alignment of values as entrusted through a performance of trust. Suboptimal performances occur when there is misalignment of values, and the degree of misalignment is likely to be proportional to the degree of resulting mistrust.

Thus, we contend that what is required is a framework to guide actors to identify key values when attempting to perform their trustworthiness through existing proxies (or through new ones if the extant collection is found wanting). By making these values explicit, the aim is that those proxies, i.e., operational tools, are strengthened. Overall, this improves the chances of creating a trustworthy HHR ecosystem (Lowe et al. 2022). Moreover, values-based scrutiny can reveal both limitations in existing proxies of trustworthiness and the basis on which to establish and put into operation new ones when needed. An open, values-based framework supports attempts to deliver trustworthiness well.

A Proposal for a Values-Based Framework

Values are an expression of what matters morally in a particular community. Values are commonly shared among community members, or, if they are not, good reasons must be given to defend them as action-guiding norms. From the five proxies already discussed and looking more holistically at the HHR ecosystem, the following values can be identified as most relevant for the present discussion. These are grouped into substantive values, i.e., considerations that should be realized through the outcome of a decision, and procedural values, i.e., values that guide the decision-making process itself.

Substantive values: autonomy; respect for persons; privacy; harm minimization; respect for cultural diversity; promotion of valuable science; care; solidarity; benefit sharing.

Procedural values: fairness; accountability; transparency; integrity; humility; explicability; engagement; accessibility; affirmative access.

This is a non-exhaustive list. Decision-makers might reasonably make a case that other values are in play in each given context. Equally, it must be recognized that some values might create tensions, e.g., protecting privacy to a high level might mean that certain kinds of valuable science using patient data cannot be conducted. No framework for ethical decision-making can resolve all possible tensions. Rather, it is for stakeholders to make the case as to which value or values should be prioritized and to defend this robustly. Our framework provides a scaffold for decision-makers to approach this task, placing trust and trustworthiness at its centre.

Figure 2 sets out a proposal for a values-based framework that research actors and their institutions can use to address a pivotal question: are there reasons to believe that trustworthiness is under threat in your research environment? The reflexivity of its approach is designed to strengthen performances of trustworthiness in HHR. It also serves as a basis to evaluate any proxies of trustworthiness in play and to establish a basis for new ones, where needed.

Figure 2: The trustworthiness framework

Operationalization of the Values-Based Framework for Proxies of Trustworthiness

First, this framework should not be deployed only at single points, but rather throughout the HHR trajectory as illustrated by figure 1 above. At each juncture where we ask the questions set out in the framework, we may get different answers because context is crucial. The framework therefore operates as a feedback loop: a cycle of consideration, analysis, and improvement. As the framework is used more frequently, an understanding of how trustworthiness can be performed well across the HHR trajectory will be strengthened by considering extant proxies, and any further proxies that might need to be introduced. The aspiration is that this framework will support a learning, intelligent system in which proxies of trustworthiness are analysed for each HHR context and relative to any evidence about the values informing performances of trust (Laurie 2021).

Second, although the framework’s steps can be an individual exercise for research actors, it must be recognized that the HHR ecosystem comprises many actors with different roles. What one actor recognizes as a strong proxy of trustworthiness might be understood as weak by another. For example, a clinician-researcher might put a lot of faith in consent, while a data scientist-researcher might prefer anonymization because consent is seen as impracticable. Thus, a whole-system approach is required for the framework’s application. That is, all actors with an interest in performing their trustworthiness should apply the framework to their activities. Indeed, they might usefully ask: are my actions the weakest link in the ecosystem of proxies of trustworthiness? The expectation is that, through reflective equilibrium across the entire ecosystem, underperformances of proxies—or indeed their successes—are less likely to be missed. But with this comes an important caveat: if any actor or set of actors does not participate fully in this reflective exercise, there is a risk that the whole enterprise is undermined. Put otherwise, the trustworthiness of the entire system is in doubt.

The Initial Question in Three Parts

The question at the heart of this framework—are there reasons to believe that trustworthiness is under threat in your research environment?—prompts us to reflect critically on efforts to perform trustworthiness well. There are three parts to consider when answering this question.

The first part of the framework encourages actors to identify their performances of trustworthiness via their current reliance on one or more proxies of trustworthiness. It requires, in particular, an account of which values are being promoted (or ignored) via the proxies of trustworthiness. This will also reveal potential tensions. For example, is a desire to protect privacy unduly hindering sound science (or vice versa)? Importantly, the values that likely underpin any performances of trust should also be identified at this point. By these means, the full gamut of values in play will be revealed. From here, an assessment of the alignment of values as between performances of trust and trustworthiness can proceed.

The second part of the framework moves from audit to analysis. It prompts research actors to examine how far and how well the proxies of trustworthiness upon which they rely reflect the underlying values at stake, including values that might have come to prominence because of changes in the research ecosystem or wider society.

The framework suggests two prompts to kickstart analysis here. The first prompt asks research actors to consider if there are points on the HHR trajectory where performances of trustworthiness will be absent or in jeopardy. This urges actors to analyse trustworthiness across each part of the HHR trajectory, rather than to undertake analysis as an isolated event. For example, do completely unexpected research findings that formed no part of the consent process now require a re-consent and/or wider engagement with participants? If reconsent is not possible, how will trustworthiness continue to be performed well?

The second prompt asks actors to identify and explicitly reflect on the values underpinning extant proxies of trustworthiness. If there are reasons to question whether the proxies are working well, then the core values shoring up the research might also be open to question. For example, has a reliance on anonymization and privacy protection (at the expense of seeking consent to demonstrate respect for persons) led to mistrust? Similarly, has an early and narrow programme of public engagement led to a failure to respect cultural diversity in the downstream conduct of the research?

Manifestly, this analysis will also have an empirical element, especially regarding trust, i.e., is there tangible evidence that trust is present or under threat? Where evidence is lacking, empirical studies could garner evidence about levels of trust, and this should be done with a view to revealing what participants actually value through their continued participation. But even in the absence of evidence, this element of the framework promotes reflection on how well any particular proxy is operating across the entire research trajectory, e.g., has an informed consent given many years previously now run its course and/or been superseded by new considerations? Might a different proxy, such as anonymization, now better reflect underpinning core values at stake regarding, say, uses of participant data? If there has been a material change of circumstances, does more need to be done via the proxies of public engagement, openness, and accountability? For example, might a downstream participant engagement exercise help to test the values and tolerances in performances of trust relative to any proposed change in the research or deployment of new proxies?

The third part of the framework moves actors’ analyses into a consideration of risk. This may assist in the identification of red flags that indicate that performances of trustworthiness are in jeopardy. Thinking prospectively helps anticipate risks and identify mitigating action. But what might risk look like?

There are three risk-based options that we might consider: (i) a crisis in the HHR endeavour; (ii) a material change in the HHR circumstances; and (iii) evidence of a breach of trust in the endeavour.

An illustrative example that encompasses all three triggers is a French clinical trial of a neurological drug that involved healthy volunteers (Feldwisch-Drentrup 2017). The research was catastrophic and left one participant dead and five others with brain damage (The Guardian 2016). Trust was breached here in at least two respects. First, when participants agreed to take part in the trial, they signed a consent form which stated: “You will be informed about any new significant information that could affect your willingness to continue the trial” (Enserink 2016). They were not, however, made aware that another volunteer had become seriously ill. Second, the company and its collaborators refused to publish pre-trial data after the trial collapsed to protect “industrial property.” These are clear examples of how a proxy of trustworthiness (consent) was not followed through as the research endeavour proceeded and how another proxy of trustworthiness (openness) was not adequately performed relative to its values (respect, autonomy, and care). Both failings illustrate what a red flag for trustworthiness might encompass. An ongoing audit of the validity of the proxy of trustworthiness of consent could have averted a risk to trust that resulted in collapse of the trial. Equally, a clearer commitment to openness could have mitigated any further damage to trust and the reputations of the company and collaborators.

Trustworthiness: Is There a Cause for Concern?

After undertaking the analysis encouraged by these three steps, research actors will be in a stronger position to anticipate whether the proxies of trustworthiness under scrutiny are doing enough work to shore up the trustworthiness of their endeavours or whether any red flags have been raised. This is because the proxies in play and the underlying values that are being promoted (or not promoted) have now been identified and reflected upon critically. Once this analysis is undertaken, the framework suggests two courses of action.

Action 1: Continue with the HHR endeavour.

If working through the three steps leads the actor to conclude there is no concern regarding trustworthiness (as far as it is possible to assess), they are encouraged to continue with their endeavour, but nonetheless to revisit the cycle as the research protocol proceeds through the trajectory. This revisiting is important because risks to trust are ever-present and shift throughout the trajectory. A strong performance of trustworthiness at one point on the trajectory does not mean that trustworthiness is maintained throughout, and far less that other proxies deployed at other junctures will be assessed similarly.

Action 2: Re-evaluate the values and proxies and consider whether new proxies of trustworthiness are needed.

If the research actor concludes that there is valid concern about trustworthiness, they are urged to reconsider which values are in play and whether more could be done to strengthen existing proxies. As part of this exercise, actors should also consider whether new proxies of trustworthiness are needed to strengthen general and particular performances of trustworthiness. To return to the French example, early reflection could have led to better engagement and communication with participants about what had happened and what steps were needed to perform future trustworthiness well, including a change in direction of the research and more care paid to participants.

A further example comes from experiences of the COVID-19 pandemic. In the wake of the World Health Organization declaration of a pandemic on March 11, 2020, pharmaceutical companies mobilized internationally to bring vaccines to market. In response, regulators such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) instituted regulatory reforms to expedite scientific review, to institute rolling reviews of data in parallel with the approvals process, and to reduce evaluation timeframes (EMA 2022a). As a further consequence of these rapid regulatory responses, a shift occurred from pre-authorization scrutiny driven by safety and efficacy to post-authorization pharmacovigilance, i.e., following the data about vaccine use in the population. While this is scientifically sound, it nonetheless raises ethical questions and possible concerns for public trust. Were governments’ economic imperatives to get citizens back to work overriding previous ethical imperatives to fully test safety and efficacy? Can pharmacovigilance mechanisms adequately protect populations and ensure sufficient protection of privacy, given that detailed scrutiny of patients’ data will be required for such a system to be effective? The dilemma is this: does the value of the public interest in rapidly and effectively countering a pandemic carry sufficient weight to support these regulatory reforms when they might increase risks to individual citizens’ rights and interests? This is a global issue: consider, most recently, FDA proposals to accept testing of COVID boosters solely on mice, rather than on humans, as sufficient to bring them to market (Stein 2022). If such expedited review measures are not trusted, then no amount of new vaccines will make any difference because mistrust will simply drive vaccine hesitancy.

Our purpose here, however, is not to discuss vaccine hesitancy (Dubé et al. 2015) but rather proxies of trustworthiness. In this example, the public interest imperative has driven rapid regulatory change. Established proxies of trustworthiness such as consent, anonymization, and public engagement are far less relevant in this new context. Previously in Europe, in the light of numerous pharmaceutical safety concerns, a mechanism had been established to allow citizens to participate in safety reviews (Altavilla 2018); but the highly truncated timelines involving COVID-19 suggest this measure is now far less feasible and effective. And while both the FDA (FDA 2022) and the EMA (EMA 2022b) have striven to be fully transparent about their motives and actions, this is not the same as openness, which might require access to data for accessibility and explicability purposes, as noted above. Accountability remains to be seen. But, in this new climate, is there room for new proxies of trustworthiness to emerge? An important development in Europe was the COVID-19 EMA Pandemic Task Force, since superseded by the Emergency Task Force which is “… an advisory and support body that handles regulatory activities in preparation for and during a public-health emergency, such as a pandemic” (EMA 2022c). This body now has a legislative basis (Regulation EU 2022/123, Article 15). It is precisely such a body that could benefit from the framework advocated herein. Crucial to this are two features: first, its remit would need to be broadened beyond the scientific elements of a response to include the socio-ethical; and second, its composition would need to include members with bioethical expertise. Thus, while the question remains open as to which new proxies of trustworthiness might emerge in the new regulatory landscape, there is hope that, institutionally and structurally, the formal elements are in place to approach that question before these regulatory shifts result in a crisis of trust. Indeed, we might even consider the Task Force itself as an institutional proxy of trustworthiness.

In other contexts, other novel proxies of trustworthiness might be relevant. For example, it is well-documented that (most) publics are more trusting of public institutions than of commercial enterprises conducting research. Here, a proxy of trustworthiness that might address concerns is benefit sharing, if not of specific profits, then certainly of data and new knowledge (Haddow et al. 2007). This would reflect the underlying value of solidarity. Health and social inequalities are also reflected in many areas of HHR, either because populations are (inadvertently) exploited or because they are excluded from participation or from just benefits arising from research (Selden and Berdahl 2020). Here, a new proxy of trustworthiness we might call “affirmative access” could begin to address such injustices, underpinned by the value of justice itself (Cash-Gibson et al. 2021).

Continuing with the core value of justice, London has argued most recently and convincingly that a commitment to the common good in HHR requires that an even wider network of actors have moral responsibilities for the proper conduct of research, including pharmaceutical companies, philanthropical organizations, affected communities, and even editors of journals (London 2022). Such an expanded moral community would also benefit from the proxies of trustworthiness framework and would contribute extensively to its refinement, the articulation and evaluation of proxies of trustworthiness, and the overall trustworthiness of HHR.

The Role of Reflexivity in This Framework

Proxies of trustworthiness are uncertain beasts in the HHR ecosystem. This makes it inappropriate to speak of optimization of proxies of trustworthiness because this would set impossibly high standards and result in endless rounds of regulatory inefficiency. This must be avoided. Instead, the framework encourages processes of ongoing reflexivity in parallel with the conduct and review of research. The argument herein is fundamentally an ethical one: it embraces the fragility of trust in HHR and recognizes that attempts to perform trustworthiness are themselves built on sand. Reflexivity can be built into the system as part of ongoing training of researchers, regulators, ethics committees, funders, peer reviewers, etc. (Samuel et al. 2022).

The Strengths of This Framework

We suggest that a strength of this framework is that it exhibits a further fundamental value: humility. It does not purport to have all the answers to how trustworthiness can be performed well in HHR. Rather it provides a guide to assist research actors to have a maximal chance of performing their trustworthiness relative to performances of trust upon which the entire research edifice is built. The use of values rather than rules to assist in this task offers flexibility that complements the fickle and fluid nature of trust. It might be easy to say “carry out some public engagement to demonstrate openness as a proxy of trustworthiness and elicit the value of respect,” but this would be little more than an exercise in ethical paint-by-numbers. A more subtle, nuanced, and open approach is needed to reflect the complexities of the HHR ecosystem and the various sociocultural contexts where HHR takes place—where different values may come into prominence at different times (Nortjé et al. 2021). This, in turn, means that proxies of trustworthiness also shift, as do the priorities of their underpinning values. It is for this reason that the framework consists of a system of feedback loops, promoting continuous commitment of the HHR ecosystem and its actors to improve through aspiring to strengthen performances of trustworthiness. Those who use this framework therefore become stewards of trustworthiness in HHR by contributing to its success as a learning system.

The Limitations of This Framework

Some may harbour scepticism that any trustworthiness indicator can be manipulated or faked (Kramer 2009). But such reservations are precisely why this framework takes a values-based approach. It encourages reflection on how far values are aligned in performances of trustworthiness and trust. The imperative to seek ways to align values through performances of trust and trustworthiness makes faking trustworthiness much harder.

Concerns about fakery might raise a further criticism that this framework relies on research actors to assess their own performances of trustworthiness. Critics might suggest that a self-referential approach will inevitably involve bias. This concern can be addressed by the fact that the framework suggests each actor in the ecosystem should feed into, and return to, the framework throughout the course of their research to elucidate where more needs to be done to perform trustworthiness well. In other words, the framework itself requires a network of trust between actors at various stages of the research trajectory who can act as a check on each other. Further research will be needed to see how this works in practice, but the expectation is that actors’ assessments using the framework converge and coalesce in a fashion similar to the Delphi method, which has proven worth in promoting collective consensus on policy and practice issues (RAND 2023).

It might also be objected that a misalignment of values will lead automatically to a conclusion about a breakdown in trust or the presence of mistrust when this might not be so. But no such conclusion should be reached too quickly. A key proxy of trustworthiness—public engagement—could be deployed in such a scenario to test whether trust was indeed under threat. The framework prompts reflection and serves as a starting point to evaluate and assess proxies of trustworthiness. It does not replace the value of hard evidence about trust itself.

This brings us to a final possible criticism, viz., that the development of this framework has not taken an empirical approach. Certainly, it does not look at the “state” of trust in HHR by referencing polls or other indicators of public opinion. But this is deliberate because the changeable nature of trust limits the value of empirical studies (O’Higgins et al. 2018). Indeed, it is these shifts in trust that give this framework durability and legitimacy. Empirical evidence does, indeed, have a role within the framework—for example, to support assessment of whether proxies of trustworthiness are, or are not, working well (for now). Also, empirical evidence can “test” the operation of the framework and its underlying premise, viz., that the protection and promotion of core values can indeed serve to engender trust. But this is not the same as saying that empirical evidence is central to the operation of the framework itself, and it certainly does not follow that an assessment of trustworthiness at a given moment in time says anything about whether trust will be forthcoming in the future.
