Implementation of a Quality Improvement and Clinical Decision Support Tool for Cancer Diagnosis in Primary Care: Process Evaluation


Introduction

Diagnosing cancer early can improve patient outcomes and quality of life [,]. However, in general practice, the timely detection of cancer can be challenging in the absence of strong diagnostic features, often resulting in prolonged diagnostic intervals [-]. In patients presenting to general practice with nonspecific symptoms, the use of routine blood tests can guide decision-making []. There is strong evidence supporting the diagnostic utility of abnormal blood tests (eg, iron-deficiency anemia and raised platelets) for multiple cancer types [-]. However, suboptimal follow-up and management of abnormal test results have been shown to contribute to delays in diagnosis [].

Inadequate follow-up of abnormal test results may occur in the case of diagnostic errors, but is also influenced by the general practitioners’ (GPs) experience and training; perceptions of cancer care and investigations; patient characteristics; and health system pressures [,]. For example, controversy and confusion about prostate-specific antigen (PSA) testing, coupled with changing guidelines and revised thresholds for what is abnormal, contribute to lower follow-up rates in men who have a raised PSA. Surprisingly, very few trials have examined modifying the practitioner- and practice-level barriers to following up abnormal results [].

The general practice electronic medical record (EMR) allows for the integration of novel technologies, where algorithms apply epidemiological data on the underlying risks of undiagnosed cancer based on symptoms and test results to monitor and identify patients who may benefit from further investigation []. Clinical decision support (CDS) systems assist in clinical decision-making, where such tools are linked to patient data to produce patient-specific recommendations or prompts for the GP to consider [,]. Similarly, auditing tools that use patient information from the EMR enable practice population-level management and review and have the potential to capture patients who are at risk of being lost to follow-up [,]. Evidence suggests that tools that highlight patients for review, referral, or further investigation based on evidence-based guidelines can improve patient care, but many of these tools designed to support diagnosis in general practice are met with low uptake and implementation difficulties [-].

Complex interventions are used to assess the effectiveness and utility of such tools in general practice. Yet implementing complex interventions can be distinctly difficult, as they involve multiple interrelated components and there are often multiple levels where change is required []. Process evaluation can aid in the understanding of the factors that influence how or why a complex intervention succeeds or fails. This study presents the results of a process evaluation of a pragmatic trial, Future Health Today (FHT). This complex intervention consisted of a novel CDS and auditing software, education, quality improvement (QI), and practice support. The pragmatic trial evaluated whether the intervention, which flagged patients with an abnormal blood test that may be indicative of undiagnosed cancer (FHT cancer module), increased the proportion of patients receiving guideline-based care. By gaining process information, we aim to better understand the implementation gaps, explore differences between the general practices involved, understand the interactions between intervention components, and provide context to understand the effectiveness of the intervention.


Methods

Intervention Description and Study Population

The FHT study was a pragmatic cluster-randomized controlled trial that evaluated the effectiveness of a QI intervention []. Pragmatic trials, by definition, are trials that evaluate an intervention in everyday practice, with the aim of measuring the effectiveness of the intervention in routine clinical practice rather than under ideal conditions [,]. The implementation of the FHT software and the trial components (including implementation strategies) were applied and adapted to real-world conditions to understand and evaluate how the tool would be used in routine general practice.

The components of the complex intervention included the FHT software, training and educational sessions, benchmarking reports, and practice support. The trial was conducted between October 2021 and September 2022. Practices were randomly allocated to participate in either the intervention (follow-up of patients with abnormal blood test results associated with the risk of undiagnosed cancer) or the active control (which had access to a different FHT module). As the aim of this process evaluation was to explore the factors critical to the implementation of the cancer module, our study population comprises the 21 intervention arm practices only; results for the active control intervention will be reported separately. The study protocol has been published on the Australia and New Zealand Clinical Trial Registry (ACTRN12620000993998) [].

FHT was integrated within the general practice EMR and consisted of a CDS tool, a web-based audit and feedback tool, and the capacity for general practices to monitor their QI activities []. Disease-specific modules were developed for use in FHT. The cancer module used patient information in the EMR (age, sex, and previous cancer diagnosis) and the results of abnormal tests associated with undiagnosed cancers. The FHT cancer module consisted of 3 central algorithms, designed to assist GPs by flagging patients with abnormal blood test results that are associated with an increased risk of undiagnosed cancer: markers of iron deficiency and anemia, raised PSA, and raised platelet count. The CDS component of the tool activates when the GP or general practice nurse (GPN) opens a patient’s medical record, displaying an on-screen prompt with guideline-concordant recommendations, such as the review of relevant symptoms or appropriate investigations (Figure 1). There is also a web-based portal, containing an auditing tool; a QI monitoring tool; and access to resources, guidelines, education, and training, which can be accessed on any computer with FHT installed. The algorithms run each night, extracting data from the practice management software database (eg, Best Practice or Medical Director), processing the data locally by applying the FHT algorithms (the data do not leave the practice), and categorizing the results. Examples of a CDS prompt and the audit tool are presented in . Further details on the development of the tool and the cancer module explored in this study have been described elsewhere [-].

Figure 1. An example of the clinical decision support tool as it appears in the medical record. Simulated patient data are used in this image.
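The paper does not publish the FHT algorithm logic, so the following is only a minimal sketch of the kind of nightly flagging rule described above. The record fields and the anemia cutoff are hypothetical; the platelet (>400×10⁹/L) and PSA (>3 ng/mL in men over 50) thresholds are the values reported later in the Results.

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    # Hypothetical field names; the real FHT algorithms read equivalent
    # data from the practice management software database each night.
    age: int
    sex: str                 # "M" or "F"
    platelets: float | None  # latest platelet count, x 10^9/L
    psa: float | None        # latest PSA, ng/mL
    ferritin: float | None   # illustrative stand-in for anemia markers
    known_cancer: bool

def flag_for_follow_up(p: PatientRecord) -> list[str]:
    """Return the abnormal-test flags for one patient (sketch only)."""
    flags: list[str] = []
    if p.known_cancer:
        return flags  # the module targets risk of *undiagnosed* cancer
    if p.platelets is not None and p.platelets > 400:
        flags.append("raised platelets")
    if p.sex == "M" and p.age > 50 and p.psa is not None and p.psa > 3.0:
        flags.append("raised PSA")
    if p.ferritin is not None and p.ferritin < 30:  # assumed cutoff
        flags.append("markers of anemia")
    return flags
```

In the deployed tool, the matching CDS prompt is then shown when the patient’s record is opened, and the same flags feed the practice-level lists in the audit tool.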

In the pragmatic trial, FHT was installed on general practice computers before study initiation. On the first day of the trial, practices were asked to create 3 cohorts of patients using the FHT auditing tool, one for each abnormal blood test (raised PSA, raised platelets, and markers of anemia). The cohorts included all patients identified by the FHT cancer module who had recommendations for guideline-based follow-up (as part of the trial, practices could then review the patient cohorts and determine if further follow-up was necessary). Cohorts were created again at the 6-month mark, using the audit tool, so that benchmarking information could be determined. After generating the cohorts, practices were invited to use FHT as they chose during the trial.
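As a sketch of this day 1 and month 6 cohort step (the three cohort names are from the text; the data shapes are assumptions):

```python
from collections import defaultdict

def build_cohorts(flagged: list[tuple[str, list[str]]]) -> dict[str, list[str]]:
    """Group (patient_id, flags) pairs into one cohort per abnormal test."""
    cohorts: dict[str, list[str]] = defaultdict(list)
    for patient_id, flags in flagged:
        for flag in flags:
            cohorts[flag].append(patient_id)
    return dict(cohorts)

# Three cohorts, one per abnormal blood test named in the trial.
example = [("patient-01", ["raised PSA"]),
           ("patient-02", ["raised platelets", "markers of anemia"])]
print(build_cohorts(example))
# {'raised PSA': ['patient-01'], 'raised platelets': ['patient-02'],
#  'markers of anemia': ['patient-02']}
```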

Implementation of the software was supported by a number of additional intervention components. This multifactorial implementation strategy was informed by the Reach, Effectiveness, Adoption, Implementation, and Maintenance framework, with strategies that were relevant and useful to general practice []. These components have previously been shown to increase reach; they are low intensity and high impact, with the purpose of limiting implementation workload while promoting continued engagement with the intervention [,]. Training on the use of FHT was offered regularly in the lead-up to and during the first month of the trial, and then monthly thereafter. Each practice was assigned a study coordinator, who conducted the Zoom-based training sessions on how to use FHT, assisted with any technological queries, and facilitated requests for support throughout the trial. Practices had access to short training videos on YouTube and a range of short- and long-form written training guides. In addition, 6 Project ECHO (Extension for Community Healthcare Outcomes) [] educational sessions were run on the topics of cancer diagnosis and QI, each consisting of a 10-minute didactic session and a 10-minute case discussion, followed by an open discussion of approximately 20‐30 minutes. The ECHO sessions were delivered as webinars, and general practice staff were invited to attend. Quarterly benchmarking reports were provided to practices to review their progress in the follow-up of patients who had been flagged by the tool and to compare their progress with other practices in the trial. All practices were required to nominate a practice champion to lead the implementation of FHT in their practice and to act as the primary point of contact with the study coordinator during the trial: managing the installation and technical queries, facilitating ongoing use of the tool, identifying staff for process evaluation interviews, and disseminating trial-related information to the practice. The role of the practice champion in this study was designed to mirror the pragmatic approach of the intervention (eg, they were asked to filter and disseminate information to the practice using an approach that best reflected their individual practice’s needs and current processes).

Ethical Considerations

The study was approved by the Faculty of Medicine, Dentistry and Health Sciences Human Ethics Sub-Committee at the University of Melbourne (ID:2056564). While practices consented on behalf of all practice staff to participate in the wider trial, additional written consent was obtained for all interviews. Interview participants were compensated A $100 (US $64.83) for their time. Practice champions also consented separately and were compensated A $200 (US $129.66) for their role as practice champions. All participant data were deidentified and kept anonymous.

Data Collection

Data were collected via qualitative interviews, usability surveys, technical queries, engagement logs, and educational session surveys. For the semistructured interviews, all practice champions were contacted via phone and email to participate in an interview in the first and last months of the trial. The practice champion was most commonly a practice manager (PM) or GPN, but GPs occasionally took on this role during the trial (eg, due to staff changes). The semistructured interviews were conducted over the phone. The interviews were conducted by study researchers (SC, NL, and BH; see the following section on researcher characteristics). The duration of the interviews ranged between 15 and 42 minutes. The interview guides were developed using the Clinical Performance Feedback Intervention Theory framework [] and were pilot-tested during earlier optimization work on the FHT cancer module []. The interviews explored installation, intervention delivery, implementation barriers and facilitators, goals, and usability (see for interview schedule). The interviews explored similar themes at each timepoint, although earlier interviews included questions around goals and intention, and the final interviews explored long-term implementation and sustainability. GPs and GPNs were also recruited for interviews in month 6 of the trial. These interviews have been reported separately [], as the purpose of the clinical interviews was to explore the acceptability of the clinical recommendations and impact on clinical practice, rather than explore the implementation of the wider intervention.

Usability surveys were sent to practice champions in months 1 and 12 of the trial, with the request to distribute them to the rest of the practice. The survey was delivered via the web using REDCap (Research Electronic Data Capture; Vanderbilt University) [] and included 30 questions (multiple-choice or free text). The survey was anonymous but captured general demographic information about the user and the general practice in which they work. The survey then explored use of and experience with the intervention (eg, length of time using the tool, which components had been used, and feedback and engagement with the intervention components). The survey also included the System Usability Scale (SUS), a 10-item instrument with 5-point Likert-type responses that quantifies the perceived usability of FHT []. The usability survey was developed by the study implementation team and is available in full in .
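The SUS is a standard published instrument, so its scoring (general background, not specific to this study) can be shown directly: ten items rated 1‐5 are converted to a single 0‐100 score.

```python
def sus_score(responses: list[int]) -> float:
    """Standard System Usability Scale scoring (Brooke, 1996).

    `responses` holds the ten item ratings in order, each 1-5.
    Odd-numbered items (positively worded) contribute (rating - 1);
    even-numbered items (negatively worded) contribute (5 - rating);
    the sum is scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings, each between 1 and 5")
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)
              for i, r in enumerate(responses))
    return raw * 2.5

# A uniformly positive response pattern scores 100; mixed patterns land
# in between (the trial's mean of 74 sits above the published benchmark of 68).
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # -> 100.0
```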

Postsession ECHO surveys were sent to all ECHO session participants via REDCap after each educational session and collected both demographic information and feedback on the specific learning outcomes of each webinar. The survey consisted of 23 multiple-choice or free-text questions. An example survey from one of the webinars is included in .

Information on the number of installations in each practice, the number of individual users, and recommendation queries (submitted through the technology by the practice) was collected using the FHT technology. Technical reports, including any technical queries raised by the practice throughout the trial, were recorded by the study coordinator. All engagements between the practice and the study team (study coordinator and technical team) were recorded by the study coordinator and categorized by content (eg, technical queries, training, and administrative items). Implementation diaries were kept by study coordinators to record contextual information (eg, changes in COVID-19 pandemic guidelines, immunization rollout, and general practice initiatives) throughout the trial.
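As an illustration of how such an engagement log can be structured (the categories paraphrase the examples in the text; the schema itself is an assumption, not the study’s actual instrument):

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Category(Enum):
    TECHNICAL = "technical query"
    TRAINING = "training"
    ADMIN = "administrative"

@dataclass
class EngagementLogEntry:
    practice_id: str
    logged_on: date
    category: Category
    note: str

log = [
    EngagementLogEntry("practice-07", date(2022, 3, 14), Category.TECHNICAL,
                       "FHT offline after practice software update"),
    EngagementLogEntry("practice-07", date(2022, 3, 21), Category.TRAINING,
                       "Zoom training session with new practice champion"),
]
# Tally engagements by content category, as described in the text.
counts = {c.value: sum(e.category is c for e in log) for c in Category}
print(counts)
```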

Researcher Characteristics

SC is a PhD candidate at the Department of General Practice and Primary Care, University of Melbourne. BH, a senior qualitative research fellow in the department, is the implementation lead for the FHT trial. NL is a postdoctoral research fellow who was the study coordinator for the active control arm of the trial. All are female and experienced in qualitative research and conducting semistructured interviews. Some interview participants were known to the interviewers, given the interviewers’ role in delivering the intervention and supporting implementation in practices throughout the trial.

Data Analysis

Recorded interviews were transcribed and imported into NVivo (version 12; Lumivero). Process evaluation data were analyzed by 2 researchers (SC and BH) before the trial effectiveness outcomes were known, so as not to bias the interpretation of the results. Each researcher independently conducted a structured, deductive content analysis of the interview transcripts to extract key themes from the data. The results of the content analysis were collated, and themes were presented to the research team. To promote trustworthiness, analytical codes and emerging concepts and categories were discussed at multiple points in the analysis. The coding team discussed positionality, including how established relationships, biases, and experiences may influence their relationship to the study data, and reflexive notes were kept [,]. The interpretation of the key findings, and any discrepancies in interpretation, were discussed with the wider team. The results of the evaluation were then mapped onto the UK Medical Research Council (MRC) framework [,].

While several frameworks are available to explore and evaluate the implementation of an intervention, the MRC framework was chosen as it is designed for evaluating complex interventions. It has previously been shown to be useful in evaluating the delivery of new technologies in complex environments and in instances of a multifaceted implementation approach [,]. The framework includes overarching themes of context, implementation, and mechanisms of impact and provides a mechanism for understanding implementation successes and failures (Figure 2) [,]. In the figure, the data sources from the trial are mapped onto the 4 process evaluation components outlined by the MRC framework (implementation, context, mechanisms of impact, and outcomes). The figure outlines the core components and questions underpinning each theme and the process data used to answer these questions.

Figure 2. How the process evaluation data are mapped onto the Medical Research Council framework. ECHO: Extension for Community Healthcare Outcomes; FHT: Future Health Today; GP: general practitioner; GPN: general practice nurse.
Results

Overview

A total of 21 practices participated in the process evaluation. Characteristics of the participating practices are described in Table 1. Characteristics of the interview participants are outlined in Table 2. Participation in other components of the process evaluation (usability survey and ECHO surveys) and additional general practice details are outlined in . In summary, 25 interviews were conducted with 19 practice champions in the first and last months of the trial. A total of 12 usability surveys and 13 post-ECHO session surveys were completed. Usability survey responses came from a mix of PMs (n=4), GPNs (n=4), GPs (n=3), and a receptionist (n=1).

Table 1. General practice characteristics.

Practice characteristics                               Practices (n=21), n (%)
State
    Victoria                                           20 (95)
    Tasmania                                           1 (5)
Relative Socioeconomic Disadvantage Index (terciles)
    1 (most disadvantaged)                             6 (29)
    2                                                  6 (29)
    3 (least disadvantaged)                            9 (42)
Previously participated in QI program                  9 (43)
Practice size
    4 or fewer FTE GPs                                 12 (57)
    Greater than 4 FTE GPs                             9 (43)
Rurality
    Metro                                              15 (71)
    Rural                                              6 (29)

QI: quality improvement; FTE: full-time equivalent; GP: general practitioner.

Table 2. Interview participants by timepoint.

                                  Month 1      Month 12
Role, n (%)
    GP                            1 (7)        1 (9)
    GPN                           2 (14)       4 (36)
    PM                            11 (79)      5 (46)
    Admin                         0 (0)        1 (9)
Gender, n (%)
    Women                         13 (93)      11 (100)
    Men                           1 (7)        0 (0)
Rurality, n (%)
    Metro                         11 (79)      5 (45)
    Rural                         3 (21)       6 (55)
Number of interviewees, n         14           11
Number of practices, n            13           9

GP: general practitioner; GPN: general practice nurse; PM: practice manager.

Results have been mapped onto the 3 themes of implementation, context, and mechanisms of impact.

Trial Results Summary

The results of the cluster randomized controlled trial did not demonstrate a significant improvement in follow-up in the intervention arm []. At 12 months, 76.2% (2820/3709) of patients with abnormal test results in the intervention arm had been followed up, compared with 70% in the control arm, with an estimated difference of 2.6% (95% CI −2.8% to 7.9%). No significant differences were identified in the secondary analyses or in the time to follow-up of abnormal tests for patients flagged by the tool. The following results of the process evaluation provide some context for the null outcome of the trial and suggest areas for improvement in the development and implementation of CDS and audit software for cancer diagnosis in general practice.

Implementation

There were 3 core themes on implementation: intervention delivery, installation, and general practice characteristics, each underpinned by different evaluation data sources. Intervention delivery was supported by data from engagement logs and educational session surveys, installation and general practice characteristics were supported by data from technical reports, and all 3 drew from qualitative interview data.

Intervention Delivery

The intervention consisted of multiple components: the FHT software components (CDS, an auditing tool, and QI monitoring) and the supporting trial components (educational ECHO sessions, Zoom-based training sessions, benchmarking reports, and other web-based learning components that practices could opt in to use). The uptake of the supporting elements of the trial was generally low, except for the initial formal training sessions. GPs, GPNs, and PMs from all intervention practices were invited to the Project ECHO sessions, yet attendance ranged from 2 to 9 people per session, a mix of GPs and GPNs. Three key barriers were identified as driving the low uptake of these trial components. First, the supporting components of the intervention were promoted via phone calls, newsletters, and regular emails to the practice champion, so knowledge of each session may not have reached the whole practice, depending on how the practice champion chose to distribute this information (eg, via internal email systems). The second barrier was the time and resource cost associated with each component. For example, attendance at training sessions and ECHO sessions (1 hour each), during or after work hours, was not feasible for many clinical staff. The final barrier relates to recognized need and usability: many practices reported that they could use the CDS tool and the cancer recommendations adequately without additional education or training.

It’s quite straightforward and quite well explained so it didn’t need anything extra particularly.
[GP, female, month 1]

Installation

The installation of the software was completed in the month prior to study initiation, with practices having access to a “practice” module on diabetes in the 2 weeks prior to study initiation so any technical issues could be addressed. The installation, which was done remotely and without much interruption to the practice, was reported to be a smooth process for most. For those who required additional assistance, the use of a study coordinator and technical support ensured PMs felt well-supported during this process.

I think what really has gone well is how it seamlessly was implemented. There was no - there’s no interruption.
[PM, female, month 1]

Due to the pragmatic approach of the trial, practices determined how many workstations in their practice would have FHT installed at the start of the trial. A total of 14 practices had FHT installed on all clinical computers. Five practices had FHT installed on only one computer at trial initiation, and of these, 4 decided to add FHT to additional computers later in the trial. Implementation logs and technical reports indicate that 3 practices were offline for a short period (range 2‐6 weeks), although this does not appear to have had a significant impact on the use of the system.

General Practice Characteristics

There was a large variation in the number of patients identified for follow-up across practices. Three inner-city practices, which had a younger and transient patient population, reported that the cancer module may not be useful in their clinic, given the low number of patients flagged by FHT. For example, in one practice, only 14 patients were flagged for follow-up during the entire 12-month trial period. While these practices acknowledged that the FHT cancer module was less useful for them, it did not deter them from continuing to use the tool after the trial, where they would have access to additional FHT modules (see Software Usability section).

Actually, it is cancer topic I don’t think that it is very suitable for our clinic because our clinic – the majority of our patients are international students, and they are very young.
[PM, female, month 12]

Context

In exploring context, there were 2 prominent themes: the COVID-19 pandemic and staff turnover. Both themes were underpinned by engagement logs, implementation diaries, and qualitative interviews.

COVID-19

The FHT trial was conducted during the COVID-19 pandemic. In Victoria, restrictions were placed on how and when people could leave their homes, with Melbourne experiencing lockdowns for a total of 262 days during the pandemic. There was a major shift in usual care, and many consultations were conducted via telehealth. During 2020, there was an 8% reduction in cancer-related diagnostic tests nationally, with greater reductions seen in Victoria []. The trial continued during a nationwide COVID-19 immunization rollout in primary care, and the burden on general practice was high. Throughout the trial, practices reported that they could not devote as much time as they would have liked to FHT, or attend the ECHO sessions, partly because of competing COVID-19-related webinars.

It’s been a time of change, a lot of updates, a lot of new technology with telehealth. Yeah, there’s been a lot going on because of COVID.
[PM, male, month 12]

Staff Turnover

Staff turnover, partly a consequence of the pandemic, was a common theme throughout the trial, and the resultant loss of information and increased resource pressure featured heavily in the month 12 interviews. A total of 9 practice champions left their practice during the trial, with 2 practices ending the trial with no replacement. Many interviewees talked about the magnitude of staff turnover during the pandemic and how it was a barrier to using the tool and to maintaining momentum in the study.

We lost two staff, and two doctors at the end of last year. Now we’ve got two doctors that we’re training again. We started off from scratch again.
[GPN, female, month 12]

Mechanisms of Impact

We found 4 mechanisms associated with the delivery of the intervention: adoption and integration, training and support, software usability, and clinical recommendations. The sources of data varied within each theme. Technical reports, usability surveys, and interviews supported adoption and integration. Training and support were underpinned by engagement logs, education session surveys, and interviews. Software usability was supported by the usability survey, interviews, and engagement logs. The final theme of clinical recommendations was elucidated from technical reports (in particular, recommendation queries), which were further explored in the educational sessions and qualitative interviews.

Adoption and Integration

The majority of practices reported that they did not use the QI and audit and recall components of the tool, only the CDS, which was delivered at the point of care. The CDS was considered easy to use and quick to learn, and was therefore easily integrated into the clinical workflow because it matched the resources available in a busy general practice. The audit, recall, and QI components of the tool, however, encountered a number of barriers. First, in contrast to the CDS tool, where recommendations are actively delivered to the GP, the audit and recall tool requires the user to visit a web page and log in to access this part of the tool. Second, there were additional layers of complexity and multiple steps involved in identifying, reviewing, and recalling patients identified in the audit tool.

Training and Support

The level of engagement between the study coordinator and most participating practices was high, and the support provided by the research and technical team facilitated the continued involvement of practices in the study. No practices in the intervention arm withdrew during the study period.

The co-operation between the teams and myself was amazing. There were no issues whatsoever and they were always there to help … it was really good.
[PM, female, month 12]

Practice staff who attended training sessions or used the web-based resources found the training adequate for using the tool, and practice champions reported that they would be comfortable training other members of the practice who could not attend. However, in most interviews, especially with GPs who did not attend the training sessions, it became evident that components of training on how to use FHT did not reach the entire practice. For example, many GPs were unaware of the patient deferral button (which allows GPs to pause recommendations for a patient for a specified period of time) or of the patient resources available. The post-ECHO session surveys highlighted that the education and case discussion components of the ECHO sessions were useful to GPs and GPNs in managing more complex patient scenarios but did not influence the way in which the tool was used.

Software Usability

Of the 12 usability survey respondents, 11 (92%) would recommend FHT to others. As part of the usability survey, respondents completed the SUS []. The results align with the separate qualitative findings from the clinical interviews, in which FHT was reported to be easy to use, simple, and intuitive []. The mean SUS score across respondents was 74 (out of 100), above the published average of 68 (a score >70 is considered good).

Acceptance and perceived usefulness of the FHT software were indicated by the number of practices agreeing to continue using it after the trial. A total of 18 of the 21 practices opted to continue using the software after the trial ended (practices were offered a 3-month extension), and 17 practices opted to continue using the tool into 2023‐2024.

Clinical Recommendations

The software included a menu option to “report recommendation query” if the GP or GPN thought a recommendation was appearing in error or wanted further information. Five queries about the clinical recommendations in FHT, from 3 practices, were received during the trial. The most frequent recommendation query centered on the clinical recommendations for raised platelets. The risk of undiagnosed cancer increases at a platelet count threshold of 400×10⁹/L, but different laboratories report an upper limit of either 400 or 450×10⁹/L; this caused some confusion among GPs when a patient was flagged with a count in the range of 400‐450×10⁹/L. This issue was addressed in training sessions and regular communications (monthly emails and newsletters), but the perceived error may have impacted some GPs’ willingness to use the tool and their trust in the recommendations. Interestingly, there were no queries about the recommendations for raised PSA (the FHT recommendations were based on current Australian guidelines for PSA follow-up, with a lower limit of 3 ng/mL in men over 50, whereas some laboratories report a lower limit of the normal range of 4 ng/mL). Established referral pathways and familiarity with the abnormal test as a cancer marker (raised platelets are a relatively new marker of cancer) may have contributed to this difference in response.
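To make the source of confusion concrete, here is a small sketch (the threshold values are from the text; the laboratory names and function are hypothetical): a count of, say, 420×10⁹/L exceeds FHT’s risk threshold but still sits inside a 450×10⁹/L laboratory reference range, so FHT flags a result that the lab report labels normal.

```python
FHT_PLATELET_THRESHOLD = 400                     # x 10^9/L, threshold used by FHT
LAB_UPPER_LIMIT = {"lab_a": 400, "lab_b": 450}   # hypothetical laboratories

def platelet_flag_status(count: float, lab: str) -> str:
    flagged_by_fht = count > FHT_PLATELET_THRESHOLD
    normal_on_report = count <= LAB_UPPER_LIMIT[lab]
    if flagged_by_fht and normal_on_report:
        # The 400-450 x 10^9/L zone described above: FHT flags the
        # result even though the lab report marks it as within range.
        return "flagged by FHT but within the lab's reference range"
    return "flagged by FHT" if flagged_by_fht else "not flagged"

print(platelet_flag_status(420, "lab_b"))
```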


Discussion

Overview

In this study, we describe a comprehensive process evaluation exploring the delivery of a complex intervention as part of a pragmatic, randomized trial, where a module to support cancer diagnosis was implemented in general practice. The process evaluation describes implementation gaps and the mechanisms that drive implementation successes and failures in order to provide context to the outcomes from the trial [].

Principal Findings

The FHT cancer module intervention did not demonstrate a significant improvement in the follow-up of abnormal test results in the patients flagged by the tool. While we hypothesize that high-performing practices across both arms may have produced a ceiling effect (ie, there was limited room for improvement given the high rates of follow-up in both arms), the absence of an intervention effect may also be due in part to implementation barriers, primarily relating to practice characteristics and contextual factors. Some practices had limited ability to engage with the tool because their patient population was not suited to the FHT module that was implemented. Given this variation in relevance and usefulness between practices, the FHT cancer module may be better targeted to certain practices based on size, location, and patient demographics.

Comparison to Prior Work

In comparison to interventions with only one component, complex interventions require more time and resources, and are, unsurprisingly, more difficult to implement [,]. We found that the uptake of the supportive components of the intervention was low, aside from some initial training on the software. It was also indicated in the interviews that the supporting components were not considered necessary to use the CDS. While the implementation of new software in general practice requires some training and support, the results of this process evaluation indicate that a scaled-back approach to implementation, one which aligns with the time and resources available to general practice, may have been sufficient for the CDS component of the tool []. However, given the null outcomes of the trial, the low uptake of the audit tool, and significant contextual factors (COVID-19 pandemic), more work is needed to determine the usefulness of each component, or combination of components, in supporting this type of change in practice.

Implementing new technologies in general practice is a complex and dynamic process, and despite the potential to improve patient outcomes, many tools have low uptake after implementation [,]. The trial consisted of a number of implementation strategies that aimed to optimize the uptake of FHT in routine care, applied primarily at the professional level (eg, education or training strategies targeting health care professionals and the identification of practice champions) []. We found that the use of a study coordinator facilitated the continued involvement and engagement of practices throughout the trial, similar to previously reported successful implementation strategies used in complex evaluations delivered in general practice. One overview of reviews concluded that practices supported by practice facilitators, who work with practices in areas such as QI, problem-solving, and education, were almost 3 times as likely to adopt evidence-based guidelines, and that practice facilitation improved the adoption of guidelines associated with many chronic diseases []. However, given the high staff turnover driven by the COVID-19 pandemic, identifying, maintaining, and replacing practice champions was difficult, which resulted in a loss of information and was a barrier to engagement for some practices.

Strengths and Limitations

This process evaluation was extensive, with a multimodal approach to collecting process data, including interviews, surveys, technical and software data, engagement logs, and implementation diaries. Interviews and usability surveys were carried out at 2 time points during the trial to address the dynamic nature of implementation barriers and facilitators and to capture how perceptions of the tool can change over time. This substantive evaluation provides context for a complex intervention and the environment in which it was implemented.

There were, however, some limitations. While all practices were invited to take part in or contribute to each component of the process evaluation, 3 practices did not participate in an interview at any timepoint or complete any surveys. The opt-in method for the interviews and surveys means that we may not have sufficiently captured the views of practices that were less engaged with the intervention. These 3 practices did contribute some data to the process evaluation through software data, technical information, and engagement logs, which were captured from all practices involved in the trial.

The burden of the COVID-19 pandemic in general practice and the resultant impact on staffing was a core theme throughout the process evaluation and provided context when interpreting the trial results. A second limitation was that the pandemic also likely impacted the time, availability, and resources for general practice staff to participate in the interviews and contributed to the low response rate for the usability survey. To mitigate this, we provided numerous opportunities for users to engage in interviews and respond to surveys throughout the trial and promoted such activities through the continued engagement with each practice champion.

Finally, we had originally planned to include additional software use statistics to complement the qualitative components of the evaluation; however, incomplete data prevented us from doing so. Software use data would have allowed us to triangulate users’ responses in interviews and surveys with their time using the software, including which parts of the tool they used and when. Future studies would benefit from including software use statistics to cross-check the qualitative results.

Implications and Future Research

There are implications for both research and practice. While the FHT cancer module did not increase the proportion of patients followed up according to guidelines, the process evaluation highlighted factors around usability, which facilitated the adoption and integration of the CDS component of the tool. This, coupled with the acceptability findings from separate clinical interviews [], and the willingness of the majority of practices to continue using the tool after the trial finished, indicates that different modules developed for use in FHT should be explored, as well as CDS tools for cancer diagnosis more broadly. There are also considerations for designing complex interventions that involve the use of a new technology. Given the low uptake of the supporting components of the tool, but indications of use and acceptability of the CDS component of the software, it is unclear whether a multifaceted implementation strategy is useful when implementing new CDS tools, especially if it has been carefully co-designed to meet the needs of users. Future work should be undertaken to determine if a scaled-back approach, which meets the time and resource availability of general practice, could be as effective in supporting the delivery of novel CDS tools.

Conclusions

This process evaluation highlights the implementation and process-related gaps that could be addressed in future studies that aim to implement diagnostic support tools for cancer in general practice. While some of the factors were context-specific (eg, driven by the COVID-19 pandemic), barriers such as time, resources, and practice variations, alongside considerations of design elements, could be built upon to optimize future CDS and QI programs.

We would like to thank the general practitioners, health care professionals, and consumers who provided input to the development of Future Health Today (FHT), piloted the tool in their practices, and participated in this trial. We are grateful to the consumer and general practice advisory groups who provided important insights into this trial. We would like to acknowledge the Primary Care Collaborative Cancer Clinical Trials Group (PC4) for their support on this project. Deidentified patient data from the Patron primary care data repository (extracted from consenting general practices), which has been created and is operated by the Department of General Practice and Primary Care, the University of Melbourne, were used to underpin trial feasibility, sample size calculations, and to evaluate the outcomes of the trial. The Paul Ramsay Foundation provided funding for staff costs related to the technical development of FHT and the Implementation and Evaluation team members. They have also provided funding for professional staff support and general practice and consumer advisory groups. The development of the cancer module of FHT was supported by the CanTest Collaborative (funded by Cancer Research UK C8640/A23385) of which JE is an Associate Director. JE is supported by an NHMRC Investigator grant (APP1195320). JMG is supported by a Victorian Cancer Agency Mid-Career Fellowship (MCRF21025).

Deidentified datasets analyzed during this study are available from the corresponding author on reasonable request.

Future Health Today has been developed and managed by the Department of General Practice and Primary Care, University of Melbourne, in collaboration with Western Health.

Edited by Naomi Cahill; submitted 15.08.24; peer-reviewed by Owain Jones, Raff Calitri; final revised version received 25.03.25; accepted 27.03.25; published 12.06.25.

© Sophie Chima, Barbara Hunter, Javiera Martinez-Gutierrez, Natalie Lumsden, Craig Nelson, Dougie Boyle, Kaleswari Somasundaram, Jo-Anne Manski-Nankervis, Jon Emery. Originally published in JMIR Cancer (https://cancer.jmir.org), 12.6.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Cancer, is properly cited. The complete bibliographic information, a link to the original publication on https://cancer.jmir.org/, as well as this copyright and license information must be included.
