Implementation Science 2023, 18(Suppl 3):S104
Background
The field of dissemination and implementation (D&I) research has grown in recent years. However, dissemination research has not coalesced to the same degree as implementation research. To advance the field of dissemination research, this review aimed to: (1) identify the extent to which dissemination frameworks are used in empirical dissemination studies, (2) examine how scholars define dissemination, and (3) identify key constructs from dissemination frameworks.
Methods
To achieve aims 1 and 2, we conducted a scoping review of dissemination studies published in D&I science journals. The search strategy included manuscripts published from 1985 to 2020. Articles were included if they were empirical quantitative or mixed methods studies about the dissemination of information to a professional audience. Studies were excluded if they were systematic reviews, commentaries or conceptual papers, scale-up or scale-out studies, qualitative or case studies, or descriptions of programs. To achieve aim 1, we compiled the frameworks identified in the empirical studies. To achieve aim 2, we compiled the definitions of dissemination from the frameworks identified in aim 1 and from dissemination frameworks identified in a 2021 review. To achieve aim 3, we compiled the constructs and their definitions from these frameworks.
Findings
Out of 6017 studies, 89 studies were included for full-text extraction. Of these, 45 (51%) used a framework to guide the study. Across the 45 studies, 34 distinct frameworks were identified, out of which 13 (38%) defined dissemination. There is a lack of consensus on the definition of dissemination. Altogether, we identified 48 constructs, divided into 4 categories: Process, Determinants, Strategies, and Outcomes. Constructs in the frameworks are not well defined.
Implications for D&I Research
This study provides a critical step in the dissemination research literature by offering suggestions on how to define dissemination research, and by cataloging and defining dissemination constructs. We will provide a critique of and reflection on the dissemination literature and offer suggestions on how to strengthen these definitions and the distinctions between dissemination and implementation research to advance the field of dissemination research.
Primary Funding Source
National Institutes of Health
S105 The priority aims and testable hypotheses (PATH) for implementation research: a scoping review
Bryan Garner1, Sheila Patel2, Sarah McDaniel2, Jackie Mungo2
1The Ohio State University, Columbus, OH, USA; 2RTI International, Research Triangle Park, NC, USA
Correspondence: Bryan Garner (bryan.garner@osumc.edu)
Implementation Science 2023, 18(Suppl 3):S105
Background: Eccles and Mittman (2006) defined implementation research as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services and care.” Similarly, the National Institutes of Health has defined implementation research as “the scientific study of the use of strategies to adopt and integrate evidence-based health interventions into clinical and community settings in order to improve patient outcomes and benefit population health.” Guided by these definitions, existing implementation research, general principles of data reduction, and a general framework for moderated mediation, we identified three priority aims and four priority testable hypotheses to advance generalizable knowledge. This presentation will present results of a scoping review to identify articles that have addressed the priority aims and testable hypotheses (PATH) for implementation research.
Methods: Using the five-stage approach developed by Arksey and O’Malley (2005) and advanced by Levac, Colquhoun, and O’Brien (2010), we conducted a scoping review of all research articles and short reports published between 2006 and 2020 in Implementation Science, Implementation Science Communications, or Implementation Research and Practice.
Findings: Of the 862 articles identified and coded, 43 (5%) assessed a PATH for implementation research. Advancing generalizable knowledge about the relationship between an implementation strategy and a health or health-related outcome (path c) was the most addressed priority aim, with 32 articles identified. Regarding the priority testable hypotheses, we identified 34 articles that tested an effectiveness hypothesis from a superiority trial, and 1 article that tested a cost-effectiveness hypothesis from a non-inferiority trial.
Implications for D&I Research: The PATH for implementation research were examined by few articles identified in key implementation-focused journals. To help the field develop one or more scientific theories as defined by the National Academy of Sciences (i.e., a comprehensive explanation of the relationship between variables that is supported by a vast body of evidence), there is an urgent need for more PATH-centered implementation research.
Primary Funding Source
National Institutes of Health
S106 Toward a more comprehensive understanding of organizational influences on implementation: The organization theory for implementation science (OTIS) framework
Sarah Birken1, Jennifer Leeman2,3, Linda Ko4, Alexandra Peluso1, Cheyenne Wagi1, Mary Wangen5, Maria E. Fernandez6, Manal Masud4, Terry Huang7, Matthew Lee8, Grace Ryan9, Prajakta Adsul10, Mimi Choy-Brown11, Jure Baloh12, Michelle C. Kegler13, Hannah Arem14
1Wake Forest University School of Medicine, Winston-Salem, NC, USA; 2School of Nursing, University of North Carolina, Chapel Hill, NC, USA; 3Lineberger Comprehensive Cancer Center, Chapel Hill, NC, USA; 4University of Washington, Seattle, WA, USA; 5University of North Carolina, Chapel Hill, NC, USA; 6The University of Texas Health Science Center at Houston School of Public Health, Houston, TX, USA; 7The City University of New York, New York, NY, USA; 8NYU Grossman School of Medicine, New York, NY, USA; 9University of Massachusetts Chan Medical School, Worcester, MA, USA; 10University of New Mexico, Albuquerque, NM, USA; 11University of Minnesota, St. Paul, MN, USA; 12University of Arkansas for Medical Sciences, Little Rock, AR, USA; 13Rollins School of Public Health, Atlanta, GA, USA; 14MedStar Health Research Institute, Hyattsville, MD, USA
Correspondence: Sarah Birken (sbirken@wakehealth.edu)
Implementation Science 2023, 18(Suppl 3):S106
Background: Theoretical frameworks contribute to understanding and addressing evidence-based practice (EBP) implementation by synthesizing multiple theories’ constructs. For example, the Theoretical Domains Framework synthesizes constructs from 33 psychological theories for implementation scientists’ use. Similar frameworks do not exist for organization theories, which explain how and why organizations adopt, implement, and sustain EBP use. Although their utility is increasingly acknowledged, organization theories remain underused in implementation science. To advance their use among implementation scientists, we synthesized organization theory constructs in the Organization Theory for Implementation Science (OTIS) framework.
Methods: We recruited organization and implementation scientists to participate in an online concept mapping exercise in which they sorted 70 constructs from 9 theories identified in our previous work into domains representing similar theoretical concepts. Participants also used a five-point scale to rate each construct’s influence on implementation and potential for modification. Multidimensional scaling and hierarchical cluster analyses were used to produce visual representations of the relationships among the constructs in concept maps. To interpret concept maps, we engaged members of the Cancer Prevention and Control Research Network OTIS workgroup.
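The analytic core of concept mapping, aggregating participants' card sorts into a similarity matrix and then applying multidimensional scaling and hierarchical clustering, can be illustrated with a minimal sketch. The construct labels are drawn from the abstract; the sort data, pile assignments, and cluster count are hypothetical, not the OTIS study data.

```python
# Minimal concept-mapping sketch (hypothetical data, not the OTIS study data):
# participant sorts -> co-occurrence matrix -> MDS -> hierarchical clustering.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

constructs = ["inertia", "size", "feedback loops", "transaction costs",
              "sensemaking", "coercive pressure"]          # illustrative subset
# Each participant's sort assigns every construct to a pile (hypothetical sorts).
sorts = [
    [0, 1, 2, 2, 0, 3],
    [0, 1, 1, 2, 0, 3],
    [0, 1, 2, 2, 3, 3],
]

n = len(constructs)
cooccurrence = np.zeros((n, n))
for sort in sorts:
    for i in range(n):
        for j in range(n):
            if sort[i] == sort[j]:
                cooccurrence[i, j] += 1          # sorted into the same pile

# Convert similarity (co-sort counts) to dissimilarity for MDS.
dissimilarity = len(sorts) - cooccurrence
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Hierarchical clustering on the 2-D point map; cut into a chosen cluster count.
tree = linkage(coords, method="ward")
clusters = fcluster(tree, t=3, criterion="maxclust")
for label, cluster in zip(constructs, clusters):
    print(f"{label}: cluster {cluster}")
```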
Findings: Twenty-five experts participated in concept mapping. OTIS workgroup members selected the 10-cluster solution based on included construct groupings’ coherence. Workgroup members then reorganized clusters to increase coherence, yielding 8 final OTIS framework domains: organizational dynamics (e.g., inertia); organizational structure (e.g., size); internal processes (e.g., feedback loops); tasks and technology (e.g., transaction costs); knowledge/insight (e.g., sensemaking); interorganizational relationships (e.g., coercive pressure); organizational field characteristics (e.g., selection pressure); networks/ties (e.g., cohesion).
Implications for D&I Research: We will present a detailed description of our synthesis of 70 constructs from 9 organization theories into 8 domains. The OTIS framework has the potential to increase awareness and use of key concepts from organization theories among implementation scientists. Applications of the OTIS framework will enhance understanding of organizational influences on EBP implementation, promote theory-driven strategies for organizational change, and allow for refinement of the framework, which we view to be a living tool to be improved through application. Next steps include testing the OTIS framework in implementation research and adapting it for use among policymakers and practitioners.
Primary Funding Source
Centers for Disease Control and Prevention
S107 The implementation in context (ICON) framework: a meta-framework of context for dissemination and implementation science
Janet E. Squires1, Ian D. Graham1,2, Alison M. Hutchinson3, Wilmer J. Santos4, Shelly A. Li5, Melissa Demery Varin6, ICON Team6
1Ottawa Hospital Research Institute, Ottawa, Canada; 2University of Ottawa, Ottawa, Canada; 3Barwon Health, Geelong, Australia; 4The Ottawa Hospital Research Institute, Ottawa, ON, Canada; 5Al & Malka Green Artists' Health Centre, Toronto, Canada; 6University of Ottawa, Ottawa, ON, Canada
Correspondence: Janet E. Squires (janet.squires@uottawa.ca)
Implementation Science 2023, 18(Suppl 3):S107
Background: Context modifies the effects of dissemination and implementation strategies to increase healthcare professionals' use of research evidence in clinical practice. However, conceptual clarity about what comprises "context" is lacking. The purpose of this study was to develop a meta-framework of context domains, attributes, and features relevant to dissemination and implementation.
Methods: We conducted a meta-synthesis of data from three interrelated studies on context: 1) a concept analysis of published literature on context (n=70 studies), 2) a secondary analysis of healthcare professional interviews (n=145) examining context across 11 unique studies, and 3) a descriptive qualitative study comprising interviews with health system stakeholders (n=39) to elicit their tacit knowledge on the attributes and features of context that are important for improved research use by healthcare professionals. A rigorous protocol was followed for the meta-synthesis. Following this synthesis across studies, ICON was further refined through feedback from experts in context and implementation science.
Findings: In ICON, context is conceptualized in 3 levels: micro (individual), meso (organizational), and macro (external). The three levels are comprised of 6 contextual domains: 1) actors (micro), 2) organizational climate and structures (meso), 3) organizational social behaviour (meso), 4) organizational response to change (meso), 5) organizational processes (meso), and 6) external influences (macro). These 6 domains contain 22 core attributes of context and 108 features that illustrate these attributes.
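As a concrete handle on the hierarchy reported above, the ICON levels and domains can be pictured as a nested structure. This is only an illustrative sketch using the domain names from the abstract; the 22 attributes and 108 features are left as empty placeholders.

```python
# Sketch of the ICON hierarchy reported above: 3 levels containing 6 domains.
# Attribute and feature lists would nest under each domain; placeholders here.
ICON = {
    "micro (individual)": {
        "actors": [],
    },
    "meso (organizational)": {
        "organizational climate and structures": [],
        "organizational social behaviour": [],
        "organizational response to change": [],
        "organizational processes": [],
    },
    "macro (external)": {
        "external influences": [],
    },
}

for level, domains in ICON.items():
    print(level, "->", ", ".join(domains))
```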
Implications for D&I Research: ICON is the only implementation meta-framework of context available to guide dissemination and implementation efforts of knowledge users and researchers. It provides a comprehensive and critically needed understanding of the context domains, attributes and features relevant to healthcare professionals’ use of research in clinical practice. ICON will assist with the development of common assessment tools to measure context to tailor dissemination and implementation intervention design and delivery. It can also be used to better interpret the effects of dissemination and implementation interventions, and to pragmatically guide knowledge users in their implementation efforts.
Primary Funding Source
Canadian Institutes of Health Research (CIHR)
S108 Iterative PRISM (I-PRISM): background, rationale, key functions and early results
Russell Glasgow
University of Colorado School of Medicine, Aurora, CO, USA
Correspondence: Russell Glasgow (russell.glasgow@cuanschutz.edu)
Implementation Science 2023, 18(Suppl 3):S108
Background: There have been consistent calls for implementation science to be more conceptually based, pragmatic, rapid and nimble, and evidence-based, and to have high levels of engagement and multi-sector collaboration. It is difficult to simultaneously meet all these objectives. The Iterative Practical, Robust Implementation and Sustainability Model (PRISM, which includes the Reach, Effectiveness, Adoption, Implementation, Maintenance [RE-AIM] outcomes), which we refer to as I-PRISM, is designed to address many of these aspirational goals in a feasible implementation package that can be used to guide adaptations during planning, implementation, and sustainment.
Methods: Preliminary work in multiple VA settings demonstrated that an iterative approach based on RE-AIM was feasible, well received, and applicable across a wide range of different projects, teams, and content areas. We then developed I-PRISM which is a contextual expansion of Iterative RE-AIM that specifies key contextual factors to consider when evaluating data on RE-AIM outcomes. The key functions involved in I-PRISM are: educate teams on use of PRISM to set priorities and evaluate progress; obtain independent input from team members; summarize results in visual displays showing differences between priorities and progress; facilitate team discussion and goal setting; collaboratively develop and evaluate adaptations; and periodically repeat this process.
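The priority-versus-progress comparison at the heart of this iterative process can be sketched as a simple gap calculation. The team-member ratings below are hypothetical placeholders, not data from any of the projects described.

```python
# Sketch of a priority-vs-progress summary of the kind described above.
# Ratings are hypothetical 1-5 scores from three team members, not study data.
from statistics import mean

ratings = {
    # outcome: ([priority ratings], [progress ratings])
    "Reach":          ([5, 5, 4], [2, 1, 2]),
    "Effectiveness":  ([4, 4, 5], [3, 4, 3]),
    "Adoption":       ([4, 3, 4], [3, 3, 2]),
    "Implementation": ([5, 4, 4], [3, 3, 4]),
    "Maintenance":    ([3, 3, 4], [2, 2, 3]),
}

gaps = {
    outcome: mean(priority) - mean(progress)
    for outcome, (priority, progress) in ratings.items()
}

# The largest gaps between priority and progress are discussed first and become
# candidates for team-selected adaptations.
for outcome, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{outcome}: gap = {gap:.1f}")
```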
Findings: We will summarize three types of results across the presentations: RE-AIM outcomes prioritized by implementation teams; areas of greatest gaps between priorities and progress; and data on PRISM contextual factors related to RE-AIM outcomes. Across projects, the RE-AIM outcome with the greatest gap between priority and progress was Reach; the areas in which adaptations were made most often were Reach and Implementation.
Implications for D&I Research: Based upon feasibility work in multiple settings, we have developed a conceptually based and data-driven implementation strategy bundle that aids implementation teams in responding to changing context, priorities, and level of progress on different outcomes. This I-PRISM package, or variants of it, has been applied in several ongoing projects described in other panel presentations and integrated into an interactive webtool described in the final presentation. There are needs for improvement, replication, and comparative effectiveness research, but I-PRISM appears to address many of the implementation science challenges outlined above.
Primary Funding Source
Department of Veterans Affairs
S109 Iterative use of RE-AIM/PRISM in a hypertension control trial in Guatemala
Meredith Fort
Colorado School of Public Health, Aurora, CO, USA
Correspondence: Meredith Fort (meredith.fort@cuanschutz.edu)
Implementation Science 2023, 18(Suppl 3):S109
Background
Uncontrolled hypertension presents a substantial burden in Guatemala and other low- and middle-income countries. In 2019, the Guatemalan Ministry of Health (MOH) began implementing a multi-component program to improve hypertension control in rural communities, using a type 2 hybrid effectiveness-implementation design. RE-AIM/PRISM was selected as the guiding D&I framework.
Methods: Prior to implementation, we conducted a multi-methods needs assessment to capture perspectives at different levels within the Guatemalan public primary care system and rural communities. We developed implementation tracking forms that were filled out by implementers (MOH staff; primarily auxiliary nurses). Local-level evaluators captured data using forms to assess key aspects of context within health posts (availability of medications, blood pressure monitors, and staff turnover). The study team met regularly with the MOH to be aware of broader contextual changes. During the COVID-19 pandemic the study team made phone calls to implementers and patients to gain insight into their experiences and to inform adaptations. Qualitative assessment of PRISM domains and RE-AIM outcomes prior to, during and post-implementation complemented routine implementation and patient assessments.
Findings: Routine assessment of medication availability was identified as a top priority. The study team reviewed and reflected on changes in implementation and medication availability, and discussed staff turnover and implications for the PRISM Implementation and sustainability infrastructure domain; these discussions usually led to reaching out to different actors in the MOH at the central, provincial, or local levels. We reviewed Reach during initial meetings and determined it would be difficult to influence in the short-term. The COVID-19 pandemic resulted in restrictions to public transportation, reduction in face-to-face meetings with providers, and additional responsibilities for health workers. Priority adaptations included: a change in how training was conducted and increased flexibility in providing medications. Broader contextual factors were also discussed by implementers.
Implications for D&I Research: To capture changes in the context and program implementation, it was important to assess RE-AIM and PRISM components on a regular basis. While some components such as reach, representativeness, and system-level capacity may be challenging to influence in the short term, they are important to capture and understand to promote equitable long-term participation and delivery.
Primary Funding Source
National Institutes of Health
S110 Using I-PRISM with RE-AIM dashboard to speed implementation of lung ultrasound in the management of patients with COVID-19
Anna Maw
University of Colorado School of Medicine, Aurora, CO, USA
Correspondence: Anna Maw (anna.maw@cuanschutz.edu)
Implementation Science 2023, 18(Suppl 3):S110
Background: In early 2020, the pandemic heightened the need for rapid implementation of new inpatient practices to cope with the high volume of patients admitted for COVID-19. In this context, point of care lung ultrasound (LUS) was seen as a promising alternative to traditional radiology-performed chest imaging.
Methods: We performed an implementation pilot study at a single academic center to rapidly implement LUS among hospitalists caring for patients admitted with COVID-19. Given the urgency of the pandemic, we sought an approach that would: 1) offer rapid, real-time data to monitor the progress of implementation, and 2) enable rapid assessment of contextual barriers using I-PRISM to guide adaptations to our implementation strategies. Using a convergent mixed methods design, we developed a novel ‘RE-AIM Dashboard’ that displayed quantitative RE-AIM outcomes prioritized by our hospitalist implementers using data extracted from the EHR and was automatically updated every 48 hours. In addition, we used I-PRISM to qualitatively assess contextual barriers to implementation through hospitalist interviews. In bi-weekly implementation team meetings, we jointly considered emerging trends in quantitative Reach and Adoption rates and qualitative I-PRISM barriers to guide decisions on planned adaptations to our implementation strategies.
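The dashboard's core quantities reduce to simple ratios computed from an EHR extract. The sketch below shows one way such Reach and Adoption figures could be calculated; the field names and eligibility rule are hypothetical illustrations, not the study's actual data model.

```python
# Sketch of Reach and Adoption calculations a RE-AIM dashboard might refresh
# from an EHR extract every 48 hours. Field names and the eligibility rule are
# hypothetical, not the study's actual data model.
from dataclasses import dataclass

@dataclass
class Admission:
    patient_id: str
    covid_positive: bool
    lus_performed: bool      # lung ultrasound documented during the stay
    attending_id: str
    lus_order_placed: bool   # attending placed a LUS order

def reach(admissions):
    """Proportion of eligible (COVID-19) admissions that received LUS."""
    eligible = [a for a in admissions if a.covid_positive]
    if not eligible:
        return 0.0
    return sum(a.lus_performed for a in eligible) / len(eligible)

def adoption(admissions):
    """Proportion of attending hospitalists who placed at least one LUS order."""
    attendings = {a.attending_id for a in admissions if a.covid_positive}
    adopters = {a.attending_id for a in admissions
                if a.covid_positive and a.lus_order_placed}
    return len(adopters) / len(attendings) if attendings else 0.0

admissions = [
    Admission("p1", True, True, "dr_a", True),
    Admission("p2", True, False, "dr_b", False),
    Admission("p3", False, False, "dr_a", False),
]
print(f"Reach: {reach(admissions):.0%}, Adoption: {adoption(admissions):.0%}")
```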
Findings: Over a one-year period, 24 meetings were conducted. Over this period, Reach ranged from 0% to 2%, and order Adoption rose from 0% to 50%. Key I-PRISM barriers, such as limited dedicated time for hospitalist training, led to the subsequent deployment of six sequential implementation strategies and modest increases in LUS integration into clinical practice. Once built by our information technology team, the Iterative RE-AIM Dashboard provided automated updates regarding the extent and representativeness of Reach and Adoption without additional staff resources.
Implications for D&I Research: We found I-PRISM in conjunction with a RE-AIM operations Dashboard was a highly feasible and low-burden way to rapidly and repeatedly evaluate implementation progress, assess for new or persistent barriers, and identify any disparities in Reach. Given the growing availability of dashboards to display health system data, our findings suggest I-PRISM used in conjunction with a RE-AIM dashboard is a promising and feasible means of monitoring implementation progress and informing mid-course adaptations.
Primary Funding Source
National Institutes of Health
S111 An interactive, visual webtool to guide the pragmatic and iterative use of I-PRISM
Katy Trinkley
University of Colorado Anschutz Medical Campus, Aurora, CO, USA
Correspondence: Katy Trinkley (Katy.trinkley@cuanschutz.edu)
Implementation Science 2023, 18(Suppl 3):S111
Background: To speed research translation, many have called for ways to make implementation science methods and models more accessible and to provide more concrete guidance for researchers and practitioners. In response to this need, we created an interactive webtool to guide both English- and Spanish-speaking users from diverse backgrounds through the process of applying PRISM to nimbly adapt programs during planning, implementation, and sustainment.
Methods: We used a human-centered design process and iteratively engaged potential users who were in various phases of implementing different types of programs. Multisector engagement included native English- and Spanish-speaking individuals and implementation teams from government, community, public health, academic, and healthcare settings. The goal was to create a user-friendly, interactive tool that facilitated systematic and flexible assessment of a program’s contextual alignment using the PRISM context domains and the pragmatic RE-AIM outcomes. Based on this information, the tool would then guide development of feasible and impactful adaptations across all implementation phases. Iterative user testing using low fidelity mockups and the think aloud method supported co-creation of the content, wording, navigation, visual displays, and overall usability of the webtool.
Findings: We will provide a demonstration of the webtool. The I-PRISM webtool is designed to be used by individuals or implementation teams and includes a set of assessment items aligned with PRISM context and RE-AIM outcomes that were refined through multisectoral engagement. The output based on answers to the assessment items is presented in graphical and tabular visual displays. After reviewing the output, users are prompted to develop and select adaptation action plans they estimate to be both feasible and impactful. Users are encouraged to download their results and use the webtool iteratively over a program’s life cycle. Considerations for equity are integrated throughout the webtool.
Implications for D&I Research: The I-PRISM webtool is a public good co-created with multisector engagement to guide the use of PRISM for iterative adaptations across the life cycle of a program. The webtool was designed to be generalizable across diverse settings and programs and will be refined over time to maximize ease of use.
Primary Funding Source
National Institutes of Health
S112 Commonality and co-occurrence of discrete strategies within implementation strategy bundles: results from a living review of global HIV implementation research
Sita Lujintanon
Johns Hopkins School of Public Health, Baltimore, MD, USA
Correspondence: Sita Lujintanon (slujint1@jhu.edu)
Implementation Science 2023, 18(Suppl 3):S112
Background
Health services and innovations are delivered through implementation strategy bundles that are often complex, comprising numerous discrete strategies. Detailed assessment of the usage patterns of different types of discrete strategies within real-world strategy bundles would enable classification of discrete strategies based on commonality and co-occurrence. We leveraged the Living Database of HIV Implementation Science (LIVE) to describe patterns of discrete strategy usage within published implementation strategy bundles.
Methods
A systematic review was conducted to include any study published from 2004 to 2021, conducted in any low- and middle-income country (LMIC), that described implementation, including strategies, and reported ≥1 HIV cascade outcome. Implementation strategies were inductively specified according to actor, action, and action target.
Findings
Between January 2014 and July 2022, 44,126 abstracts were screened; 555 studies met inclusion criteria, in which 3,315 discrete implementation strategies were identified. The median number of reported strategies per study was 4 (range: 1-13); 88.8% of studies reported using multiple strategies. The most common actors were researchers (48.8%), unspecified health providers (42.0%), and health associate professionals (e.g., counselors, community health workers, lay health workers; 41.8%). The most common action targets were people living with HIV (78.6%), the health system (54.1%), and unspecified providers (29.5%). The most common action was providing education on a health innovation/service/behavior (365 studies; 65.8%). Many studies using this action also used training to learn a new skill (35.1%), providing community-based services (32.3%), and providing psychosocial support counseling (22.7%). The second most common action was training to learn a new skill (180 studies; 32.4%). Many studies using this action also used providing education on a health innovation/service/behavior (71.1%), supervising/mentoring/coaching/facilitating (37.8%), and providing community-based services (33.9%).
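The kind of co-occurrence tabulation reported above can be sketched as a count over study-level strategy sets. The study records and action labels below are hypothetical illustrations, not entries from the LIVE database.

```python
# Sketch of a co-occurrence tabulation: for each pair of discrete actions, count
# how many studies report both within their strategy bundle. Hypothetical records.
from itertools import combinations
from collections import Counter

studies = {
    "study_1": {"provide education", "training to learn a new skill",
                "community-based services"},
    "study_2": {"provide education", "psychosocial support counseling"},
    "study_3": {"training to learn a new skill", "supervising/mentoring/coaching"},
}

action_counts = Counter()
pair_counts = Counter()
for actions in studies.values():
    action_counts.update(actions)
    pair_counts.update(frozenset(pair) for pair in combinations(sorted(actions), 2))

for pair, n in pair_counts.most_common(3):
    a, b = sorted(pair)
    # Conditional co-occurrence: of studies using action a, the share also using b.
    print(f"{a} + {b}: {n} studies "
          f"({n / action_counts[a]:.0%} of studies using '{a}')")
```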
Implications for D&I Research
This large and comprehensive review of HIV-related implementation research from LMICs found that discrete implementation strategies are very frequently used in combination and feature multiple actors, actions, and action targets. This expands our understanding of how strategies are being reported and used in published research, and calls for improved strategy bundle specification and taxonomy. Further research should also assess and optimize strategy bundles, including uncommon and underused strategies, to inform effective, transferable health service delivery approaches.
Primary Funding Source
Bill and Melinda Gates Foundation
S113 Methods for implementation science systematic reviews
Noelle Le Tourneau
Washington University in St. Louis, St. Louis, MO, USA
Correspondence: Noelle Le Tourneau (lnoelle@wustl.edu)
Implementation Science 2023, 18(Suppl 3):S113
Background
Evidence synthesis tools have been designed primarily for trials evaluating efficacy; there are no guidelines for the synthesis of implementation research. Implementation science relies on measures and study designs that reflect real-world conditions and are often mixed methods, including pragmatic trials and observational, preference, and qualitative evidence. To inform implementation guidelines and policies, evidence syntheses should incorporate a broad set of study designs, appraisal tools, and implementation science frameworks.
Methods
We searched the literature, conferred with experts, and tested tools to create a method for conducting implementation systematic reviews, incorporating implementation science frameworks into routine systematic review methods. We compiled a set of tools to characterize implementation strategies, assess implementation outcomes, classify implementation trial types, and evaluate pragmatism in RCTs. We assembled tools to assess the methodological quality of RCTs, natural experiments, cohort, cross-sectional, qualitative, preference, and mixed methods studies within implementation science reviews.
Findings
We identified 10 tools that, in combination, assessed the implementation and methodological quality of implementation research, as well as provided synthesis guidelines for mixed methods systematic reviews. Four tools characterized implementation components: a modified version of the Proctor and TIDieR tools to classify implementation strategies, the Proctor et al. framework to characterize and assess implementation outcomes, a modified version of the PRECIS-2 tool to evaluate pragmatism in trials, and Curran’s framework to characterize implementation trials. We also identified five tools to assess the methodological quality of all study designs central to implementation science reviews: the Cochrane Risk of Bias tool (RoB 2) for RCTs, the Newcastle-Ottawa Scale for observational (cohort, cross-sectional, natural experiment, quasi-experimental) studies, the Joanna Briggs Institute (JBI) critical appraisal checklist for qualitative research, the ISPOR checklist for preference studies, and the Mixed Methods Appraisal Tool (MMAT). We further identified the JBI guidelines for data synthesis and integration in mixed methods systematic reviews.
Implications for D&I Research
This set of tools will allow investigators to assess methodological quality and synthesize evidence from a variety of implementation science study designs to appropriately inform implementation. Establishing best practices for implementation science evidence synthesis, including consistent methodology, language, and reporting standards in implementation systematic reviews, is crucial to advancing the field of implementation science.
Primary Funding Source
Bill and Melinda Gates Foundation
S114 Reporting and measures of implementation outcomes from HIV-related implementation research grants
Sheree Schwartz
Johns Hopkins University, Washington, DC, USA
Correspondence: Sheree Schwartz (sschwartz@jhu.edu)
Implementation Science 2023, 18(Suppl 3):S114
Background
While the landscape of HIV-related implementation research (IR) funding has grown in recent years, less is known about reporting frequency and measures of implementation outcomes (IOs) within HIV IR.
Methods
We leveraged a previous landscaping analysis of all NIH-funded, HIV-related IR grants from 2013-2017. All publications linked to these awards in NIH RePORTER through January 1, 2021 were screened for whether they were original research publications reporting data emanating directly from the funded grant. Publications derived from the awards were reviewed and IOs identified per Proctor’s Implementation Outcomes taxonomy, as well as the ‘Reach’ outcome from RE-AIM. We describe grant- and paper-level findings.
Findings
Among 215 HIV-related IR NIH-funded grants, 59.0% (n=127) had published original research results by January 2021, resulting in 431 publications. Overall, 119/431 (27.6%) publications reported any IOs, representing IOs from 61/215 (28.4%) funded grants and 61/127 (48.0%) grants with publications. On average, grants with any publications reported a mean of 0.9 [sd:1.4] IOs. Among the 119 publications reporting IOs, the mean number per publication was 1.7 (sd:0.9, range:1-5). The outcomes most commonly reported were acceptability (n=75 papers; 35.4% [45/127] of grants with publications), appropriateness (n=39 papers; 19.7% [25/127] grants), feasibility (n=29 papers; 15.8% [20/127] grants), cost (n=20 papers; 5.5% [7/127] grants), adoption (n=16 papers; 11.0% [14/127] grants), and fidelity (n=13 papers; 9.5% [12/127] grants); penetration, sustainability and reach were reported in ≤5 papers each. Among the three most reported IOs, most acceptability (71%) and appropriateness (85%) outcomes were measured qualitatively, whereas 69% of feasibility outcomes were assessed quantitatively. The proportion of papers reporting IOs varied by EPIS phases, with IO reporting by 7.3% (n=16/220), 42.4% (n=50/118), 57.1% (n=48/84) and 55.6% (n=5/9) of papers at the ‘Exploratory’, ‘Preparatory’, ‘Implementation’ and ‘Sustainment’ phases, respectively (p<0.001). When considering grant mechanisms, 22% of R34-awards reported an IO, 27% of K-awards, 29% of R01-awards and 47% of R21-awards (p=0.14).
Implications for D&I Research
Overall, fewer than one-third of papers and grants reported IOs, though further publications may be forthcoming. Increased reporting of IOs including adoption, fidelity and other later-stage IOs improves the interpretation of effectiveness data and ultimately, supports the optimal impact of real-world implementation.
Primary Funding Source
Bill and Melinda Gates Foundation
S115 Adaptation of implementation strategies: a mechanistic framework based on literature review
Elvin Geng
Washington University in St. Louis, St. Louis, MO, USA
Correspondence: Elvin Geng (elvin.geng@wustl.edu)
Implementation Science 2023, 18(Suppl 3):S115
Background
Many implementation targets (e.g., health care workers, patients, or organizations) differ from each other yet are not absolutely unique. While the design of implementation strategies can account for some of these differences at the outset, changes over time (and differences in those changes across settings) are unavoidable. Implementation strategies may need to adapt to optimize their intended effects.
Methods
We conducted a mechanistic review in which we searched for adaptations of implementation strategies and extracted the components and causal relationships of these adaptations. We use a simple directed acyclic graph (DAG) to represent concepts as nodes and potential causal effects as arrows. We adhere to the convention in which two arrows pointing into a node imply effect modification on at least one scale.
Findings
Our diagram suggests that adaptation of implementation strategies requires three necessary steps. First, implementation strategies have effects on intended implementation outcomes. Second, these effects in turn modify, alter, intensify, or otherwise change the strategy used. Third, the modified strategy then itself has an effect on implementation outcomes. These three steps must be present for adaptation to occur, and they may or may not be accompanied by initial responses’ effects on final responses independently of the changes incurred, or by the initial strategy’s effect on the effects of the changed strategy; that is, an altered strategy’s effects could be constrained or potentiated by the initial approach. The initial strategy also has effects on the final outcomes that are in part independent of the change in strategy.
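A minimal encoding of the three-step structure described above as a DAG is sketched below. The node names are illustrative shorthand, not the authors' labels, and the graph is only one plausible rendering of the described relationships.

```python
# Minimal encoding of the three-step adaptation structure described above as a
# directed acyclic graph. Node names are illustrative shorthand, not the authors'.
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("initial strategy", "initial outcome"),    # step 1: strategy affects outcomes
    ("initial outcome", "modified strategy"),   # step 2: observed effects drive adaptation
    ("modified strategy", "final outcome"),     # step 3: adapted strategy affects outcomes
    ("initial strategy", "final outcome"),      # effects of the initial strategy that are
                                                # independent of the change in strategy
])

assert nx.is_directed_acyclic_graph(dag)
print(list(nx.topological_sort(dag)))
# Two arrows into "final outcome" (from the initial and modified strategies) follow
# the stated convention that the initial approach can constrain or potentiate the
# adapted strategy's effects (effect modification on at least one scale).
```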
Implications for D&I Research
A mechanistic examination of the adaptation of implementation strategies provides several novel conceptual insights. First, the effects of an adapted strategy differ from the effects of the adaptive strategy considered as a whole. Second, adaptive implementation strategies may also be accompanied by non-adaptive pathways. Third, adaptations are themselves a consequence of the initial strategies deployed, and the effects of adaptations are influenced by the initial actions. The diagram also points to classes of research questions about adaptations of implementation strategies, such as dosing, mechanism of adaptation, thresholds for change, and interaction.
Primary Funding Source
Bill and Melinda Gates Foundation
S116 Illuminating the impacts of outer context on implementation
Ms. Gracelyn Cruden1, Holle Schaper1, Shelley Crawford1, Dylan Wong2, Lisa Saldana1
1Oregon Social Learning Center, Eugene, OR, USA; 2University of South Carolina, Columbia, SC, USA
Correspondence: Gracelyn Cruden (gracelync@oslc.org)
Implementation Science 2023, 18(Suppl 3):S116
Background
The role of outer context in implementation success has repeatedly been theorized and demonstrated. Yet most theories and frameworks refer to an amorphous “outer context” without further specification. Unsurprisingly, then, the impact of outer context on implementation often is not measured or tracked during implementation, limiting opportunities to proactively design implementation strategies that might mitigate known or unforeseen outer context events. This study sought to: 1) identify common outer context dimensions in implementation frameworks and define additional dimensions; and 2) specify and track how outer context is impacting ongoing implementation efforts using a new Outer Context Module to accompany the Stages of Implementation Completion (SIC), a validated measure of implementation fidelity.
Methods
A preliminary outer context taxonomy was created by reviewing implementation frameworks (e.g., CICI, EPIS, CFIR). Next, a scoping review was undertaken to expand the taxonomy. The review explored how outer context impacts are theorized, evaluated, and described in the peer-reviewed and grey literature.
The SIC Outer Context Module was informed by the preliminary taxonomy, experts in implementation fidelity monitoring with the SIC, and data from a pilot module. The Module includes: Topics (to categorize Outer Context Events within the Outer Context taxonomy), levels at which Events occur (i.e., National, regional, local, service system), and implementation Effects (e.g., in-person trainings paused).
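One way to picture the Module's structure is as a simple event record combining a taxonomy topic, the level at which the event occurred, and its observed effect. The categories are drawn from the abstract; the specific event below is a hypothetical illustration, not pilot data.

```python
# Sketch of an Outer Context Module record as described above. Topic, level, and
# effect text echo the abstract's categories; the specific event is hypothetical.
from dataclasses import dataclass

@dataclass
class OuterContextEvent:
    topic: str        # dimension from the outer context taxonomy
    level: str        # national, regional, local, or service system
    effect: str       # observed impact on the implementation
    direction: str    # e.g., positive, neutral, or barrier

event = OuterContextEvent(
    topic="policy and politics",
    level="national",
    effect="in-person trainings paused",
    direction="barrier",
)
print(event)
```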
Findings
Eight dimensions initially were identified: implementation processes at other sites; resources from other implementations; eligible population in community; policy and politics; funding, contracting; natural disaster; social, ethical, cultural; leadership. The scoping review identified a new dimension—Infrastructure. Pilot Outer Context Module data includes 49 Effects from 37 Events across 10 implementations. Two Topics emerged from pilot data: Infectious Disease Outbreak; Workforce Challenges. Most Events were categorized as policy or politics (33%), 85% of which related to COVID-19. Events had positive, neutral, and higher barrier Effects. Effects often entailed modifying implementation timelines (e.g., delaying training, delaying in-person site visit).
Implications for D&I Research
Increasing specificity in how outer context is defined can improve monitoring and generalizable measurement of how context affects implementation. Measuring outer context impacts can increase understanding of how implementation efforts can plan for and successfully overcome outer context challenges with targeted strategies.
Primary Funding Source
National Institutes of Health
S117 Using the framework for reporting adaptations and modifications to evidence-based implementation strategies (FRAME-IS) to document modifications to an adaptive implementation strategy
Rose Garza-Hennessy1, Nicholas Schumacher2, Nora Jacobson3, Andrew Cohen4, Jilian Landeck3, Paul Hunter3, Morgan Burns5, Andrew Quanbeck5
1University of Wisconsin-Madison, Madison, WI, USA; 2University of Wisconsin-Madison, Madison, WI, USA; 3University of Wisconsin, Madison, WI, USA; 4Bellin Health, Green Bay, WI, USA; 5Department of Family Medicine and Community Health, University of Wisconsin-Madison, Madison, WI, USA
Correspondence: Andrew Quanbeck (arquanbe@wisc.edu)
Implementation Science 2023, 18(Suppl 3):S117
Background: The strategies used to implement evidence-based practices (EBPs) often require modifications. A systematic approach to documenting such modifications has not yet been widely adopted. The 2021 FRAME-IS is a novel framework that allows researchers to characterize both proactive and reactive changes to implementation strategies. Few publications have demonstrated the application of the FRAME-IS. The Balanced Opioid Initiative, an NIH-funded trial that tested the use of systems consultation to promote adherence to the CDC Guideline for Prescribing Opioids for Chronic Pain, provided a timely opportunity to assess the utility of the FRAME-IS.
Methods: An interdisciplinary team of researchers and implementers met to document modifications across core modules of the FRAME-IS for the four ISs that make up the package of systems consultation used in the Balanced Opioid Initiative: (1) Audit & Feedback; (2) Educational Meetings; (3) Practice Facilitation; and (4) Prescriber Peer Consulting. Modifications were necessary due to COVID-19, the rise of telemedicine, the changing landscape of opioid prescribing, and variations between healthcare systems.
Findings: The Balanced Opioid Initiative was implemented in 32 clinics within two healthcare systems in a Midwestern state using a Sequential, Multiple-Assignment Randomized Trial (SMART) design. One to three modifications were described for each IS. All seven core modules of the FRAME-IS were completed for each modification. The team concluded that the FRAME-IS is practical, comprehensive, and easy to use. It works well to document modifications across levels of influence (e.g., system-wide, clinic-wide) and promotes reflection to raise critical questions regarding implementation. Challenges include determining what defines a distinct “modification,” the blurring of roles (e.g., researcher/implementer/manager), and how to differentiate modifications to ISs from those to the study design or EBP. Recommendations to advance the FRAME-IS are provided.
Implications for D&I Research: Considerations and recommendations from this case study can be used to enhance the FRAME-IS, assist other scholars to utilize this framework, and improve the research community’s ability to systematically measure the dynamic evolution of implementation strategies in various settings. Future research is needed to study how documented modifications influence implementation outcomes.
Primary Funding Source
National Institutes of Health
S118 Leveraging economic tools in the context of the stages of implementation completion (SIC) to inform implementation and plan for sustainment
Piper Block, Mark Campbell, Lisa Saldana
Oregon Social Learning Center, Eugene, OR, USA
Correspondence: Piper Block (piperb@oslc.org)
Implementation Science 2023, 18(Suppl 3):S118
Background
Implementation science addresses the inherent tensions between research and practice by developing rigorous tools for processes that might otherwise happen haphazardly or inconsistently. For economic considerations, such tools are paramount for comprehensively accounting for financial and resource costs, funding streams, and issues of billing and financial sustainability. The Stages of Implementation Completion (SIC) is both a framework and measure for implementing evidence-based practices. Economic considerations are an important theme woven throughout the SIC. This presentation provides an overview of economic tools employed within the context of the SIC for the Families Actively Improving Relationships (FAIR) program, an evidence-based treatment for parents involved in the child welfare system due to substance use, to inform financial planning strategies and achievement of program sustainment.
Methods
During the Pre-Implementation Phase, the Costs of Implementing New Strategies (COINS), a cost-mapping tool that maps onto the SIC, assisted in collecting and presenting precise information on the staffing and financial resources necessary to move into the Implementation Phase. The FAIR Cost Calculator created site-specific reimbursement profiles to share with site decision-makers to inform the optimal staffing ratios and caseload sizes needed to achieve financial balance. The Cost Calculator also elucidated unbillable costs to clinics, which is critical for program sustainability. This information was presented to implementing decision-makers to better understand the necessary investment for successful implementation.
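The kind of break-even arithmetic such a calculator performs can be sketched as follows. The reimbursement rate, sessions per client, and clinician cost below are hypothetical placeholders, not FAIR or clinic-specific values; the unbillable per-client figure echoes the $284/client/month reported in the Findings.

```python
# Sketch of break-even arithmetic a site-specific cost calculator might perform.
# Rates, caseloads, and clinician cost are hypothetical placeholders; the
# unbillable figure mirrors the $284/client/month cited in the Findings.
def monthly_margin(caseload: int,
                   sessions_per_client: float,
                   reimbursement_per_session: float,
                   clinician_monthly_cost: float,
                   unbillable_monthly_cost: float) -> float:
    """Revenue minus cost for one clinician's caseload in one month."""
    revenue = caseload * sessions_per_client * reimbursement_per_session
    return revenue - clinician_monthly_cost - unbillable_monthly_cost

# Find the smallest caseload at which a clinician's revenue covers costs.
for caseload in range(1, 30):
    margin = monthly_margin(caseload,
                            sessions_per_client=8,
                            reimbursement_per_session=120.0,
                            clinician_monthly_cost=9000.0,
                            unbillable_monthly_cost=284.0 * caseload)
    if margin >= 0:
        print(f"Break-even caseload: {caseload} clients "
              f"(margin ${margin:,.0f}/month)")
        break
```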
Findings
In the constantly changing landscape of COVID-19, employing a series of explicit economic tools allowed the FAIR developer team to respond swiftly and flexibly to changing clinic needs. COVID-19 especially exacerbated two financial issues: clinician turnover (averaging $25,000 per new clinician in training and lost revenue) and unbillable supply runs to clients (averaging $284 per client per month). COVID-19 also presented new funding opportunities to address those issues, which the FAIR team was able to capitalize on given the consistent use of such economic tools.
Implications for D&I Research
By actively attending to economic considerations throughout the implementation process, we can better plan for sustainment. Economic tools like those previewed here can be useful for clinic-level financial sustainability, responding to emerging funding opportunities, increasing cost transparency for community partners, and eventual economic evaluations for research purposes.
Primary Funding Source
National Institutes of Health
S119 “What we have here, is a failure to [replicate]”: ways to solve a replication crisis in implementation science
Matthew Chinman1, Joie Acosta2, Patricia Ebener3, Amy Shearer4
1RAND Corporation, Pittsburgh, PA, USA; 2RAND Corporation, Arlington, VA, USA; 3RAND Corporation, Santa Monica, CA, USA; 4RAND, Santa Monica, CA, USA
Correspondence: Matthew Chinman (chinman@rand.org)
Implementation Science 2023, 18(Suppl 3):S119
Background: Replication, key to the open science movement, is needed to strengthen the validity of findings in implementation science (IS), yet it has been neglected in IS in favor of novel discovery, as in other fields. For example, reviews of implementation strategies vary so much across content domains, settings, and strategy use that it is challenging to draw conclusions about strategy replicability. The purpose of this presentation is to review what is known about replication of implementation trials, identify gaps, and offer recommendations to continue increasing the transparency, openness, and replicability of implementation research.
Methods: This presentation will review how study replication has (or has not) been approached in IS. We will discuss how different types of replications (e.g., direct, conceptual) can benefit the IS field. We will then describe our Implementation Replication Framework (IRF), developed by incorporating elements from Proctor (strategy description), Damschroder (i.e., CFIR), and Fixsen (implementation core components), to guide implementation researchers in their replication efforts. Using the IRF, we will present a case study of how to design a replication study and interpret the results using the implementation strategy called Getting To Outcomes© (GTO), which was used to facilitate two different youth-focused prevention evidence-based practices (EBPs) in two different studies.
Findings: This presentation will argue that replication should not be binary (replicated or not) but should fall on a continuum, leading to a progressive research program in which non-replicated findings can yield new theories sufficiently broad to include both replicated and non-replicated findings. Using the IRF, we will also share multiple elements to consider when designing and interpreting replication studies (e.g., Participants, Setting, Intervention, Outcome measures, and Analyses) and explanations (e.g., varying levels of EBP intensity) for why implementation findings were replicated in the GTO case study but youth outcomes were not.
Implications for D&I Research: The presentation will end with multiple recommendations implementation scientists could consider to improve the likelihood and quality of replication studies, including how to improve IS replication reporting and how IS can enable researchers and practitioners to work together in real-world contexts to encourage wide replication of implementation studies and advance both practice and theory in public health.
S120 Identifying barriers to implementing a care coordination intervention in the Veterans Health Administration: the brainwriting premortem focus group approach
Roman Ayele1, Marina McCreight3, Marcie Lee4, Gretchen Stage5, Lauren Mckown5, Brianne Morgan5, Deisy Hernandez Lujan3, Heidi Sjoberg6, Heather Gilmartin2, Catherine Battaglia2
1VA Eastern Colorado Healthcare System, Aurora, CO, USA; 2Denver/Seattle Center of Innovation, Rocky Mountain Regional VA Medical Center; 3Eastern Colorado Healthcare System, Aurora, CO, USA; 4Department of Veteran Affairs, Denver-Seattle Center of Innovation, Aurora, CO, USA; 5Denver-Seattle Center of Innovation, Department of Veterans Affairs, Eastern Colorado Health Care System, Denver, CO, Aurora, CO, USA; 6Veterans Health Administration, Denver-Seattle Center of Innovation for Veteran-Centered and Value Driven Care, Aurora, CO, USA
Correspondence: Roman Ayele (roman.ayele@va.gov)
Implementation Science 2023, 18(Suppl 3):S120
Background
Engaging clinical and administrative partners early in the implementation process is necessary to improve uptake of healthcare interventions. One of the most helpful elements of engaging partners is to rapidly identify barriers during the pre-implementation period. We sought to understand barriers to implementing a care coordination intervention aimed at improving transitions-of-care outcomes using brainwriting premortem sessions.
Methods
We conducted brainwriting premortem exercises, a novel focus group method, with participants from six Veterans Affairs Medical Centers (VAMCs) implementing an evidence-based care coordination intervention. The brainwriting premortem method is the silent sharing of written ideas about why an intervention failed, prior to implementation of the program. Participants are asked to imagine that the program was implemented and failed; they then write about the reasons the program failed. Using the IdeaBoardz online platform, participants anonymously and silently typed their responses. The group was given time at the end to reflect on and build off of each other’s responses. The written data were collected, exported for thematic analysis, and returned to stakeholders for further discussion.
Findings
Participants indicated the program could fail due to multiple perceived barriers: (1) Lack of buy-in from staff; (2) Lack of collaboration between stakeholders; (3) Inadequate time allocated for the Lead Coordinator; (4) Competing priorities that would make this initiative unsustainable; (5) Perceived challenges engaging Veterans; (6) Case management issues; (7) Poor rollout of the initiative in educating stakeholders and monitoring; (8) Overall staffing challenges such as turnover and protected time to implement the initiative; and (9) Inadequate support and resources. Participants were given opportunities to discuss strategies to address these barriers during program implementation.
Implications for D&I Research
The brainwriting premortem exercise allowed us to capture insights from stakeholders that could inform efficient implementation. This is a novel approach that can be applied in various settings to quickly understand barriers to program implementation.
Primary Funding Source
VA Health Services Research and Development
S121 Use of concept mapping across stakeholder groups to prioritize importance and feasibility of evidence-based strategies for HPV vaccination within safety-net settings
Jennifer Tsui1, Michelle Shin1, Kylie Sloan1, Tom Mackie2, Benjamin Crabtree3, Lawrence Palinkas1
1University of Southern California, Los Angeles, CA, USA; 2SUNY Downstate Health Science University, Brooklyn, NY, USA; 3Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
Correspondence: Jennifer Tsui (Tsuijenn@usc.edu)
Implementation Science 2023, 18(Suppl 3):S121
Background: HPV vaccination rates remain below target levels among adolescents in high-risk communities served by safety-net clinics. While multiple evidence-based strategies (EBS) for promoting HPV vaccination have emerged, identification and prioritization of EBS within safety-net settings to align with context and fit are understudied. Use of concept mapping to assess diverse stakeholders’ views and priorities of EBS can inform selection of strategies within the local context.
Methods: We conducted a concept mapping activity in Los Angeles and New Jersey with 20 participants, including: (1) internal clinical leaders/administrators, staff [MA, RN] and providers [MD, NP] and (2) external advocacy and policy representatives; both groups were previousl