NA-II was a partnership between the University of California San Francisco (UCSF; academic partner), Círculo de Vida Cancer Support and Resource Center (lead community partner and compañera [interventionist] supervisor), and three CBOs that implemented the program in rural California communities. The lead community partner is a bilingual-bicultural clinical psychologist, Co-Principal Investigator on the study, and Executive Director of a San Francisco-based CBO providing cancer support services to Latinos. Partnering CBOs serving as implementation sites were WomenCARE (Watsonville, CA), Kaweah Delta Health Care District (Visalia, CA), and Cancer Resource Center of the Desert (El Centro, CA). Sites are described in detail elsewhere [17].
Implementation of NA-II was guided by the Transcreation Framework [21] and CBPR principles (e.g., trust, shared decision-making, equal value placed on scientific and community knowledge) [22]. NA-II partners were engaged in all research phases (i.e., program adaptation/co-creation, implementation, evaluation, interpretation of results, and dissemination). Monthly partnership meetings included all study staff: the UCSF academic partner, lead community partner staff, and individuals from the three CBO implementation sites (administrators, recruiters, and compañeras). The partnership shared responsibility for and ownership of intervention and data collection activities, while emphasizing CBOs’ strengths and resources and building capacity [23].
Personnel and organizational structure

CBOs received funds for study implementation ($45,000 each) and controlled their own budgets. Three types of CBO personnel participated in the study: administrators, recruiters, and compañeras (interventionists). One administrator from each CBO was actively involved throughout the study, serving as the key decision maker regarding program design and implementation; input was secured throughout from all study staff. Each administrator identified two CBO staff members or volunteers to serve as recruiters. Recruiters promoted NA-II in the community, explained the study, and enrolled eligible women (consent, baseline survey, and randomization into the RCT). Each CBO also identified two individuals (Latina breast cancer survivors at least three years post-diagnosis with no recurrence) to deliver the intervention (compañeras). Recruiters and compañeras were trained by the academic and lead community partners (Co-Principal Investigators). The lead community partner supervised the compañeras in the field.
Nuevo Amanecer-II program

NA-II was a 10-week structured program delivered in Spanish by a trained compañera in the woman’s home or an alternate site chosen by the participant. Structured weekly modules provided training in cognitive-behavioral coping skills for managing stress and emotions, along with emotional support from the compañera (a culturally similar breast cancer survivor). The program is described in detail elsewhere [17]. Sessions included a deep breathing practice, review of the prior session to reinforce key concepts, review of the new week’s material, hands-on exercises, modeling and coaching by the compañera, role-playing, and a recap of the new material and weekly goals for practicing the newly introduced skills. Women received a program manual and a DVD containing stress management and breast cancer informational videos, with instructions and a YouTube link. During sessions, compañeras and participants used the manual and a tablet pre-loaded with the videos to practice skills and review information.
Study design and frameworks for evaluating implementation and equity

We used a concurrent convergent mixed-methods design with qualitative and quantitative data collected from multiple perspectives (CBO administrators, recruiters, compañeras, the compañera supervisor, and participants). Multiple data types were collected concurrently, analyzed separately, and then integrated and converged to conduct a comprehensive equity-informed implementation process evaluation [24]. The evaluation was guided by the Proctor implementation outcomes framework [19] and the Conceptual Model for Evaluating Equity [20]. Based on these frameworks, the outcomes specified a priori were implementation outcomes (feasibility, fidelity, acceptability, adoption, appropriateness, and sustainability) and equity outcomes (shared power and capacity building). The Proctor framework was selected for its distinct yet inter-related implementation outcomes [19]. The Conceptual Model for Evaluating Equity was employed to evaluate the CBPR partnership equity outcomes outlined by Ward and colleagues [20]. Equity outcomes were included because the literature highlights the importance of communication, inclusiveness, and community involvement to successful implementation processes [25] and successful CBPR [20].
Respondents

CBO administrators, recruiters, and compañeras were contacted for a semi-structured interview about program implementation. Compañeras completed a structured program tracking form for all intervention group participants (end-users) after each weekly session. The compañera supervisor completed a structured fidelity rating form for observed program sessions. All participants completing the program (including intervention and wait-list control group women who elected to receive the program after the final outcomes survey) were invited to complete a structured program evaluation survey. We randomly selected 10 participants who completed the program evaluation survey for a semi-structured interview about their experiences.
Data collection

Five types (sources) of program evaluation data were collected. Data were managed using a secure REDCap [26] data system.
RCT tracking form

Recruiters used a paper tracking form to document recruitment and retention for each potential participant (name obtained through outreach). The form included name, contact information, study ID, study eligibility questions, and a checklist of study enrollment requirements (study consent, baseline survey, and randomization), with places to record dates/times of phone calls/contacts, recruitment disposition (e.g., enrolled, not interested), and reasons why participants did not enroll (e.g., too busy). The academic team entered all RCT tracking forms into the REDCap data system. A similar tracking form was used for enrolled women; it included a checklist of follow-up study requirements: the 3-month survey, the program evaluation survey (if assigned to the intervention group), and the 6-month survey. The disposition (e.g., completed survey, lost to follow-up), reasons why participants did not complete the surveys (e.g., too busy, disconnected phone), and reasons that intervention group women discontinued the study at any point (e.g., experiencing serious treatment side effects, traveling) were also documented. The academic team used tracking forms to assess retention rates.
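The study does not specify how the paper forms were keyed into REDCap; as a purely illustrative sketch, a batch import of tracking-form records through REDCap’s API using the PyCap client could look like the following (the API URL, token, and field names are hypothetical):

```python
# Illustrative sketch only: batch-importing RCT tracking-form records
# into REDCap via the PyCap client. The API URL, token, and field names
# are hypothetical; actual entry may have used the REDCap web interface.
from redcap import Project

project = Project("https://redcap.example.edu/api/", "HYPOTHETICAL_API_TOKEN")

records = [
    {
        "study_id": "001",
        "disposition": "enrolled",      # e.g., enrolled, not interested
        "reason_not_enrolled": "",      # e.g., too busy
        "consent_complete": "1",
        "baseline_survey_complete": "1",
        "randomized": "1",
    },
]

# import_records() sends the records and, by default, returns a count
# of records created or updated.
response = project.import_records(records)
print(response)
```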
Fidelity rating form

The compañera supervisor made site visits to CBOs to directly observe intervention sessions (1–2 intervention sessions per compañera). Using structured rating scales (1 = not at all to 5 = all the time), the supervisor rated compliance with six program components (the extent to which the compañera followed the manual for that session, explained concepts in language the participant understood, checked that the participant understood the material, modeled the skills, spoke in a supportive/caring way, and provided praise/feedback when the participant practiced the skills) and the extent to which compañeras encouraged participants to practice the seven skills being taught.
Program tracking form

Compañeras completed structured program tracking forms after each session, recording program attendance and logistics, reasons why participants missed a session, and several aspects of program uptake: whether the participant completed the assigned goal(s) for that week (yes or no), whether the participant reported difficulty in doing the goal (yes or no) and the type of difficulty (open-ended), whether the participant was able to correctly answer a few questions about the session’s material (correct or incorrect), and whether the participant was able to demonstrate the skills covered in the prior session (yes or no).
Program evaluation survey

A few weeks after program completion, a bilingual-bicultural research associate administered a structured program evaluation survey by telephone to participants who had completed at least 7 of the 10 sessions. The interview lasted about 10 minutes, and women received $10.
Semi-structured interviews

After the RCT, all CBO administrators, recruiters, and compañeras, and a subsample of participants were invited to semi-structured telephone interviews to debrief them about their experiences implementing the program and participating in the study. Interviews with administrators were conducted in English (by informants’ choice) by a trained bilingual-bicultural interviewer and lasted 60 minutes. Interviews with recruiters and compañeras were conducted in Spanish (by informants’ choice) by a trained bilingual-bicultural interviewer and lasted 90 minutes. Administrators, recruiters, and compañeras each received $50. Participant semi-structured interviews were conducted in Spanish by telephone by a trained bilingual-bicultural interviewer; these interviews lasted 30 minutes, and each participant received $25.
Semi-structured interviews were audio-recorded and transcribed verbatim in English or Spanish by a professional transcription service. Transcripts were de-identified and analyzed in their original language to prevent nuances from getting ‘lost in translation’ [27].
Implementation outcomes

Implementation outcomes included feasibility, fidelity, acceptability, adoption, appropriateness, and sustainability [19]. Shared power and capacity building were the equity outcomes of interest because of the importance of communication, inclusiveness, and community involvement to successful implementation and CBPR processes [20, 25]. Table 1 provides an overview of the outcomes with definitions, operationalization (content), respondents, and data sources.
Table 1 Outcomes, operationalization, respondents, and methods of data collection

Feasibility is defined as the extent to which a program can be successfully used or carried out within a given setting [19]. We focused on the feasibility of recruitment and retention, and the dose of the program received. The overall RCT enrollment goal was 150 women across all three sites; thus, each organization was responsible for enrolling 50 women. Retention at 6 months was defined as completing the 6-month study survey. The retention goal was 90% at 6 months. Data on recruitment and retention were collected on the RCT tracking form. Program dose was measured by the number of program sessions attended, as recorded by compañeras on the program tracking form. Program adherence was defined as having completed at least 7 of 10 sessions.
Fidelity is the degree to which a program was implemented as described in the original protocol [19]. For NA-II, fidelity was operationalized separately for participants (adherence to the program) and compañeras (adherence to program delivery). For participants, fidelity was operationalized as adherence to and uptake of the program protocol, as noted on tracking forms by compañeras. For compañeras, fidelity was operationalized as (1) adherence to the program delivery protocol and (2) the quality of program delivery, based on supervisor ratings during directly observed sessions.
Acceptability reflects participants’ and compañeras’ perceptions of whether the program was agreeable, palatable, or satisfactory [19]. To assess acceptability, we used participants’ program evaluation surveys and semi-structured interviews with participants and compañeras. Using structured response choices, the program evaluation survey assessed participants’ program acceptability, specifically: format preferences (timing, number of sessions, and delivery format); quality of the program, videos, and compañera skills; perceived usefulness (how much the program helped them cope with breast cancer); ease of use; and suggestions for program improvement. Women rated the usefulness of each session’s content/topic (i.e., cancer information, survivorship care plan, communicating with doctors, communicating with family members, managing thoughts and mood, managing stress, healthy living, and setting goals). Ease of use was assessed by asking how easy it was to understand the manual, how convenient the program was, and how often they continued to practice the skills learned after completing the program. Participant semi-structured interview questions paralleled the program evaluation survey but were more in-depth. In the compañera semi-structured interviews, we asked about their perceptions of program acceptability, usefulness of program content and materials, appropriateness of format and delivery, how the program helped participants cope, whether participants understood or had problems understanding content or materials, barriers to successful completion of sessions and how these might be overcome, and suggestions for improvements.
Adoption is defined as CBO administrators’ intention, initial decision, or action to employ an evidence-based program as part of real-world implementation efforts in their settings [19]. We asked about administrators’ initial decisions to implement NA-II and its relevance to their site and community needs.
Appropriateness reflects perceptions of the fit or practicability of the program and research methods [19]. Appropriateness was assessed through semi-structured interviews with recruiters, compañeras, and CBO administrators. Recruiters were asked about the appropriateness of recruitment and enrollment methods (e.g., outreach, recruitment, consent, baseline interview, and randomization) and about strategies for reaching more women. Compañeras and administrators were asked about their involvement in tailoring the program for their clients and settings. Administrators were also asked about the hiring and supervision of compañeras and recruiters.
Sustainability is defined as the extent to which a newly implemented program is maintained or institutionalized within a CBO’s ongoing operations [19]. Sustainability was assessed through semi-structured interviews with administrators, asking about incentives/disincentives to implementing the program (e.g., resources, infrastructure), barriers and facilitators to program implementation at the individual, organizational, and community levels, and plans for program sustainability.
Equity outcomes

Shared power reflects the perceptions of individuals engaged in the partnership regarding leadership, dynamics, communication, decision-making, resources, governance mechanisms, efficiency, and partnership challenges [20, 28]. Related semi-structured interview questions included, “How could the communication between your organization and the research team be improved?”; “How did the research team take into account your organization’s unique needs?”; and “What was the leadership style of the research team?” Community recruiters and compañeras were asked parallel questions.
Capacity building reflects respondents’ perceptions of personal growth (e.g., expertise, knowledge gained, personal skills) or of how their organization was enhanced (e.g., services, reputation) as a result of the partnership [20, 28]. Related semi-structured interview questions for administrators and compañeras included, “How did the training you received through the Nuevo Amanecer program benefit your organization?” and “What were the changes in your community or organization as a result of this study?”
Data analyses

We first analyzed quantitative and qualitative data separately and then converged the qualitative and quantitative findings.
Feasibility data from the RCT and compañera program tracking forms were summarized in terms of frequencies and percentages for recruitment (participants invited, ineligible, and enrolled), retention (completed the 6-month survey), and program dose (completed at least 7 program sessions, among those assigned to the intervention group).
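For concreteness, these summaries reduce to simple counts and proportions over the tracking-form data. The following is a minimal sketch only, assuming the forms were exported to a flat file; the column names and the recruitment-rate denominator are hypothetical illustrations:

```python
# Minimal sketch of the feasibility summaries, assuming tracking-form
# data exported to a pandas DataFrame; all column names are hypothetical.
import pandas as pd

tracking = pd.read_csv("rct_tracking.csv")  # one row per potential participant

invited = len(tracking)
ineligible = (tracking["disposition"] == "ineligible").sum()
enrolled = (tracking["disposition"] == "enrolled").sum()
# One plausible definition: enrolled out of those invited and eligible
recruitment_rate = enrolled / (invited - ineligible)

enrolled_rows = tracking[tracking["disposition"] == "enrolled"]
# Assuming a 0/1 indicator for 6-month survey completion; goal was 90%
retention = enrolled_rows["completed_6mo_survey"].mean()

intervention = enrolled_rows[enrolled_rows["arm"] == "intervention"]
# Program adherence: completed at least 7 of the 10 sessions
adherence = (intervention["sessions_attended"] >= 7).mean()

print(f"Recruitment rate: {recruitment_rate:.1%}")
print(f"6-month retention: {retention:.1%}")
print(f"Program adherence (>=7 sessions): {adherence:.1%}")
```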
The compañera supervisor’s fidelity rating forms were summarized across compañeras using means and standard deviations. For the seven skills, we report the frequency with which compañeras encouraged participants to practice each skill.
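A similar sketch for the fidelity summaries, again assuming a flat export with one row per observed session; the column names are hypothetical stand-ins for the six rated program components:

```python
# Minimal sketch of summarizing fidelity ratings (1 = not at all to
# 5 = all the time) across compañeras; column names are hypothetical.
import pandas as pd

ratings = pd.read_csv("fidelity_ratings.csv")  # one row per observed session

components = [
    "followed_manual", "explained_clearly", "checked_understanding",
    "modeled_skills", "supportive_tone", "gave_praise_feedback",
]

# Mean and standard deviation for each of the six program components
summary = ratings[components].agg(["mean", "std"]).round(2)
print(summary)
```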
Acceptability outcomes from the structured program evaluation survey were summarized in terms of frequencies and percentages. Semi-structured interviews with participants and compañeras were analyzed with a deductive thematic approach [29] using Dedoose software [30]. Because the participant and compañera semi-structured interview questions paralleled the program evaluation survey (as described above), the survey was used to create a structured program codebook, which was replicated in Dedoose to organize and analyze the data. Analysis started with one author (JS-O) coding interview transcripts to ascertain themes and constructs aligned with the structured program codebook. Two coders then independently coded each interview using the structured program codebook and reviewed the themes to determine coding consensus. Themes were then summarized by respondent type (participants vs. compañeras).
For the adoption, appropriateness, sustainability, shared power, and capacity building outcomes, the semi-structured interviews with CBO administrators, recruiters, and compañeras were analyzed using similar methods [29] in Dedoose [30], as described for the acceptability-related semi-structured interview data. Semi-structured interview responses were triangulated iteratively across respondent types [31]. Analysis started with one author (JS-O) coding interview transcripts to identify themes and constructs aligned with Proctor’s [19] implementation outcome definitions and the equity outcomes described in the Conceptual Model for Evaluating Equity [20], using Dedoose to create an initial structured outcome codebook organized by outcome. Two coders then independently used the structured outcome codebook to code two transcripts over two rounds of iterative coding; any codebook modifications were discussed with JS-O. The remaining transcripts were coded using the modified outcome codebook. Codes were then reviewed by outcome by all coders, and any discrepancies were discussed until consensus was reached. Once consensus was reached, the most relevant quotes were highlighted and extracted from the transcripts and, if in Spanish, translated into English for reporting purposes.
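To make the structure of the outcome codebook concrete, a schematic sketch follows. The top-level outcomes come from the frameworks cited above, but the child codes are hypothetical illustrations drawn from the interview topics described earlier, not the study’s actual codes:

```python
# Schematic sketch of a structured outcome codebook organized by outcome,
# as might be replicated in Dedoose. Top-level keys follow the Proctor
# implementation outcomes [19] and the equity outcomes [20]; the child
# codes are hypothetical examples, not the study's actual codebook.
outcome_codebook = {
    "adoption": ["initial_decision_to_implement", "relevance_to_community_needs"],
    "appropriateness": ["fit_of_recruitment_methods", "tailoring_to_setting"],
    "sustainability": ["incentives_disincentives", "barriers_facilitators",
                       "sustainability_plans"],
    "shared_power": ["communication", "decision_making", "leadership_style"],
    "capacity_building": ["personal_growth", "organizational_enhancement"],
}

# Each transcript excerpt would be tagged with one or more child codes,
# then reviewed by outcome across coders until consensus was reached.
```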