Evaluation of Two Educational Modalities for the Clinical Practice Guideline for Opioid Therapy for Chronic Pain for US Military Physicians

Although military personnel are less likely to use illicit drugs than their civilian counterparts, research suggests that opioid misuse is a public health concern among members of the Armed Forces.1,2 In 2017, the Department of Veterans Affairs (VA) and the Department of Defense (DoD) released an updated Clinical Practice Guideline (CPG) for the Management of Opioid Therapy (OT) for Chronic Pain.3 The CPG provides targeted guidance to promote safe opioid use while mitigating increasing levels of prescription opioid misuse, and efforts continue to ensure that military medical providers are well versed in and comfortable with the updated guideline. Although provider guidelines for this topic and others are a critical part of the provision of health care, they are not consistently delivered to providers. The US Army Medical Command's 2016 operation order on CPG implementation4 directly acknowledges the lack of standardization in CPG implementation and the resulting differences in the health care patients receive. Different modalities for disseminating the CPG's content therefore need to be assessed to determine the most effective means of translating the guideline into practice.

A review of evaluations of provider education for OT indicated that most educational efforts have been delivered through traditional live group training, although many programs in recent years have also provided an alternative online version of the training.5 The review also emphasized the need to incorporate evidence-based practices, such as interactivity and multiple methods, into provider education. Other studies have shown that web-based education (eg, online training [OLT] and email-delivered education) for providers on various health care topics has positively affected providers' knowledge,6 as well as their attitudes and practices.7 With respect to app-based interventions specifically, a recent scoping review of the literature on the wider use of app technology for the management of opioid use disorder found no studies that evaluated the efficacy of mobile apps for this purpose.8 The authors noted an increase in available opioid-related educational mobile apps for clinicians, including an app containing the Centers for Disease Control and Prevention (CDC) Opioid Guideline, but most were related to opioid conversion, and all lacked evaluation for efficacy. A subsequent study that evaluated the effect of app-based learning modules on different types of clinician prescribing behaviors found short-term improvement in intravenous fluid prescribing but not in opioid prescribing.9

Both web-based and app-based educational interventions allow medical professionals to access a variety of resources to improve competency and practice. These forms of digital health education for CPG dissemination also appear promising for supporting clinical decision-making.10 However, few evaluations of digital health education for CPGs have been conducted with military providers. One study that evaluated the effect of a web-based intervention to disseminate the CPG for post-traumatic stress disorder (PTSD) among clinicians, including VA and DoD providers, found that web-based resources may support comprehensive efforts to disseminate CPGs.11 Another study that tested an online military-focused PTSD training for primary care practitioners in VA hospitals found that the training improved providers' PTSD knowledge.12 There remains, however, a gap in understanding the degree to which these types of interventions can aid military medical providers' uptake of clinical practice guidelines for OT for chronic pain.

This article presents two digital educational interventions—an OLT offered through an online learning management system and an app downloaded to personal devices—that were examined independently and together as a means to increase military provider awareness of the updated CPG. The educational interventions were designed to increase familiarity with the CPG, which includes specific recommendations for the overall treatment of chronic and acute pain, and ensure understanding of specific recommendations for the initiation or continuation of OT when required. If effective, these trainings would be associated with increased provider knowledge, comfort, and CPG-consistent behavior.

METHODS

Study Population

This study sampled active duty military physicians with opioid prescribing privileges from DoD health care facilities who were required to be familiar with the CPG for OT for chronic pain as part of their job. The educational interventions were designed for medical providers within the Military Health System with active duty providers in mind because they are diverse in specialty, level of training, and health care setting. For example, these physicians rotate through a variety of medical billets, are often embedded within operational units, and serve relatively young and healthy military personnel who may engage in occupational activities resulting in injury. They may also be the primary provider in remote settings (e.g., forward deployed and shipboard), where their understanding of a range of CPGs may be crucial to ensuring optimal care for the service members under their care. Thus, these providers must be prepared to practice across a broad scope of medicine and may be required to provide care in austere environments where access to specialty care beyond a primary care physician is limited.

Study Design

This study used a mixed model factorial design with two between-subject factors [2 (web-based training: yes/no) × 2 (app training: yes/no)] and one within-subject factor (time: pre-test/post-test). After completing pre-test measures, participants were randomly assigned to one of three intervention groups or a control condition, yielding four groups: (1) OLT only, (2) smartphone app only, (3) OLT and smartphone app combined, and (4) neither OLT nor smartphone app. Approximately eight weeks after completing the pre-test, participants were invited to complete a post-test assessment of knowledge, behavior, and comfort, and those assigned to an intervention also reported their satisfaction with it. Institutional Review Board approval of the study protocol was obtained in compliance with all applicable Federal regulations governing the protection of human subjects. Informed consent was obtained from all participants before their participation and was electronically documented.
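As an illustration of the factorial structure only (not the study's actual randomization procedure), the sketch below assigns participant identifiers to the four conditions; the participant labels, equal-probability allocation, and random seed are assumptions for demonstration.

```python
import random

# Hypothetical sketch of the 2 (OLT: yes/no) x 2 (app: yes/no) between-subject structure.
# Each condition crosses the two factors; the fourth cell is the control group.
CONDITIONS = [
    {"olt": True,  "app": False},   # OLT only
    {"olt": False, "app": True},    # app only
    {"olt": True,  "app": True},    # OLT + app
    {"olt": False, "app": False},   # neither (control)
]

def assign_conditions(participant_ids, seed=1):
    """Randomly assign each participant to one of the four study conditions."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

# Example usage with placeholder identifiers for the 73 consented providers.
assignments = assign_conditions([f"P{i:02d}" for i in range(1, 74)])
```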

Interventions

As part of their duties, military medical providers are required to be familiar with all DoD and VA CPGs, which are provided on a public website at https://www.healthquality.va.gov/. The app and the OLT investigated in this study represent expansions to existing dissemination products for this population and were chosen to offer increased flexibility and access to the training information.

Smartphone App

The Pain and Opioid Safety app used in this study was developed by pain experts from a pain program at a large military hospital and is published by the Defense Health Agency. The app provides a comprehensive list of materials for providers who prescribe pain medication and a point of reference for opioid training. At the time of the study, the app contained three major sections: CPGs, Provider Resources, and Patient Resources; an additional section containing a pain assessment tool was later added. The CPGs section provided a centralized list of links to relevant resources including the VA/DoD CPG for OT for Chronic Pain (2017), the VA/DoD Management of Substance Use Disorder CPG (2015), and additional relevant VA/DoD and CDC guidelines. The Provider Resources section included information on available training in several topics (i.e., buprenorphine, acupuncture, and pain), information about journal club and grand rounds, and links to information and resources external to the app (e.g., Do No Harm training, Sole Provider Program training, and relevant webinars). The Patient Resources section included patient resources from the CDC and VA/DoD, a naloxone/opioid safety brochure, and links to several relevant substance abuse organizations. For the purpose of this study, providers were instructed to review the main content of the app, which was estimated to take approximately 90 minutes.

Online Training

The OLT was developed by the investigators and used an interactive narrated slide deck to emphasize the practical application of the newly updated CPG. The slide deck focused on the algorithm that serves as a decision-making guide encompassing the updated guideline and treatment recommendations. Participants were required to complete five interactive clinical scenarios by selecting answers to key decision points that highlighted various components of the algorithm modules: determining the appropriateness of OT (including how to apply the biopsychosocial assessment as it relates to pain management), treating with OT, reasons for tapering or discontinuing OT, and factors and considerations for patients currently on OT. The scenarios were designed to give providers practice in applying the algorithm to make decisions that follow the recommended steps in the clinical process.

The training also included quick links to the full CPG and other key documents, as well as supplementary materials. An embedded multiple-choice quiz followed completion of all required sections of the training, and a score of 80% or higher was needed to receive a training completion certificate. The OLT was accredited by the Defense Health Agency J-7 Continuing Education Program Office, allowing study participants to receive 1 hour of continuing medical education (CME) credit. Participants assigned to receive this training were asked to complete the training and to review supplementary materials similar to those provided in the app. The estimated total time to complete the training and review the additional materials was approximately 90 minutes. Participants randomized to other conditions were given the option to access the OLT for CME credit after completing their participation in the study.

Control Condition

For the purposes of this study, the control group consisted of participants who continued with standard practices at their facility. According to the Army's order on CPG implementation,4 all regional health commands are tasked with ensuring that training on the CPG is integrated into unit training and training for new clinical personnel. The minimum requirement is familiarity with the website where the CPG is posted. No parallel implementation order was found for the Navy or Air Force.

Measures

Self-reported measures included provider knowledge, CPG practices, and comfort with performing the recommended practices (see Table, Supplemental Digital Content 1, https://links.lww.com/JCEHP/A196). Sociodemographic, military, and medical training characteristics were also collected. Participants receiving an intervention reported their satisfaction and use of the assigned intervention(s).

Knowledge

Provider knowledge of the VA/DoD CPG for OT for Chronic Pain was assessed using a 25-item multiple choice and true/false test developed by the investigators. Items were developed directly from the CPG content and emphasized changes from previous common practices. Sample items included, “If prescribing take-home opioids for acute pain, reassessment should occur after no more than __ days” (multiple choice) and “When risks outweigh benefits, providers should taper to a reduced dose or discontinuation of long-term opioid therapy” (true/false). A knowledge score was calculated as the percentage of correctly answered items.

Practices

Provider practices were measured based on the self-reported frequency of 16 key actions outlined in the guideline (e.g., making referrals, conducting screening, following safe prescribing practices, providing drug information, and advising/recommending specific limits). Participants indicated the frequency of their actions on a 5-point response scale with 1 = never, 2 = rarely, 3 = sometimes, 4 = often, and 5 = always. Item responses were averaged, with higher scores indicating greater frequency of performing the recommended practices. The Cronbach's alpha reliability coefficient (α) for the 16-item scale at pre-test was 0.84.
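For readers who wish to reproduce the reported reliability statistic, Cronbach's alpha for a k-item scale is alpha = k/(k - 1) × (1 - sum of item variances / variance of total scores). The sketch below uses simulated responses in place of the study data to show how the 16-item practices score and its alpha could be computed; the same approach applies to the comfort and satisfaction scales described later.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item responses.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total scores))
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated data standing in for the study's responses: 50 providers x 16 practice items (1-5 scale).
rng = np.random.default_rng(0)
practice_items = rng.integers(1, 6, size=(50, 16)).astype(float)

practice_score = practice_items.mean(axis=1)   # scale score = mean of item responses per provider
alpha = cronbach_alpha(practice_items)         # the reported value for the real data was 0.84
```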

Comfort

Providers' comfort with performing the recommended practices was assessed using an 18-item, investigator-developed measure with a 7-point response scale ranging from 1 = very uncomfortable to 7 = very comfortable. Sample items of recommended practices included, “Determining what dose is appropriate if an opioid prescription is warranted,” “Discussing potential risks and benefits of opioid therapy with patients,” and “Beginning a tapering regimen for opioid therapy.” Item responses were averaged to compute a scale score, with a higher mean score indicating greater comfort with performing the recommended practices. The Cronbach's α at pre-test was 0.92.

Sociodemographic, Military, and Medical Training Characteristics

Participants were asked about their sex, race, rank, branch of service, and number of years served on active duty. They were also asked about their medical degree, medical specialty, licensure, years of postgraduate training, and whether they had received supplementary training or education in pain management before participation in this study.

Satisfaction

Provider satisfaction with the interventions was assessed with nine investigator-developed items gauging overall satisfaction with the intervention, perceived usefulness, whether providers learned new information, perceived helpfulness, relevance to readiness, credibility, whether the participant would recommend it to other providers, whether it was motivating to learn more, and whether it changed the provider's approach to prescribing opioids. Participants answered using a 5-point response scale with 1 = strongly disagree and 5 = strongly agree. Means of individual item responses were examined, and a mean overall scale score using all nine of the satisfaction items was computed for each intervention as well. The Cronbach's α for satisfaction was 0.92 and 0.95 in the app and OLT conditions, respectively. In addition, open-ended items queried participants on aspects they liked best and least about the interventions.

Usage

At post-test, participants were asked how much total time they spent using each intervention and the frequency of use (total number of times used) during the study period.

Statistical Analysis

Initial analyses assessed random assignment to condition by examining pre-test differences between the control and intervention groups in demographic variables and primary outcomes using chi-square tests of association and independent group t tests. Similarly, differences in participant characteristics between study completers and noncompleters were examined using independent t tests and chi-square tests of association. The primary outcomes of CPG knowledge, practices, and comfort were analyzed using a 2 (OLT: yes/no) × 2 (app: yes/no) × 2 (time of measurement: pre-test, post-test) mixed model analysis of variance with time of measurement as a within-subject factor; the 3-way interaction of app × OLT × time was of primary interest. In addition, descriptive analyses and qualitative analyses of participant feedback about the OLT and the app were performed.
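As a minimal sketch of the focal test, assuming pandas and statsmodels and using made-up scores and column names: with a two-level within-subject factor, the app × OLT × time interaction is equivalent to testing the app × OLT interaction on each participant's pre-to-post change score in a between-subjects ANOVA.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one row per participant with the two between-subject factors
# (1 = received, 0 = not received) and pre-/post-test knowledge scores.
df = pd.DataFrame({
    "olt":  [1, 1, 0, 0, 1, 0, 1, 0],
    "app":  [1, 0, 1, 0, 0, 1, 1, 0],
    "pre":  [0.72, 0.75, 0.70, 0.76, 0.74, 0.71, 0.73, 0.77],
    "post": [0.80, 0.84, 0.79, 0.75, 0.83, 0.80, 0.81, 0.76],
})

# With only two time points, the 3-way app x OLT x time interaction reduces to
# the app x OLT interaction on the change scores (post minus pre).
df["change"] = df["post"] - df["pre"]
model = smf.ols("change ~ C(olt) * C(app)", data=df).fit()
print(anova_lm(model, typ=2))   # the C(olt):C(app) row corresponds to the 3-way interaction
```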

RESULTS

A convenience sample of 73 active duty medical providers agreed to participate in the research; 56 completed both the pre-test and post-test (75.6% post-test response rate). Attrition at the post-test was not significantly associated with race, rank, sex, years on active duty, previous pain management training, or intervention vs. control group assignment. By study condition, the final sample included 13 participants who received the app only, 13 who received the OLT only, 14 who received both the app and the OLT, and 16 who received neither. The providers were predominantly male, white, and serving in the Navy (Table 1). The sample represented a diverse set of specialties, and more than half reported receiving supplementary training in pain management before participation in this study. No significant differences were found at pre-test among the four study condition groups in (1) the primary outcome measures of knowledge, practices, or comfort; (2) the demographic variables of race, rank, or years served on active duty; or (3) previous pain management training. There was a significant difference in sex (P = .006), with the app plus OLT group having a higher percentage of women than the app only group (P = .031) and the control group (P = .001). In addition, the OLT only group had a higher percentage of women than the control group (P = .033).

TABLE 1. Characteristics of Providers Who Completed the Pre-test and Post-test Assessments (N = 56)

Sociodemographic characteristics
  Sex (male): n = 38 (67.9%)
  Race, White: n = 49 (87.5%)
  Rank
    Junior officers (O-1 to O-4): n = 35 (62.5%)
    Senior officers (O-5 to O-6): n = 21 (37.5%)
  Service branch
    Air Force: n = 12 (21.4%)
    Army: n = 8 (14.3%)
    Navy: n = 36 (64.3%)
  Years on active duty (n = 56): M = 11.61 (SD = 7.05)

Medical training
  Medical degree
    MD: n = 44 (78.6%)
    DO: n = 11 (19.6%)
  Currently licensed (yes): n = 52 (92.9%)
  Specialty*
    Anesthesiology: n = 10 (10.4%)
    Obstetrics/gynecology: n = 9 (9.4%)
    Emergency medicine: n = 7 (7.3%)
    Surgery, general: n = 7 (7.3%)
    Family medicine/preventive medicine: n = 5 (5.2%)
    Orthopedic surgery: n = 5 (5.2%)
    Other: n = 53 (55.2%)
  Years of postgraduate training (n = 56): M = 5.02 (SD = 2.31)
  Received supplemental training in pain management before this study (yes): n = 29 (51.8%)

*Participants could select all that applied; 96 total responses.


Knowledge

A significant main effect of time emerged that was qualified by a 3-way interaction of OLT, app, and time of measurement, F(1, 44) = 15.36, P < .001, ηp² = .26, and F(1, 44) = 5.72, P = .02, ηp² = .12 (Table 2), respectively. There were no significant differences in provider knowledge at pre-test (all ps > .22). At post-test, there was a marginally significant simple main effect of OLT, F(1, 47) = 2.99, P = .09, ηp² = .06, and a marginally significant OLT × app interaction, F(1, 47) = 3.06, P = .09, ηp² = .06. Knowledge scores increased by roughly 9 percentage points in the OLT only group, F(1, 11) = 7.64, P = .02, ηp² = .34, and by 9 percentage points in the app only group, F(1, 10) = 12.30, P = .006, ηp² = .55. Although knowledge increased by 6 percentage points in the combined OLT and app group, this difference was not statistically significant, F(1, 12) = 2.65, P = .13, ηp² = .18. There was no significant change in the control group, F(1, 11) = 0.27, P = .61, ηp² = .05. These results show that both the app and the OLT were associated with significant increases in provider knowledge, whereas the control group showed no significant increase in knowledge over time (see Figure 1). Those in the combined OLT and app condition did not demonstrate greater increases in knowledge compared with either intervention alone (ps > .05).

TABLE 2. Main Outcomes by Time and Study Condition
Values are M (SD) at pre-test and post-test; the App × OLT × Time effect is reported as error df (dfE), F, and P.

Knowledge* (App × OLT × Time: dfE = 44, F = 5.72, P = .021§)
  App only: pre-test 0.71 (0.09); post-test 0.80 (0.12)
  OLT only: pre-test 0.76 (0.10); post-test 0.85 (0.06)
  App + OLT: pre-test 0.75 (0.06); post-test 0.81 (0.12)
  Control: pre-test 0.76 (0.07); post-test 0.75 (0.08)

Practices† (App × OLT × Time: dfE = 46, F = 0.02, P = .898)
  App only: pre-test 4.32 (0.46); post-test 4.41 (0.38)
  OLT only: pre-test 4.18 (0.52); post-test 4.41 (0.51)
  App + OLT: pre-test 3.87 (0.75); post-test 4.17 (0.73)
  Control: pre-test 3.96 (0.58); post-test 4.02 (0.72)

Comfort‡ (App × OLT × Time: dfE = 45, F = 0.08, P = .783)
  App only: pre-test 5.78 (1.03); post-test 6.03 (0.76)
  OLT only: pre-test 5.41 (1.31); post-test 5.71 (0.96)
  App + OLT: pre-test 5.41 (0.96); post-test 5.66 (0.77)
  Control: pre-test 5.49 (0.75); post-test 5.89 (0.80)

Across outcomes, ns ranged from 11 to 12 for the app only and OLT only groups, and 12 to 13 for the control group. n = 13 for the app + OLT group.

*Knowledge score is the percentage of correct knowledge items of 25 items represented by a decimal.

†Practices scale is the average frequency of performing 16 recommended provider practices; 1 = never and 5 = always.

‡Comfort scale is the average level of comfort with performing 18 recommended practices; 1 = very uncomfortable and 7 = very comfortable.

§P < .05.


FIGURE 1. Provider knowledge of the CPG for opioid therapy for chronic pain by intervention (app, OLT) and time of measurement (pre-test, post-test).

Practices and Comfort

Although both provider practices and comfort increased significantly over time, F(1, 46) = 3.99, P = .05, ηp2 = .08, and F(1, 45) = 8.36, P < .01, ηp2 = .16, respectively, neither the OLT, the app, nor their combination significantly increased either practices or comfort over time (all ps > .05; Table 2).

Satisfaction

Credibility was the highest rated factor for both the app and the OLT, followed by learning new information and helpfulness for reviewing the CPG (Table 3). Most providers “agreed” or “strongly agreed” that they would recommend the interventions to other military health care providers. Overall rating scores were compared between those who used the app only and those who used the OLT only; the OLT overall rating was high (M = 4.39, SD = 0.53), whereas the app overall rating was somewhat lower (M = 3.67, SD = 0.79; P = .05).

TABLE 3. Provider Satisfaction With the App and OLT Interventions
Values are M (SD) for the app and the OLT, respectively.

Satisfaction item*
  The information provided in the [intervention] is credible: App 4.75 (0.44); OLT 4.70 (0.47)
  I learned new information from the [intervention]: App 4.40 (0.60); OLT 4.63 (0.49)
  The [intervention] helps providers review the clinical practice guidelines for prescribing opioids: App 4.30 (0.73); OLT 4.60 (0.50)
  I am satisfied with the [intervention]: App 3.95 (0.94); OLT 4.55 (0.51)
  The [intervention] is useful for helping providers become aware of the clinical practice guidelines for prescribing opioids: App 4.15 (0.93); OLT 4.50 (0.61)
  I would recommend the [intervention] to other military health care providers: App 4.05 (1.09); OLT 4.55 (0.75)
  The [intervention] is relevant to maintaining readiness: App 3.90 (1.02); OLT 4.55 (0.50)
  The [intervention] motivated me to want to learn more about prescribing opioids: App 3.70 (1.21); OLT 4.30 (0.80)
  The [intervention] motivated me to change the way I approach prescribing opioids to patients: App 3.65 (1.26); OLT 4.30 (0.80)
Overall rating score†: App 3.67 (0.79); OLT 4.39 (0.53)

n = 20 for the app ratings (includes app only group and app plus OLT group); n = 20 for OLT ratings (includes OLT only group and app plus OLT group). [Intervention] refers to the app for those participants who participated and rated the app and to the online training for those who participated and rated the online training.

*Rating scale was 1 = strongly disagree and 5 = strongly agree.

†Overall rating score for each intervention was computed as the mean scale score using all nine of the satisfaction items among those who used the app only (n = 9) and those who used the OLT only (n = 8).

Provider feedback on the interventions indicated that both modalities were easy to use and accessible. The most frequent comment on what providers liked best about the app was that the CPG content was centralized. The aspects of the app that providers liked least were the app format itself and the amount and types of information; participants noted that they would have preferred more concise information as well as additional content relevant to different types of providers. The most frequent comment on what providers liked best about the OLT was that it included scenarios. The aspects of the OLT that participants liked least were the navigation, difficulty confirming which sections had been completed, and the pace of the slide advancement.

Usage

Sixty-five percent of providers assigned to the app used it for 90 minutes or less during the study period, and the most common frequency of use was 3 to 5 times (45% of providers). Among providers assigned to receive the OLT, 80% used it for 90 minutes or less (the training itself comprised 1 hour of instruction), and the majority (80%) used it 1 to 2 times.

DISCUSSION

This study examined whether the use of two educational modalities for disseminating the updated VA/DoD CPG for OT for Chronic Pain would have differential impact on provider knowledge, practices, and comfort related to the topic area. Results showed that using either the OLT or the app individually was associated with an increase in provider knowledge of the CPG for OT for chronic pain, whereas the control group showed no significant increase in knowledge over time. In addition, participants reported being satisfied with both interventions and indicated that they would recommend them to other military health care providers. These results suggest that these tools constitute a valuable addition to the available resources to optimize CPG implementation among military practitioners and may hold similar promise for civilian providers.

The results also indicate that although both individual modalities were associated with significant improvement in knowledge, there may be no added value in requiring both methods. The combination of the app and the OLT did not appear to increase knowledge above either the app or the OLT alone, which may be due in part to the fact that both educational interventions covered similar content. The absence of added benefit from using both modalities concurrently is consistent with earlier research on multicomponent interventions.13 In the current study, it is possible that providers assigned to the OLT and app condition were overburdened by the total time required for both trainings in addition to their normal duties, which may have resulted in some participants not fully engaging with either intervention or only cursorily reviewing them.

The study did not demonstrate a significant impact on self-reported provider practices or comfort with the CPG, although some previous research on digital provider education has shown effects on attitudes and practices.7–9 The lack of statistically significant improvements in provider practices and comfort was likely affected by the study limitations of a small sample size and low statistical power. In addition, the interventions may not have been robust enough, or the time spent using the learning tools may not have been of sufficient duration, to significantly change behavior or comfort level among these relatively experienced providers, many of whom reported having received previous training specifically in pain management.

Although the current findings suggest that these two educational interventions are associated with increases in provider knowledge, there are several limitations worthy of note. First, as previously noted, the study sample was relatively small, resulting in low statistical power, and findings may not generalize to other groups of medical providers. However, the value of demonstrating the effectiveness of these relatively brief educational interventions on knowledge among this small but diverse group of providers tasked with supporting the health of military service members should not be underestimated. They are a unique and important sample with an array of operational and occupational requirements, which makes it more challenging for them than for their civilian counterparts to find the time to take advantage of other methods of staying up to date on CPGs. Second, although the study relied on an objective test of knowledge of the CPG, both provider behavior and comfort level were based on self-report, which is known to be subject to biases, including social desirability.14,15

Future research could address the study limitations by evaluating the effect of the educational interventions in a larger sample and investigating which evidence-based components of the app and the OLT have the strongest effects, as suggested in a recent scoping review.5 Moreover, creating a hybrid tool that integrates the preferred components of each modality into a single intervention remains a promising target of future research, for example, combining the app's potential benefit to knowledge retention through continued access to information with the OLT's use of scenarios to reinforce the application of concepts. More broadly, future research should investigate ways to tailor electronic health education on clinical guidelines to the educational needs of the provider (e.g., based on assessment of individual knowledge gaps) in a manner that traditional modalities may not be able to achieve as easily.

Furthermore, future research should expand the investigation of these modalities into examining their impact on civilian sector providers. Although it is unlikely that their learning styles or other individual factors will be vastly different, requirements to adhere to best practices may have wider variance among civilian practitioners, and there may be additional variables (e.g., variance in hospital systems) that moderate the impact of training. The suggested research on tailoring digital education to the educational needs of the provider would also be relevant to civilian practitioners, allowing for exploration into whether optimizing such tailoring has different requirements for military and civilian sector practitioners. That said, this study provides an important contribution to the knowledge base of the effect of digital provider training in clinical practice guidelines for opioid therapy for chronic pain.

Lessons for Practice

■ Results indicated that compared with those who received neither educational intervention, military medical providers who received the OLT only or the app only showed significant increases in CPG knowledge over time, suggesting that these tools constitute a valuable addition to the available resources to optimize CPG implementation.

■ Both methods of training were well received by the participants, which bodes well for the integration of similar modalities into practice when releasing updated clinical practice guidelines.

REFERENCES

1. Dembek ZF, Chekol T, Wu A. The opioid epidemic: challenge to military medicine and national security. Mil Med. 2020;185:e662–e667.
2. Tam CC, Zeng C, Li X. Prescription opioid misuse and its correlates among veterans and military in the United States: a systematic literature review. Drug Alcohol Depend. 2020;216:108311.
3. Rosenberg JM, Bilka BM, Wilson SM, et al. Opioid therapy for chronic pain: overview of the 2017 US Department of Veterans Affairs and US Department of Defense clinical practice guideline. Pain Med. 2018;19:928–941.
4. US Defense Health Headquarters, Army Medical Command. Operation Order 16-40: Clinical Practice Guideline Implementation (261100R). Falls Church, VA: US Defense Health Headquarters; 2016.
5. Sud A, Molska GR, Salamanca-Buentello F. Evaluations of continuing health provider education focused on opioid prescribing: a scoping review. Acad Med. 2022;97:286–299.
6. Kerfoot BP, Kearney MC, Connelly D, et al. Interactive spaced education to assess and improve knowledge of clinical practice guidelines: a randomized controlled trial. Ann Surg. 2009;249:744–749.
7. Trudeau KJ, Hildebrand C, Garg P, et al. A randomized controlled trial of the effects of online pain management education on primary care providers. Pain Med. 2017;18:680–692.
8. Nuamah J, Mehta R, Sasangohar F. Technologies for opioid use disorder management: mobile app search and scoping review. JMIR Mhealth Uhealth. 2020;8:e15752.
9. McEvoy MD, Dear ML, Buie R, et al. Effect of smartphone app-based education on clinician prescribing habits in a learning health care system: a randomized cluster crossover trial. JAMA Netw Open. 2022;5:e2223099.
10. Tudor Car L, Soong A, Kyaw BM, et al. Health professions digital education on clinical practice guidelines: a systematic review by Digital Health Education collaboration. BMC Med. 2019;17:139.
11. Ruzek JI, Wilk J, Simon E, et al. Randomized controlled trial of a web-based intervention to disseminate clinical practice guidelines for posttraumatic stress disorder: the PTSD clinicians exchange. J Trauma Stress. 2020;33:190–196.
12. Samuelson KW, Koenig CJ, McCamish N, et al. Web-based PTSD training for primary care providers: a pilot study. Psychol Serv. 2014;11:153–161.
13. Grimshaw J, Eccles M, Tetroe J. Implementing clinical guidelines: current evidence and future implications. J Contin Educ Health Prof. 2004;24(suppl 1):S31–S37.
14. King MF, Bruner GC. Social desirability bias: a neglected aspect of validity testing. Psychol Mark. 2000;17:79–103.
15. Neeley SM, Cronley ML. When research participants don't tell it like it is: pinpointing the effects of social desirability bias using self vs. indirect-questioning. In: Kahn BE, Luce MF, eds. NA: Advances in Consumer Research. Vol 31. Valdosta, GA: Association for Consumer Research; 2004:432–433.
