Development and Pilot Testing of Telesimulation for Pediatric Feeding: A Feasibility Study

This study employed an iterative action learning process to develop and trial a telesimulation learning experience for pediatric feeding and to evaluate its feasibility through user feedback. This process is presented here as four phases: (1) simulation design, (2) telesimulation adaptations, (3) user testing, feedback, and modifications, and (4) user testing of the modified simulation, feedback, and final modifications. This study received ethics approval from [name withheld] (HREC/21/QCHQ/80217) and [name withheld] (2021/HE002620).

Phase 1: Simulation Design

The simulation involved a 3-week-old infant with laryngomalacia and feeding difficulties. Participants worked through gathering information from a nurse, interviewing the parent, completing a feeding assessment and providing recommendations, and providing feedback to the otolaryngologist. The simulation was developed in line with the eleven criteria proposed in the Healthcare Simulation Standards of Best Practice for Simulation Design [21] (Table 1) and in consideration of the barriers and enablers of psychological safety in healthcare simulation [22]. Two of the research team members with expertise in pediatric feeding (Authors 1 and 2) collaborated to design the simulation. Both authors hold a PhD in this topic area, with ten (Author 2) to twenty (Author 1) years of clinical experience. Best practice management for the scenario was derived from a literature review of infant feeding and laryngomalacia [23,24,25,26,27], as well as the clinical experience of the simulation designers. Consultation was sought with three additional team members with expertise in simulated learning (Authors 3, 4, and 7). These three authors have previously published research in this area, and Authors 4 and 7 hold a PhD in simulated learning. All team members had considerable practical experience in running in-person simulations through multiple years of student and clinician training.

Table 1 Alignment of telesimulation components to the Healthcare Simulation Standards of Best Practice™

Learning objectives regarding knowledge development and the practical application of assessment, management, and interprofessional communication were developed, targeted at a novice learner with <6 months' experience in infant feeding (Table 2). The goals of the simulation were to support learners to develop particular clinical skills (e.g., completing an oral reflex examination) but also to employ critical thinking and problem-solving skills (e.g., making decisions about feeding management), aligning with both behaviorist and constructivist learning theories [19]. Learning opportunities were scaffolded to build on skills and work toward independent practice.

Table 2 Learning objectives for the telesimulation

The simulation involved five components as described in Fig. 1: pre-brief, didactic teaching, part-task activities, the simulation itself, and debrief. Pre-learning resources were sent to participants in the form of an online learning module on respiratory distress (Pediatric Feeding Learning Framework, https://ilearn.health.qld.gov.au/d2l/login). This online learning framework was developed as a national resource by a working group of clinicians from across our state and involved literature review, consultation, and external peer review. Preparatory videos illustrating infant oral reflex assessment, positioning, and feeding equipment were also distributed. The pre-brief session commenced with an ‘ice-breaker,’ which served as an opportunity for participants to establish a psychologically safe learning environment and included personal goal setting for the session. Time was dedicated during the pre-brief to explaining the concept of simulation and the expectations for the session, as well as exploring the learning objectives (Table 2). It was made clear during the pre-brief that differences in opinion were welcomed, with the plan that the facilitator would engage in open discussion and present any refuting evidence where necessary; this discussion would be followed up with resources via email after the simulation experience. During the pre-briefing, the roles of the two facilitators were also explained. A brief didactic teaching session was provided to ensure adequate knowledge of laryngomalacia and respiratory distress prior to commencing the simulation. Part-task activities, where participants practiced specific components of feeding assessment and management, included identifying elements of respiratory distress from a video, reading a vital signs monitor, identifying different equipment for use in the session, and a discussion regarding different management techniques. The simulation itself was designed as a pause-discuss scenario [28], and each participant took a turn at completing a section of the feeding assessment (e.g., nurse interview, parent interview, and infant assessment), with discussion during each pause as to what might happen next and what decisions could be made in assessment and management. Three different ‘scenes’ were presented: questioning of the nurse, assessment and management of the infant/mother, and handover to the otolaryngologist. A life-like low-fidelity infant mannequin was used as the ‘patient’, and the ‘mother’ simulated holding and feeding the infant. The session finished with a structured, facilitator-guided debrief [29].

Fig. 1

Facilitator 1 (Author 1) acted as the parent/nurse/otolaryngologist, and Facilitator 2 supported discussion and decision-making for the group. Facilitator 1 had considerable clinical experience in pediatric feeding, affording a high degree of content knowledge and allowing flexibility in responding to participant actions that differed slightly from the planned script. As this was planned as a low-fidelity simulation, Facilitator 2 also provided additional auditory cues to support assessment/management during the simulation (e.g., advising participants that the baby was crying). Before the simulations were run, the facilitators met on three occasions to complete training and troubleshoot any issues (approximately 3 h total).

Phase 2: Telesimulation Adaptations

Three members of the team (Authors 1, 2, and 3) met on two occasions to troubleshoot the translation of the in-person simulation scenario into a virtual medium (via Zoom®). Some guidance was taken from previous literature in medical education [6, 30, 31]. Zoom® was chosen for this study given its accessibility for most users, enabling easy replication of this telesimulation methodology. After exploring the capabilities of Zoom®, various functions were pilot tested with three team members in different rooms at the same physical location to allow for cross-checking.

It was determined that use of the ‘spotlight’ function to highlight the active participant and the simulated parent/nurse/otolaryngologist would be useful from the participants’ viewpoint (see Fig. 2). The ‘spotlight’ function allows the host to highlight specific Zoom® participants, making the spotlighted participants’ tiles larger than those of the other participants and increasing focus on those who are actively engaging during the simulation sequence. The use of different context-specific backgrounds (i.e., clinical ‘scenes’) to support the fidelity of the simulation scenario was also discussed and applied (see Fig. 3). Participants were encouraged to set their view to ‘speaker view’ during the simulation experience and ‘gallery view’ during the pre-brief and debrief. In speaker view, all learners could be seen in the smaller tiles when only the two participants were spotlighted (Fig. 2), but the other learners could not be seen when the screen was shared with the vital signs monitor (Fig. 3). Zoom® also allowed manipulation of the size of the screen-shared Microsoft PowerPoint® information relative to the spotlighted participants. Participants were encouraged to reduce the size of the screen-shared information and increase the size of the participant tiles by dragging the cursor over the control in the middle of the screen between the two items; this manipulation was felt to additionally support ease of viewing and participation (see Fig. 3). The impact of competing noise on the Zoom® platform was also explored, specifically whether the simulated nurse/parent/otolaryngologist, participant, and facilitator could all use their microphones simultaneously without compromising audio quality. Team members discussed use of the hand-raise function as a means of coordinating opportunities for participants to contribute.

Fig. 2 Use of spotlight function to optimize view of simulated nurse (with context-specific background) and participant

Fig. 3 Use of spotlight function to display the simulated parent and infant, the “participant” engaging with them, and the vital signs monitor

For the pre-brief section, use of the virtual whiteboard and polling functions within Zoom® was explored in setting up the ice-breaker tasks. The virtual whiteboard was used to brainstorm participant-specific goals for the session, and these were reviewed during debrief at the end of the session. The part-task activities were carefully designed to enable transfer to a virtual medium and included the use of video interpretations, discussion-based tasks, and an equipment sorting activity in breakout rooms with smaller groups. For example, participants watched videos of infants feeding and practiced identifying indicators of respiratory distress with the support of the facilitators. It was determined that it would be valuable for participants to have a doll or plush toy available with them to support skill development in oral reflex examination and for use during the session (i.e., to hold and practice completing an oral reflex exam during the part-task activity).

During the simulation itself, from a physical perspective, the team explored the visibility of the infant/mother and their relative distance from the camera. For the infant/mother portion of the simulation, use of a virtual background affected how well the infant mannequin could be seen, so a decision was made to use no virtual background and to seat the mother, wearing dark clothing, against a dark wall. The infant/mother were situated 1.5 m from the camera on a swivel chair, so that participants could more easily instruct the mother to move to manipulate their view. Simple costumes (e.g., mother in dressing gown, nurse in surgical scrubs) were used to help support realism for participants (see Figs. 2 and 3). The facilitator’s Zoom® tile label was also changed to make clear to participants whom they were engaging with.

Phases 3 and 4: Recruitment for User Testing

All participants for user testing were speech pathologists recruited from local health services (within an approximately 18-mile radius) using convenience sampling, through distribution of an expression of interest. Prior to the session, following the informed consent process, participants were asked to report their experience in infant feeding. For the purposes of this study, participants with <6 months' infant feeding experience were considered 'novice' practitioners, and participants with >6 months' experience were considered 'experienced.' Participants were allocated to either Session 1 or Session 2 based on availability and self-identified experience levels.

Phase 3: User Testing, Feedback, and Modifications (Session 1)

Group Demographics

Pilot testing of the developed simulation program for Session 1 occurred with a cohort of five participants, two of whom were novice and three of whom were experienced (see Table 3). Anonymous polling was used during the ice-breaker section to explore participants’ prior experiences with simulation and feelings of anxiety, with collated feedback provided immediately across the group to support understanding of the perspectives of other participants in the virtual ‘room.’ As can be seen in Table 3, most participants had not participated in a simulated learning experience previously, and the median nervousness score (of four participants) was 1.5, with responses ranging from 1 (not nervous at all) to 3 (somewhat nervous). Polling data were missing for one participant due to late arrival.

Table 3 Demographics of pilot participants

Group Feedback and Changes

Immediately following the simulation experience, participants were invited to provide feedback via an anonymous survey. Participants were also invited to take part in an optional focus group with a member of the research team not involved in the delivery of the telesimulation (Author 3), an active researcher in the field who holds a PhD, has experience conducting focus groups, and was known to participants. Focus groups were conducted virtually via Zoom® and were 20–30 min in duration. As not all participants completed the survey, the same questions were posed during the focus group to allow all participants the opportunity to respond. Additional probes for exploration during the focus group were developed based on responses from the anonymous survey. The survey/focus group guide is provided in Table 4.

Table 4 Survey/focus group questions and additional focus group probes

Results from the feedback survey and the focus groups were interpreted using manifest content analysis [32] and are reported using guidance from the Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist [33]. Transcriptions were completed by a research assistant and checked by the primary author. Two authors (Authors 1 and 6) reviewed all survey responses and the focus group transcripts and independently developed condensed meaning units and codes. These two authors then met to reach consensus regarding coding and to develop categories from the codes. The full team then met to achieve agreement regarding the findings and resolve any disagreements between Authors 1 and 6. Following this process, a survey was distributed to the participants to determine whether the codes and categories derived were reflective of their experiences and feedback. Five participants responded, with all noting that codes and categories reflected their feedback.

Three participants completed the survey and two participated in the optional focus group (Focus Group 1, identified as Clinicians 1 and 2). As the survey was anonymous, it was not possible to discern whether participants in the focus group also completed the survey. Quotes are presented in the text below with an identifier (e.g., Clinician X). Quotes with no identifier were derived from an anonymous survey response.

Overall, six categories were agreed upon regarding Session 1 feedback. These included feedback regarding (a) simulation preparation and structure, (b) session practicalities, (c) supports for realism, (d) Zoom® functions, (e) group dynamics, and (f) participants’ experiences of the simulation. Specific coding associated with each of these categories and the learnings applied for Session 2 are presented in Table 5.

Table 5 Feedback from participants and changes adopted

In simulation preparation and structure, participants reported that the pre-learning activities “set up the course well” and one participant felt that “…if you hadn’t had done that first or even had that theoretical refresher with [Facilitator 1] before getting into the SIM [telesimulation], it would have been really challenging…” (Clinician 1). Participants also appreciated the ‘ice-breakers’ at the beginning of the session and commented that they helped them feel that “others were in the same boat” and that “they’re safe people” (Clinician 2). Participants enjoyed that the activities were layered to build up to simulation participation, i.e., “you sort of built up…you had these little practical tasks along the way prior to getting into the full-blown sim…” (Clinician 1). The overall flow of the simulation was considered to be appropriate by multiple participants.

Regarding session practicalities, multiple participants reported that time was an issue during the session. Specific areas for improvement included allowing adequate breaks, increased time for discussion of management options during the simulation, and increased time for the debrief. One participant suggested “it would also be helpful just to have more time to sit to pause and discuss what is the rationale for [that]?” (Clinician 2). This feedback resulted in the extension of Session 2 by one hour. There was also an issue with breakout room initiation, where participants left to join a breakout room prior to having the activity explained to them. This was remedied for Session 2 by not initiating breakout rooms until after the activity description had been completed. Finally, on the day of Session 1, there was a state-wide internet outage, which impacted participants’ ability to join the Zoom® session at the scheduled start time. This resulted in some participants joining late, a malfunction of the breakout room function, and some lag with Microsoft PowerPoint® sharing, which caused the session to run slightly late. Participants recognized these issues, noting that the “only tech issues were from the…internet.” The team discussed ensuring that contact details would be provided to support troubleshooting in the event that participants experience any technical issues.

Participants described the backgrounds, costumes, vital signs monitors, and the “real-looking doll” as helpful supports for realism during the simulation. One participant reported “[Facilitator 1]’s acting was really good and it actually felt very realistic. It made me feel like I was in that clinical situation with how stressed she was…” (Clinician 1). Several participants reported that there was not enough auditory information (i.e., infant respiratory sounds) to participate in the initial assessment with the mother/infant. One participant described their experience:

I was sort of holding back and waiting for that [auditory information]. But in waiting for that I was seeing the monitor go up and then getting quite anxious that I wasn’t doing anything. Um, so that was just something that I had to overcome, be like, okay, well, we need to stop, obviously. (Clinician 2)

Subsequently, a decision was made to add recordings of infant stridor sounds and more auditory assessment points from Facilitator 2 for Session 2.

Regarding the Zoom® functions, participants reported that use of the ‘spotlight’ function in Zoom® meant that “only the clinician role-playing and the presenter were viewed—this really helped to ‘get in to character’ and act like we usually would in a work scenario.” Although the facilitators had intended to use the ‘hand-raise’ function during the session, this was not used consistently, and several participants stated that they would have preferred more consistent use of this function, for example, “to make sure that I’m giving other people opportunity… I would have appreciated maybe more consistent use of the hands up…” (Clinician 1). Subsequently, the facilitators planned to more clearly explain and reiterate use of the hand-raise function for Session 2.

Group dynamics were an important consideration in setting up the simulation experience to support psychological safety. Participants fed back that the group size of six participants was good, allowing participants to “share ideas/opinions with comfort,” get a “decent go at doing the practical tasks” (Clinician 1), and to “learn and to grow and also make mistakes” (Clinician 2). Having a mixture of experienced clinicians and novice clinicians was viewed as valuable, but it was important to identify different levels of experience early in the session so that the participants were “all on a level playing field” (Clinician 2).

Yeah, I was, I was glad that there were some other people that were fairly new to, like Ped [pediatric] feeding clinical work as well. So I think it’s important to have not just one person who’s the newie (Clinician 2).

One participant identified that not having any direct supervisors in the group was helpful to her “comfort” and that “if one of my supervisors was there as well, it might then create a bit more anxiety if I were to make a mistake while learning in that space” (Clinician 2).

In general, participants’ experience of the simulation put them in a “good headspace.” The case was considered to be “bread and butter” for an experienced clinician but challenging for a novice. One participant reported experiencing some stress during the scenario, although this was more to do with the authenticity of the set up than the simulation experience itself. Participants reported the simulation was “convenient and easy to access” via Zoom®.

Phase 4: User Testing of Modified Simulation, Feedback, and Final Modifications (Session 2)

Group Demographics

Following user feedback, modifications were made to the program and a second set of clinicians completed the revised simulation program. This group included six participants, three of whom described themselves as novice practitioners and three of whom were experienced. More participants from Session 2 had participated in simulation experiences previously, although this was predominantly during undergraduate study. The median score for nervousness was 3, ranging from 2 (a little nervous) to 5 (very nervous) (Table 3).

Group Feedback and Changes

Four participants completed the anonymous survey following the telesimulation and four (identified as Clinicians 3–6) participated in the optional focus group (Focus Group 2). Data from both the survey and the focus groups were interpreted using manifest content analysis. The same six categories emerged, namely (a) simulation preparation and structure, (b) session practicalities, (c) supports for realism, (d) Zoom® functions, (e) group dynamics, and (f) participants’ experiences of the simulation, with the addition of a new category, “future enhancements” (see Table 5).

Regarding simulation preparation and structure, this second set of participants provided similarly positive feedback to that of the clinicians who took part in the first telesimulation session, with several participants emphasizing the importance of the pre-brief in making participants “feel comfortable” (Clinician 5) and setting the expectation that “this was a learning opportunity and not an assessment.” When considering the debrief, participants reported that having the opportunity to “reflect at the end on what I learnt/gained from the experience” was helpful. With regard to session practicalities, participants again reported that they benefited from the pause-discuss format of the simulation, but that there was not enough time for an adequate debrief at the end of the session.

In response to feedback from Session 1 and to support realism, audio clips depicting stridor in the infant were played and increased facilitator cueing was added to the second presentation of the telesimulation learning experience. This, however, resulted in excessive audio-visual activity, which was reported as “distracting” by the participants. The audio quality of the stridor sounds was also described as poor, with one participant commenting, “The sound to me kind of sounded like a cough, but it was actually stridor.” (Clinician 4). As such, the team made the decision to limit auditory information in future sessions to facilitator cues only and to carefully script and rehearse this cueing. Participants again described the overall experience as realistic, commenting particularly on the acting and costumes of Facilitator 1.

In Zoom® functions, participants fed back that the “hand-raising function worked well to ensure no one spoke over the top of each other” in this session. Additionally, participants enjoyed use of the virtual whiteboard function in Zoom® and felt that the “multiple methods of communication (writing in chat, speaking on camera, breakout groups) gave different participants a chance to contribute in a way they preferred,” which was identified as an additional benefit of the telesimulation modality. One participant described feeling that the telesimulation modality allowed them to participate more equally than if they were in-person:

Yeah so, one thing that I thought was a great advantage of the telehealth setup was that we were all very much even, it was a very equal experience. Because typically, you know, you walk into a room where potentially a group are quite familiar with each other and they might sit next to, you know, one another, if they’re already well known. And equally, there’s a bit of an ordering in terms of your approximation to the person presenting, etc. And so the fact that we can all visibly see each other, very equally, I thought was a real advantage. (Clinician 3).

This same participant commented on the disconcerting nature of the camera, describing that “when you can actually see yourself the whole time, I think it does add an extra level of intensity and an added level of self-consciousness perhaps” (Clinician 3).

The group dynamics for Session 2 were different to those in Session 1. Despite there being more novice participants in Session 2, two novice participants felt uncomfortable participating, reporting that they “…would have felt more comfortable in a group with other more novice speechies [speech pathologists] in this area” (Clinician 6) and suggesting that novice and experienced clinicians should be split. These clinicians described “feeling inferior to the other clinicians in the group who work with feeding on a daily basis.” Conversely, other participants in the group “loved having a mix, because it created a bit of a safe space” (Clinician 3). These participants described the value of having different perspectives in the group, stating things like “…actually if there are people going back to the basics, asking pretty simple questions, it makes it safer to then be a bit more vulnerable in the learning process, I think” (Clinician 3). Multiple participants also valued the modeling of clinical skills that more experienced clinicians could provide. The research team discussed this issue at length and decided that the benefit of incorporating a group with a varied skill mix outweighed the risks in this scenario, but that further preparation opportunities should be provided to novice participants so that they would know what to expect during the simulation experience.

Skill-mix issues did influence participants’ reported experiences, with some participants reporting feeling uncomfortable and lacking the confidence to volunteer, describing that “they were happy to just sit and observe and learn” (Clinician 5). Other participants in the group, however, reported positive experiences, including that the simulation increased their confidence and consolidated their knowledge, describing it as a “rich learning experience” (Clinician 3) that “met my expectations.” One participant reported that the format of learning was more accessible for them:

“I accessed this learning experience without having to leave my worksite and therefore feel that the telehealth format, for me, reduces some barriers to learning (requesting leave, increased travel time to and from learning facility).”

Finally, this group raised several suggestions for future enhancements to the telesimulation experience, including that future cases could include increased social complexity or be conducted over multiple sessions (i.e., the simulation might involve an initial assessment and multiple reviews for the same patient). Further opportunities for peer discussion regarding problem-solving and case management, particularly about “what you wouldn’t do or when things don’t go to plan” (Clinician 3), were also suggested. In addition, split breakout rooms between novice and experienced clinicians for part-task sessions were suggested to support more targeted discussions at similar skill levels.

A final list of general recommendations for telesimulation based on these findings is presented in Table 6.

Table 6 Final list of general recommendations for telesimulation based on findings
