Unintended consequences of technology in competency-based education: a qualitative study of lessons learned in an OtoHNS program

The e-portfolio created barriers to the assessment practices of documenting, accessing, and interpreting EPA data for both faculty and residents, but residents faced more challenges. Difficulties collating and visualizing EPA data limited the capacity of competency committee members to review resident performance in depth. Residents, in turn, faced three obstacles to using the e-portfolio: requesting assessments, receiving formative feedback, and using data to self-regulate learning. The workload of trying to manage EPA data led to unintended workarounds for these barriers, resulting in a “numbers game” (R7) that residents played to acquire successful EPA assessments.

The findings are organized to detail each technology barrier and the resulting workarounds. Figure 1 illustrates themes in the data that prioritize needs for searchable, contextual, visual, and mobile technology solutions to overcome these challenges.

Fig. 1 Envisioning a resident dashboard

Requesting EPA assessments

Residents described difficulties requesting, through the e-portfolio, the EPAs that mapped onto the procedures they were learning. One commented, “the stem of the problem is, the website is not friendly, EPAs are not designed properly” (R8). Another explained, “you have to find which is the appropriate EPA, which is not always very obvious. For example, this one about hearing loss is actually the tube insertion one” (R2). The design visualizations of improved interfaces showed that how an EPA is structured in the curriculum plan matters for how residents search for it in the e-portfolio. Their sketches illustrated that organizing EPAs in the database in the same way they are laid out in the program curriculum map, by level of training and rotation, would helpfully narrow the search field to “the appropriate EPA”. The searchable schematic in Fig. 1 encapsulates these ideas.

Residents also felt hampered by the extra work required to manage EPA feedback requests and notifications within the e-portfolio: “You hit request, and then it generates a note to the staff who then sees, ‘Request for EPA Number 3.7’… they don’t get any other information than that” (R2). To prompt faculty to recall the case for feedback, residents developed communicative workarounds: “you have to either communicate that to them in person and they have to remember, or you have to send a separate email telling them, hey, I’m sending you an EPA, it’s about patient X, Y, Z” (R2). Residents agreed that this tactic of sending extra emails and reminders was essential to ensure that faculty understood which procedure they were being asked to assess and completed the assessment documentation. However, while residents shouldered the workload of filling in contextual gaps for faculty, they faced the same lack of context when receiving feedback from the system.

Receiving formative feedback

The residents described the feedback notifications as “generic”, making it difficult to remember which cases they related to.

It’s just a very generic email so, it doesn’t say what you’re being evaluated, it just says so-and-so completed an evaluation on your form. You have to think, 30 days later, you have to think about what did you send them? (R7)

The default setting of up to “30 days” from sending a request to receiving feedback was intended to allow faculty time to complete assessments. However, both the delay of information and lack of context rendered the feedback uninformative, as the following conversation between residents highlights:

R4: I, over time, I just stopped reading the EPA feedback… I mean, just delete it from my inbox. I guess, yeah, it doesn’t tell me the exact contextual context. I know that I achieved it when I asked so I don’t read the feedback for it.

There were two reasons why residents found that feedback documented in the e-portfolio did not support their learning. First, as Resident 7 pointed out, “30 days later” the assessment served a purely summative, “black or white”, purpose. Residents found the feedback lacked specificity about ways to improve performance. Our own analysis of EPA feedback in the residents’ e-portfolios confirmed that details were scant: “good economy of motion”, “good understanding of relevant anatomy”, “knew all components of the procedure”.

But there was another reason feedback was uninformative. Resident 4’s commentary on why feedback notification emails were deleted is telling: “I know that I achieved it when I asked”. The residents described a process of waiting until they were reasonably assured of receiving a ‘successful’ entrustment rating before sending feedback requests for an EPA: “When?—confident on procedure, never first-time doing procedure” (R1).

It might seem that residents were hesitant to receive more improvement-focussed assessment. However, when asked how they decided whom to ask and when to ask for EPA assessments, residents’ answers pointed to a tactic of efficiency rather than avoidance of constructive feedback:

It’s basically a numbers game. You’re like, are you going to send one out that’s got 30 days and then you’re going to have to re-request it? Probably not. It’s not a good use of your time, it’s not a good use of their time. (R7)

Given the high stakes of the “numbers game” for progression, it might also seem that residents would seek feedback strategically from faculty known to give higher entrustment ratings. However, as the following exchange outlines, this was not the case.

R11: I would say, I just ask based on people who I know will get it back to me and people who are willing to do it.

R2: And most staff will really only happily do one EPA for you a day.

To protect their own time and faculty time within the workload of CBD, residents managed the numbers game by deciding whom to ask based on faculty approachability and efficiency, and by deciding when to ask once they were reasonably confident of success. The contextual features in the second schematic in Fig. 1 show how a technology design that includes contextual details in feedback requests could reduce the workload of managing the numbers game and increase the value of feedback.

Using assessment data to self-regulate learning

Faculty and residents also shared that the technology design made it difficult to track progression towards entrustment on EPAs. In the CBD curriculum, EPAs have a number of components; for example, entrustment of the tonsillectomy EPA required a variety of patient indications and multiple assessment observations. In the competency committee meeting, members had to toggle between different screens to see how many EPA observations were complete and which contextual variables were incomplete, and to read the feedback on the observations. Faculty struggled to interpret this data holistically. The following exchange between two committee members indicates that faculty understood that the accessibility of EPA data was a problem for the residents as well:

S2: They need a system for logging themselves—they can’t see them, we struggle because we’re flipping back and forth between screens.

S6: If they could just have a personal dashboard so they know! It’s so hard to keep track.

The problem residents faced in tracking their progression was piecing contextual variables together: finding opportunities on different days to complete EPA requirements. The challenge to “keep track” was compounded by the absence of interpretive details in the reporting system.

As Resident 11 explained, notifications of a “pass” for an EPA assessment did not provide information on which contextual variables “contributed to your pass”. The schematic for visual features in Fig. 1 illustrates the residents’ requests for a more accessible snapshot of EPAs in progress and more informative metrics on completion of contextual variables.
