Electronic health record (EHR) systems are real-time records of patient-centred clinical and administrative data that provide instant and secure information to authorized users. Well-designed and well-implemented systems should facilitate timely clinical decision-making.1,2 However,3 the prevalence of poorly performing systems suggests that usability principles are commonly violated.4
There are many methods to evaluate system usability.5 Usability evaluation methods cited in the literature include user trials, questionnaires, interviews, heuristic evaluation and cognitive walkthrough.6-9 There are no standard criteria for comparing results from these different methods,10 and no single method identifies all (or even most) potential problems.11
Previous studies have focused on usability definitions and attributes.12-17 Systematic reviews in this field often present a list of usability evaluation methods18 and usability metrics19 with additional information on the barriers and/or facilitators to system implementation.20,21 However, many of these reviews are restricted to a single geographical region,22 type of illness, health area, or age group.23
The lack of consensus on which methods to use when evaluating usability24 may explain the inconsistent approaches demonstrated in the literature. Recommendations exist,25-27 but none contains guidance on the use, interpretation and interrelationship of the usability evaluation methods, usability metrics and varied measurement techniques applied to assess EHR systems used by clinical staff. Clinical staff are a specific group of end-users whose system-based decisions have a direct impact on patient safety and health outcomes.
The objective of this systematic review was to identify and characterize usability metrics (and their measurement techniques) within usability evaluation methods applied to assess medical systems, used exclusively by hospital-based clinical staff, for individual patient care. For this study, all such components in the included studies were classed as “metrics” to facilitate comparison of methods when testing and reporting EHR system development.28 Under this convention, for example, Nielsen's satisfaction attribute is treated as equivalent to the ISO usability component of satisfaction.
2 METHODS
This systematic review was registered with PROSPERO (registration number CRD42016041604).29 During the literature search and initial analysis phase, we decided to focus on the methods used to assess graphical user interfaces (GUIs) designed to support medical decision-making rather than visual design features. We changed the title of the review to reflect this decision. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines30 (Appendix Table S1).
2.1 Eligibility criteria
Included studies evaluated electronic systems or medical devices that were used exclusively by hospital staff (defined as doctors, nurses, allied health professionals, or hospital operational staff) and that presented individual patient data for review.
Excluded studies evaluated systems operating in nonmedical environments, systems that presented aggregate data (rather than individual patient data) and those not intended for use by clinical staff. Results from other systematic or narrative reviews were also excluded.
2.2 Search criteria
The literature search was carried out by TP using Medline, EMBASE, CINAHL, the Cochrane Database of Systematic Reviews, and Open Grey bibliographic databases for studies published between January 1986 and November 2019. The strategy combined the following search terms and their synonyms: usability assessment, EHR, and user interface. Language restrictions were not applied. The reference lists of all included studies were checked for further relevant studies. Appendix Table S2 presents the full Medline search strategy.
2.3 Study selection and analysis
The systematic review was organized using Covidence systematic review management software (Veritas Health Innovation Ltd, Melbourne).31 Two authors (MW, VW) independently reviewed all search result titles and abstracts. The full-text studies were then screened independently (MW, VW). Any discrepancies between the authors regarding the selection of articles were reviewed by a third party (JM) and a consensus was reached in a joint session.
2.4 Data extraction
We planned to extract the following data:
- Demographics (authors, title, journal, publication date, country).
- Characteristics of the end-users.
- Type of medical data included in the EHR systems.
- Usability evaluation methods and their types, such as questionnaires or surveys, user trials, interviews, and heuristic evaluation.
- Usability metrics (components variously defined as attributes, criteria,32 or metrics33). For the purpose of this review, we adopted the term “metric” to describe any such component, but we include all metric-like terms used by authors in the included studies: satisfaction, efficiency, effectiveness, learnability, memorability, and errors.
- Types and frequency of usability metrics analysed within usability evaluation methods.
Data were extracted in two stages. Stage 1 involved the extraction of general data from each study that met our primary criteria, based on the original data extraction form. Stage 2 extended the extraction to capture more specific information, such as the measurement techniques for each identified metric, as we observed that these were reported in different ways. The extracted data were checked for agreement, with a target of >95% agreement between extractors. All uncertainties regarding data extraction were resolved by discussion among the authors.
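To make the agreement check concrete, the sketch below shows a simple percent-agreement calculation between two extractors; the helper function, field values and example data are illustrative assumptions, not the review's actual extraction records.

```python
# Illustrative sketch only: hypothetical extraction decisions, not data from the review.

def percent_agreement(extractor_a, extractor_b):
    """Return the simple percent agreement between two extractors' item-level decisions."""
    if len(extractor_a) != len(extractor_b):
        raise ValueError("Both extractors must record the same items")
    matches = sum(a == b for a, b in zip(extractor_a, extractor_b))
    return 100.0 * matches / len(extractor_a)

# Hypothetical item-level decisions for one study (method, metric and participant fields)
decisions_a = ["user trial", "questionnaire", "satisfaction", "efficiency", "nurses"]
decisions_b = ["user trial", "questionnaire", "satisfaction", "effectiveness", "nurses"]

agreement = percent_agreement(decisions_a, decisions_b)
print(f"Agreement: {agreement:.1f}%")  # 80.0% in this toy example
if agreement <= 95:
    print("Below the >95% target: resolve disagreements by discussion")
```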
2.5 Quality assessment
We used two checklists to evaluate the quality of the included studies. The first tool, the Downs & Black (D&B) Checklist for the Assessment of Methodological Quality,34 contains 27 questions covering the following domains: reporting quality (10 items), external validity (three items), bias (seven items), confounding (six items) and power (one item). It is widely used for clinical systematic reviews because it is validated for assessing randomized controlled trials and observational and cohort studies. However, many of the D&B checklist questions have little or no relevance to studies evaluating EHR systems, particularly because EHR systems are not classified as “interventions.” We therefore modified the D&B checklist to create a usability-oriented tool. The purpose of our modified D&B checklist, comprising 10 questions, was to assess the quality of the study aim (specific to usability evaluation methods) and whether the included methods and metrics were supported by peer-reviewed literature. Our modified D&B checklist also examined whether the participants of the study were clearly described and representative of the eventual (intended) end-users, whether the time period over which the study was undertaken was clearly described, and whether the results reflected the methods and were described appropriately. The modified D&B checklist is summarized in the appendix (Appendix Table S3). Using this checklist, we defined “high quality” studies as those which scored well in each of the domains (scores ≥ eight). Studies which scored well in most but not all domains were defined as “moderate quality” (scores of six and seven). The remainder were defined as “low quality” (scores of five and below). We decided not to exclude any paper due to low quality.
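As a worked illustration of the banding rule above, here is a minimal sketch (a hypothetical helper and hypothetical scores, not part of the review's actual tooling), assuming each study receives an integer score out of 10 on the modified D&B checklist:

```python
# Illustrative sketch of the quality banding described above (hypothetical helper and scores).

def classify_quality(score: int) -> str:
    """Map a modified D&B checklist score (0-10) to the quality band used in the text."""
    if not 0 <= score <= 10:
        raise ValueError("Modified D&B checklist scores range from 0 to 10")
    if score >= 8:
        return "high quality"
    if score >= 6:
        return "moderate quality"
    return "low quality"  # low-quality studies were still retained in the review

# Hypothetical scores for three studies
for score in (9, 6, 4):
    print(f"score {score} -> {classify_quality(score)}")
```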
3 RESULTS
We followed the PRISMA guidelines for this systematic review (Appendix Table S1). The search generated 2231 candidate studies. After the removal of duplicates, 1336 abstracts remained (Figure 1). From these, 130 full texts were reviewed, and 51 studies were eventually included. All included studies were published between 2001 and 2019. The systems in 86% of the included studies were tested with clinical staff, in 6% with usability experts, and in 8% with both clinical staff and usability experts. The characteristics of the included studies are summarized in Table 1.
FIGURE 1. Study selection process: PRISMA flow diagram
TABLE 1. Details of included studies
Ref | Author | Year | Country | Participants | Number | System type
35 | Aakre et al. | 2017 | USA | Internal Medicine Residents, Resident, Fellows, Attending Physicians | 26 | EHR with SOFA(a) score calculator
36 | Abdel-Rahman | 2016 | USA | Physicians, Residents, Nurses, Pharmacologists, Pharmacists, Administrators | 28 | EHR with the addition of a medication display
37 | Al Ghalayini, Antoun, Moacdich | 2018 | Lebanon | Family Medicine Residents | 13 | EHR evaluation
38 | Allen et al. | 2006 | USA | “Experts” experienced in usability testing | 4 | EHR evaluation
39 | Belden et al. | 2017 | USA | Primary Care Physicians | 16 | Electronic clinical notes
40 | Brown et al. | 2001 | USA | Nurses | 10 | Electronic clinical notes
41 | Brown et al. | 2016 | UK | Health Information System Evaluators | 8 | Electronic quality-improvement tool
42 | Brown et al. | 2018 | UK | Primary Care Physicians | 7 | Electronic quality-improvement tool
43 | Chang et al. | 2011 | USA | Nurses, Home Aides, Physicians, Research Assistants | 60 | EHR on mobile devices
44 | Chang et al. | 2017 | Taiwan | Medical Students, Physician Assistant Students | 132 | EHR with the addition of a medication display
45 | Devine et al. | 2014 | USA | Cardiologists, Oncologists | 10 | EHR with clinical decision support tool
46 | Fidler et al. | 2015 | USA | Critical Care Physicians, Nurses | 10 | Monitoring – physiology (for patients with arrhythmias)
47 | Forsman et al. | 2013 | Sweden | Specialist Physicians, Resident Physicians, Usability Experts | 12 | EHR evaluation
48 | Fossum et al. | 2011 | Norway | Registered Nurses | 25 | EHR with clinical decision support tool
49 | Gardner et al. | 2017 | USA | Staff Physicians, Fellows, Medical Resident, Nurse Practitioners, Physician Assistant | 14 | Monitoring – physiology (for patients with heart failure)
50 | Garvin et al. | 2019 | USA | Gastroenterology Fellows, Internal Medicine Resident, Interns | 20 | EHR with clinical decision support tool for patients with cirrhosis
51 | Glaser et al. | 2013 | USA | Undergraduates, Physicians, Registered Nurses | 18 | EHR with the addition of a medication display
52 | Graber et al. | 2015 | Iran | Physicians | 32 | EHR with the addition of a medication display
53 | Hirsch et al. | 2012 | Germany | Physicians | 29 | EHR with clinical decision support tool
54 | Hirsch et al. | 2015 | USA | Internal Medicine Residents, Nephrology Fellows | 12 | EHR evaluation
55 | Hortman, Thompson | 2005 | USA | Faculty Members, Student Nurse | 5 | Electronic outcome database display
56 | Hultman et al. | 2016 | USA | Resident Physicians | 8 | EHR on mobile devices
57 | Iadanza et al. | 2019 | Italy | An evaluator | 1 | EHR with ophthalmological pupillometry display
58 | Jaspers et al. | 2008 | Netherlands | Clinicians | 116 | EHR evaluation
59 | Kersting, Weltermann | 2019 | Germany | General Practitioners, Practice Assistants | 18 | EHR for supporting longitudinal care management of multimorbid seniors
60 | Khairat et al. | 2019 | USA | ICU Physicians (Attending Physicians, Fellows, Residents) | 25 | EHR evaluation
61 | Khajouei et al. | 2017 | Iran | Nurses | 269 | Electronic clinical notes
62 | King et al. | 2015 | USA | Intensive Care Physicians | 4 | EHR evaluation
63 | Koopman, Kochendorfen, Moore | 2011 | USA | Primary Care Physicians | 10 | EHR with clinical decision support tool for diabetes
64 | Laursen et al. | 2018 | Denmark | Human Computer Interaction Experts, Dialysis Nurses and Nephrologist | 8 | EHR with clinical decision support tool for patients in need of haemodialysis therapy
65 | Lee et al. | 2017 | South Korea | Professors, Fellows, Residents, Head Nurses, Nurses | 383 | EHR evaluation
66 | Lin et al. | 2017 | Canada | Physicians, Nurses, Respiratory Therapists | 22 | EHR evaluation
67 | Mazur et al. | 2019 | USA | Residents and Fellows (Internal Medicine, Family Medicine, Paediatrics Specialty, Surgery, Other) | 38 | EHR evaluation
68 | Nabovati et al. | 2014 | Iran | Evaluators | 3 | EHR evaluation
69 | Nair et al. | 2015 | Canada | Family Physicians, Nurse Practitioners, Family Medicine Residents | 13 | EHR with clinical decision support tool for chronic pain
70 | Neri et al. | 2012 | USA | Genetic Counsellors, Nurses, Physicians | 7 | Electronic genetic profile display
71 | Nouei et al. | 2015 | Iran | Surgeons, Assistants, Other Surgery Students (Residents or Fellowship) | unknown | EHR evaluation within theatres
72 | Pamplin et al. | 2019 | USA | Physicians, Nurses, Respiratory Therapists | 41 | EHR evaluation
73 | Rodriguez et al. | 2002 | USA, Puerto Rico | Internal Medicine Resident Physicians | 36 | EHR evaluation
74 | Schall et al. | 2015 | France | General Practitioners, Pharmacists, Non-Clinician E-Health Informatics Specialists, Engineers | 12 | EHR with clinical decision support tool
75 | Seroussi et al. | 2017 | USA | Nurses, Physicians | 7 | EHR evaluation
76 | Silveira et al. | 2019 | Brazil | Cardiologists and Primary Care Physicians | 15 | EHR with clinical decision support tool for patients with hypertension
77 | Su et al. | 2012 | Taiwan | Student Nurses | 12 | EHR evaluation
78 | Tappan et al. | 2009 | Canada | Anaesthesiologists, Anaesthesia Residents | 22 | EHR evaluation within theatres
79 | Van Engen-Verheul et al. | 2016 | Netherlands | Nurses, Social Worker, Medical Secretary, Physiotherapist | 9 | EHR evaluation
80 | Wachter et al. | 2003 | USA | Anaesthesiologists, Nurse Anaesthetists, Residents, Medical Students | 46 | Electronic pulmonary investigation results display
81 | Wu et al. | 2009 | Canada | Family Physicians, Internal Medicine Physician | 9 | EHR on mobile devices
82 | Zhang et al. | 2009 | USA | Physicians, Health Informatics Professionals | 8 | EHR evaluation
83 | Zhang et al. | 2013 | USA | Physicians, Health Informatics Professionals | unknown | EHR evaluation
84 | Zheng et al. | 2007 | USA | Active Resident Users, Internal Medicine Residents | 30 | EHR with clinical reminders
85 | Zheng et al. | 2009 | USA | Residents | 30 | EHR with clinical reminders
(a) Sequential Organ Failure Assessment.
Of the included studies, 16 evaluated generic EHR systems. Eleven evaluated EHR decision support tools (four for all ward patients, one for patients with diabetes, one for patients with chronic pain, one for patients with cirrhosis, one for patients requiring haemodialysis therapy, one for patients with hypertension, one for cardiac rehabilitation and one for management of hypertension, type-2 diabetes and dyslipidaemia). Seven evaluated specific electronic displays (physiological data for patients with heart failure or arrhythmias, genetic profiles, an electronic outcomes database, longitudinal care management of multimorbid seniors, chromatic pupillometry data, and pulmonary investigation results).
Four studies evaluated medication-specific interfaces. Three evaluated electronic displays for patients' clinical notes. Three evaluated mobile EHR systems. Two evaluated EHR systems with clinical reminders. Two evaluated quality-improvement tools. Two evaluated systems for use in the operating theatre environment, and one study evaluated a sequential organ failure assessment score calculator to quantify the risk of sepsis.
We extracted data on GUIs. All articles provided some description of the GUIs, but these descriptions were often incomplete or limited to a single screenshot. It was not possible to extract further useful information on GUIs. Appendix Table S4 specifies the types of data included in the EHR systems.
3.1 Usability evaluation methods
Ten types of methods to evaluate usability were used in the 51 studies included in this review. These are summarized in Table 2. We categorized the 10 methods into broader groups: user trial analysis, heuristic evaluation, interviews and questionnaires. Most authors applied more than one method to evaluate electronic systems. User trials were the most common method, reported in 44 studies (86%). Questionnaires were used in 40 studies (78%), heuristic evaluation in seven studies (14%) and interviews in 10 studies (20%). We categorized thinking aloud, observation, a three-step testing protocol, comparative usability testing, functional analysis and sequential pattern analysis as user trial analysis. The types of usability evaluation methods are described in Table 3.
TABLE 2. Usability evaluation methods. Columns: Ref; user trial analysis (user trial, thinking aloud, observation, comparative usability testing, a three-step testing protocol, functional analysis, sequential pattern analysis); cognitive walkthrough; heuristic evaluation; questionnaires/surveys; interview. (Per-study entries truncated in this excerpt.)