Digital health apps have the potential to make health care more accessible to people of different age groups living with a wide range of health-related conditions. The National Health Service (NHS) in the United Kingdom is currently working to enhance the use of digital health technologies; these efforts were outlined in the NHS Long Term Plan [] and were accelerated by the COVID-19 pandemic []. Approximately 250 new digital health apps appear in app stores every day, and there were approximately 350,000 digital health apps on the market as of 2021 []. As digital health apps rise in popularity, so do the risks associated with their use. Digital health apps that are classified as “medical devices” [] in the United Kingdom are strictly regulated, whereas digital health apps that are classified as “wellness apps” are not subject to such regulation.
Several organizations produce guidelines and frameworks for the development and assessment of digital health apps. These include the International Organization for Standardization (ISO): Health and Wellness Apps—Quality and Reliability [] and the National Institute for Health and Care Excellence (NICE): Evidence Standards Framework []. The Digital Technology Assessment Criteria (DTAC) [] is an NHS-developed framework that provides criteria for the assessment of digital health apps. However, some of these guidelines may be open to interpretation, and a variety of frameworks are currently being used to assess the quality of digital health apps.
A scoping review published in 2023 [] examined the problems and barriers related to the use of digital health apps. It found that “validity,” “usability,” “technology,” “data privacy and security,” and “individuality” were addressed in several studies and are partly considered in quality assessment. “Use and adherence,” “patient-physician relationship,” “knowledge and skills,” “implementation,” and “costs” of digital health apps were rarely extensively studied. Furthermore, a systematic review published in 2021 [] found security challenges when developing digital health apps. These and other quality problems surrounding digital health apps may be mitigated by rigorously assessing their quality.
Digital health apps are complex, and the usual methods for assessing medicines (survival, quality of life, and cost), as recommended by NICE, may not be sufficient to monitor the broad spectrum of potential issues arising from their use. Therefore, more specific quality assessment requirements are needed. However, there is a lack of consensus on how best to achieve this.
In this umbrella review, we define quality as “compliance with best practice standards.” The objective of this umbrella review was to give a holistic summary of the current methods and “condition agnostic” frameworks that are broadly applicable to the quality assessment of all digital health apps. Because several review articles have been published on the quality assessment of digital health apps, or on aspects related to it, we conducted an umbrella review to provide a holistic view of how digital health apps are currently being assessed and where assessment practice can improve. This review can inform digital health researchers and assessment framework developers. We included systematic reviews, scoping reviews, rapid reviews, and narrative reviews.
For the systematic search of the literature, the PICOS (Population, Patient, or Problem; Intervention; Comparison; Outcomes; and Study Design; see ) and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodologies were used. The search was conducted in the Scopus, PubMed, ACM Digital Library, and IEEE Xplore databases with the objective of finding literature, systematic, or scoping review articles on the quality assessment of digital health apps. The search for review articles was conducted on January 26, 2024.
Textbox 1. PICOS (Population, Patient, or Problem; Intervention; Comparison; Outcomes; and Study Design) methodology for the systematic search of previous or related literature reviews.

Inclusion criteria
Problem: Review of quality assessment tools for digital health apps; full article; in English; published from 2018 to 2023.
Intervention: Review of quality assessment frameworks or criteria applicable to all digital health apps.
Comparator: Not applicable (N/A).
Outcome: Review articles regarding quality (or aspects of quality, eg, usability) assessment frameworks or criteria applicable to all digital health apps.
Study design: Search databases included Scopus, PubMed, ACM Digital Library, and IEEE Xplore. Search query for article title and abstract: ( ( mhealth OR ehealth OR m-health OR e-health OR “mobile health” OR “electronic health” OR “health app*” OR “medical health app*” OR “digital health app*” OR “digital health product*” OR “digital health intervention*” OR “digital health technolog*” OR “digital health solution*” ) AND ( assurance* OR assessment* OR evaluation* OR audit* OR framework* ) AND ( review* OR assessment* ) )

Exclusion criteria
Problem: Not a quality assessment of digital health apps; conference paper, book, or book chapter; not in English.
Intervention: Not a review of quality assessment frameworks or criteria.
Comparator: N/A.
Outcome: No information on quality assessment frameworks or criteria; does not focus on digital health apps; articles targeting specific user groups (eg, women or adolescents); focuses on a specific feature or category (condition area); frameworks that focus on user acceptance of technology.
Study design: N/A.

 presents the PRISMA checklist, and  presents the exact search queries that were used for each of the databases; an illustrative sketch of executing the title and abstract query is shown below.
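As an illustration of how the title and abstract query in Textbox 1 could be executed programmatically against one of the databases, the following sketch uses Biopython's Entrez module to run the Boolean string against PubMed. This is a hypothetical example for readers who wish to reproduce a similar search, not the procedure used in this review; field tags, wildcard handling, and date filters differ between databases, and the exact queries used for each database are reported separately.

```python
# Hypothetical sketch: running the Textbox 1 title/abstract query against PubMed
# via the NCBI Entrez API (Biopython). Not the exact query syntax used in this review.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI requires a contact email

query = (
    '(mhealth OR ehealth OR m-health OR e-health OR "mobile health" OR "electronic health" '
    'OR "health app*" OR "medical health app*" OR "digital health app*" '
    'OR "digital health product*" OR "digital health intervention*" '
    'OR "digital health technolog*" OR "digital health solution*") '
    'AND (assurance* OR assessment* OR evaluation* OR audit* OR framework*) '
    'AND (review* OR assessment*)'
)

# Restrict results to the 2018-2023 publication window used in the inclusion criteria.
handle = Entrez.esearch(db="pubmed", term=query, retmax=500,
                        datetype="pdat", mindate="2018", maxdate="2023")
record = Entrez.read(handle)
handle.close()

print("Records found:", record["Count"])
print("First PubMed IDs:", record["IdList"][:10])
```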
Study Screening

This study used the PICOS methodology to set the inclusion and exclusion criteria (Textbox 1). The review articles included in this study were initially screened by title and abstract. If a review article focused on the quality (or an aspect of quality) assessment of digital health apps, it was read in full. If the article met the inclusion criteria and none of the exclusion criteria (Textbox 1), it was included in the study. The systematic search followed PRISMA guidelines when screening the articles, with a step-by-step process set forth. The Rayyan tool was used to remove duplicate articles.
Critical Appraisal

Because 4 (27%) of the 15 review articles were systematic reviews, the Joanna Briggs Institute (JBI) critical appraisal of systematic reviews [] was used for those reviews (see ). This was done to assess the quality of the systematic reviews and to determine whether they should be included in this umbrella review.
Data Extraction

The characteristics of this study follow the applicable protocols for umbrella reviews from the JBI Manual for Evidence Synthesis []. The following information was extracted: author and year, objectives, total sample size, number of sources searched, date (year) range of searched and included studies, number of studies included, methods of analysis, and key findings (see ).
Data Synthesis

In this study, we did not synthesize results statistically. Instead, we narratively synthesized the key findings of each review because of the presumed heterogeneity of the included reviews.
Figure 1 shows the PRISMA process of selecting the review articles included in the study. The search queries used for each of the databases searched (Scopus, PubMed, ACM Digital Library, and IEEE Xplore) are available in . After duplicates were removed, the articles were screened by title and abstract and were retained only if they related to the quality assessment of digital health apps and did not meet the exclusion criteria outlined in Textbox 1. Afterward, 39 review articles were read in full, and 15 met the inclusion criteria and were included in this umbrella review. This review was not registered, and a review protocol was not prepared.
Mendeley (Elsevier) was used to manage all the included review articles (n=15) in the study. The 15 review articles were published between 2018 and 2023. Scoping reviews were the most common (n=6, 40%), followed by systematic reviews (n=4, 27%), narrative reviews (n=4, 27%), and a rapid review (n=1, 7%). depicts which criteria the review articles focused on.
Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) diagram of reviews selected for inclusion in the umbrella review.

The Assessment Criteria

The objective of this umbrella review was to give a holistic summary of current methods and frameworks used for the quality assessment of digital health apps. Table 1 presents the 4 (27%) of the 15 review articles that reported on assessment criteria for digital health apps. Of these 4 review articles, 2 (50%) outlined what was being assessed by frameworks [,] and 2 (50%) focused on what should be assessed [,]. The remaining 11 review articles included in this umbrella review either focused on a specific criterion (eg, usability) or the methods of assessment (eg, Likert scale), or referenced the criteria of a review article already accounted for (Woulfe et al [] references the Nouri et al [] criteria). Nouri et al [] stated that different articles defined assessment criteria differently. For example, usability was mentioned in 14 of the reviewed articles; one article considered usability a subclass of ease of use, whereas other articles placed ease of use under usability.
Table 1. Framework criteria used for the assessment of digital health apps.
Authors, year | Obtained by^a | Criteria (assessment domains)
Hensher et al, 2021 [] | Assessing frameworks | Clarity of purpose of the app
^a“Obtained by” states whether assessment criteria were identified by assessing assessment frameworks or were preset or developed by other means.
Hensher et al [] and Nouri et al [] identified criteria by assessing assessment frameworks. Moshi et al [] and Lagan et al [] had their own sets of criteria and compared assessment frameworks against those criteria. Moshi et al [] developed a checklist for the health technology assessment of mobile medical apps based on previous research [,]. Lagan et al [] used the previously proposed Mobile health Index and Navigation Database (MIND) [], which was developed with input from clinicians, patients, family members, researchers, and policy makers with the aim of providing clinically relevant criteria. Nouri et al [] stated that there may never be a complete set of assessment criteria for digital health apps because the apps to which the criteria apply are continuously changing and in development.
Data Privacy or Security

Thirteen (87%) of the 15 review articles [-,-] mentioned the data privacy or security of digital health apps. Three reviews [,,] found that the reviewed articles often considered privacy and security together when evaluating digital health apps. Nurgalieva et al [] highlighted that although privacy and security overlap, security relates to protection against unauthorized access to data, whereas privacy is an individual’s right to maintain control of their personal data. The review points out that focusing exclusively on security can lead to increased surveillance, which can introduce privacy risks. Furthermore, Nouri et al [] stated that interpretations of privacy, security, and safety considerations differed across quality assessment tools and methods.
Nurgalieva et al [] found that methods used to evaluate security are more technical, whereas methods used to evaluate privacy are more user oriented. Some of the review articles [,] pointed out that there appears to be greater effort to assess privacy. However, Muro-Culebras et al [] found a lack of assessment regarding developer transparency and policies on user data privacy and security. Lagan et al [] found that most of the examined assessment frameworks (43/79, 54%) included privacy-related questions. Hensher et al [] also found privacy and security to be frequently addressed in assessment frameworks.
Grundy [] discussed how regulations such as the General Data Protection Regulation (GDPR) [] rely on users’ knowledge and the “notice and consent” model. Grundy [] referenced a study published in 2020 [] that questions whether the GDPR is fit for purpose, as it assumes that users can know why and how their data are being collected and shared; the GDPR also assumes that individual app users can control how their personal data are processed. Grundy [] stated that the majority of digital health apps fail to provide assurances around privacy and security. Carmi et al [], on the other hand, focused on the interpretation of the GDPR for mobile health.
Galvin and DeMuro [] discussed how recent literature has shown that aggregated data previously considered deidentified can be reidentified. The review states that the storage and transmission of mobile health data remain a security concern. Moreover, the review points out that because of the lack of a privacy policy, or the complex language used in such policies, there may be a lack of informed consumer consent when using mobile health apps.
Clinical Assurance, Credibility of Information, or Evidence

Eleven (73%) of the 15 review articles [-,-,,,] included in this umbrella review mentioned clinical assurance, the credibility of information, or evidence for digital health apps. Hensher et al [] stated that in 2 studies included in their review, third-party sponsorships were deemed important because a possible conflict of interest between an app’s developers and its sponsors could affect the developers’ credibility. Moshi et al [] found that the credibility of information of mobile medical apps was assessed by 15 (33%) of 45 frameworks. Moreover, 27 (60%) of the 45 frameworks included questions about the mobile medical apps’ sources of information. None of the frameworks assessed the health impact of mobile medical apps that provide diagnostics or information, and the review concluded that none of the included frameworks met all the health technology assessment criteria (see —Moshi et al []) that were set forth. A total of 3 (7%) of the 45 frameworks specifically asked about randomized controlled trials. Lagan et al [], which expands on the work of Moshi et al [], showed that more frameworks now include questions around clinical foundation than in their previous article published in 2019 [], indicating increasing interest in clinical foundation assessment.
Muro-Culebras et al [] stated that many authors use their own personalized questionnaires, specifically designed to assess the characteristics of their particular digital health apps. The review article stated that because such questionnaires are personalized, they offer greater flexibility than generic tools; however, their lack of validation and reliability testing (such as interrater reliability) raises questions about their suitability for digital health app assessment. Muro-Culebras et al [] concluded that highly validated tools for the assessment of digital health apps remain a largely unexplored topic. Similarly, Grundy [] stated that there is a lack of measurement tool validation. Furthermore, users do not appear to have much awareness of the source or validity of the health information in a digital health app. Akbar et al [] found that digital health apps lack domain expert involvement in app content and have a poor evidence base and poor validation.
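To make the interrater reliability concern concrete, the short sketch below shows one common way of quantifying agreement between two assessors applying a dichotomous framework question to a set of apps (Cohen's kappa). The raters, apps, and ratings are hypothetical, and the choice of kappa is our illustration rather than a statistic reported by the reviewed articles.

```python
# Hypothetical sketch: interrater reliability (Cohen's kappa) for two assessors
# answering the same yes/no framework question about 8 apps (1 = criterion met).
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 0, 1, 0, 1, 1, 0]  # made-up ratings by assessor A
rater_b = [1, 0, 0, 1, 0, 1, 1, 1]  # made-up ratings by assessor B

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement; 1.0 = perfect
```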
User Experience, Value, Efficacy or Effectiveness, or Engagement

Eleven (73%) of the 15 review articles [-,-,,,] mentioned user experience, value, efficacy or effectiveness, or engagement. Hensher et al [] found that aspects of user experience were frequently assessed by the assessment frameworks. Moreover, the review stated that there is limited evidence in the literature on how to evaluate the value domain; the authors speculate that this could be due to the fast-evolving digital health app market and the subjectivity of value. The review also stated that studies demonstrating apps’ efficacy and value for money are not often undertaken. Moshi et al [] stated that all 45 frameworks included in their review assessed effectiveness to some extent; 11 of the 45 assessed user satisfaction, and 30 assessed the technical efficacy of mobile medical apps.
Maramba et al [] focused on methods of usability testing of eHealth applications. Questionnaires, task completion, the think-aloud protocol, interviews, heuristic testing, and focus groups were the most frequently used methods of assessment, whereas methods such as eye tracking were rarely used. The review concludes that more investigation is needed into assessing the usability of eHealth applications. Muro-Culebras et al [] found that, among the 8 frameworks assessed in their review, usability was commonly assessed together with engagement, aesthetics, or functionality. Moreover, the review found that 2 of the 8 frameworks had both a user assessor version and a professional assessor version. Woulfe et al [] point out that many digital health apps are not based on any behavior change theory and that, in many cases, effectiveness is inadequately assessed. They also address the possibility of using different assessment methodologies in high-, low-, and middle-income countries.
Lagan et al [] point out that subjective user experience may limit generalizability and standardization of frameworks. This is because assessment would reflect the experience of the assessor. The review further points out that although subjective in nature, information on user friendliness, visual appeal, and interface design may be of great interest to the user and a good predictor of user engagement.
Akbar et al [] stated that users should be involved in the usability testing of digital health apps. Their review indicates that consumers were able to identify many critical issues with digital health apps, such as incorrect information, inappropriate responses to their needs, gaps in features, and faults with alarms. Akbar et al [] suggest that involving consumers of digital health apps in usability testing will enable usability problems to be identified and resolved before the apps are made available to the public.
Grundy [] pointed out that assessment frameworks mainly focus on content quality and usability, with less attention given to design, security and privacy, functionality, user-perceived value, and ethical issues. Nouri et al [] stated that usability was treated differently by different articles; for example, one article considered usability a subclass of ease of use, whereas other articles placed ease of use under usability. Azad-Khaneghah et al [] found that many usability and quality rating scales are targeted at professionals. Moreover, the review found that the System Usability Scale (SUS) was the most widely used framework or scale (12/40 studies), mainly because of its simplicity. Similarly, Hajesmaeel-Gohari et al [] found that general questionnaires with fewer questions and higher reliability, such as the SUS, have been used more often. Furthermore, the review recommends using frameworks that, unlike the SUS, were specifically designed for mobile apps, such as the mHealth App Usability Questionnaire [].
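For readers unfamiliar with the SUS, it yields a single score from 0 to 100 based on 10 items rated on a 5-point scale, which is part of why it is considered simple. The sketch below shows the standard SUS scoring calculation using hypothetical responses; it is provided purely for illustration and is not drawn from any of the reviewed articles.

```python
# Illustrative sketch of standard SUS scoring for one hypothetical respondent.
# Odd-numbered items are positively worded (contribution = rating - 1);
# even-numbered items are negatively worded (contribution = 5 - rating);
# the 0-40 raw sum is multiplied by 2.5 to give a 0-100 score.
def sus_score(responses):
    assert len(responses) == 10, "SUS has exactly 10 items"
    raw = 0
    for item_number, rating in enumerate(responses, start=1):
        if item_number % 2 == 1:
            raw += rating - 1
        else:
            raw += 5 - rating
    return raw * 2.5

hypothetical_responses = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]  # ratings from 1 to 5
print(sus_score(hypothetical_responses))  # 85.0 for these made-up responses
```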
Safety

Safety (in different contexts) was mentioned by 9 (60%) of the 15 review articles [-,,-]. The safety concerns and risks of digital health apps can arise from different factors. Akbar et al [] elaborate on the safety concerns and risks associated with the use of digital health apps. The review found 67 safety concerns related to the quality of content, grouped into the following 5 categories: incorrect information, incomplete information, variation in content, incorrect output, and inappropriate response to consumer needs. Akbar et al [] found 13 safety concerns related to software functionality, grouped into 5 further categories: gaps in features, lack of validation for user input, delayed processing, response to health dangers, and faulty alarms. Akbar et al [] further discuss the consequences of safety concerns, for example, how one digital health app led to dangerous levels of alcohol consumption among a group of 341 students. Overall, many frameworks do not cover the criteria necessary to assess the quality of digital health apps [,]. Moshi et al [] consider safety part of the assessment criteria for digital health apps (see ).
Features or Functionality

Six (40%) of the 15 review articles [-,] considered features or functionality as criteria to be assessed, referred to as technical characteristics by Moshi et al []. Furthermore, Nurgalieva et al [] mention that feature assessment was common when assessing security and privacy. Lagan et al [] stated that features related to ease of use and visual appeal may be the most important drivers of user engagement with mental health apps. Moreover, “subjective questions” around user friendliness, visual appeal, and interface design, although difficult to standardize and assess, may be the greatest predictors of user engagement. Lagan et al [] and Hensher et al [] identified digital health apps’ features as part of the assessment criteria (). Moshi et al [] and Nouri et al [] included functionality and technical characteristics, respectively, as part of their assessment criteria for digital health apps. The terms “features,” “functionality,” and “technical characteristics” appear to overlap across the review articles.
Nouri et al [] stated that 2 (9%) of 23 review articles provided dynamic assessment criteria, meaning that the criteria were selected for an app based on its use case and features. They give the example of the criterion “accuracy of the calculations” being applied only when an app provides at least 1 calculation. Akbar et al [] stated that many digital health apps had gaps in features that inadequately supported user tasks; for example, teledermatology apps did not account for allergies or current medication status.
Cost

Cost as a criterion was mentioned by 1 (7%) of the 15 review articles []. Moshi et al [] stated that there may be a cost barrier to accessing mobile medical apps because they may contain in-app purchases or require a subscription. They further stated that only 1 (2%) of the 45 frameworks assessed cost-effectiveness (in terms of economic assessment), and 11 (24%) of the 45 reviewed cost in terms of the price to download or in-app purchases. Nine (60%) of the 15 review articles [,,,,,,,,] mentioned the cost of digital health apps, the cost of data breaches, or equipment costs, but not as a criterion for quality assessment.
Ethical or Legal Issues

Ethical or legal concepts were mentioned by 10 (67%) of the 15 review articles [-,,,,-]. Nurgalieva et al [] and Benjumea et al [] discuss ethical and legal concepts regarding data and data breaches. Nurgalieva et al [] noted that, in their review of the privacy and security of digital health apps, there was a considerable lack of discussion of privacy or security ethics in the reviewed articles (n=83). Grundy [] stated that quality assessment frameworks give less attention to ethical issues. The author also stated that legal compliance around content and intellectual property is an aspect of quality for commercial apps.
Moshi et al [] discuss health technology assessment criteria for mobile medical apps, including ethical and legal aspects. The review found that 4 (9%) of the 45 frameworks discussed legal aspects and 24 (53%) discussed ethics. Hensher et al [] also included ethics and legal aspects as part of their assessment criteria (see ). For example, in the reviews of both Moshi et al [] and Hensher et al [], ethical aspects included privacy policies, and legal aspects included mention of disclaimers. Nouri et al [] included ethical issues as part of their assessment criteria (see ). Akbar et al [] stated that health care professionals may hesitate to promote digital health apps partly because of legal issues.
Assessment Methods and Metrics

Hensher et al [] from Deakin University conducted a scoping review covering 2011 to April 2020 using search terms that were synonyms of “health apps,” “evaluation,” and “frameworks” []. This review examined 97 evaluation frameworks and studies, which included general digital health app evaluation frameworks, such as the Mobile App Rating Scale (MARS), and more domain-specific frameworks, such as the SUS and the Software Usability Measurement Inventory (SUMI).
The scoring and rating techniques varied across the frameworks: 23% of frameworks used a 5-point scale, 6% a 3-point scale, 3% a 7-point scale, 2% a 4-point scale, and 1% a 10-point scale, while 24% did not elaborate on the scaling system, 20% used a mixed approach, 13% were dichotomous, and 8% did not use a scaling system []. The frameworks’ scoring modalities also varied: 37% did not report a score, 23% used a mean score, 13% used a total sum, 11% used mixed approaches, 9% used other approaches, and 6% did not use scoring at all []. In Nurgalieva et al [], the evaluation of self-declared data from app developers appears to have been the most common privacy assessment method.
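To illustrate the practical difference between two of the reported scoring modalities (a mean score vs a total sum), the sketch below applies both to the same set of hypothetical 5-point domain ratings for a single app. The domain names and ratings are invented for illustration and do not correspond to any specific framework's items.

```python
# Hypothetical sketch: the same 5-point domain ratings summarized with two of the
# scoring modalities reported by Hensher et al (mean score vs total sum).
item_ratings = {
    "engagement": 4,
    "functionality": 5,
    "aesthetics": 3,
    "information": 4,
}

mean_score = sum(item_ratings.values()) / len(item_ratings)  # mean on the 1-5 scale
total_sum = sum(item_ratings.values())                       # total-sum modality

print(f"Mean score (1-5): {mean_score:.2f}")  # 4.00
print(f"Total sum: {total_sum}")              # 16
```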
Hensher et al [] examined the domains or criteria needed to evaluate digital health apps and found that user experience, together with information validity, was the most frequently evaluated criterion. However, this scoping review included frameworks such as the SUS and SUMI, which are designed to evaluate usability in general and are not tailored to digital health apps. In their “count,” if a framework evaluated any aspect of user experience (UX), it was considered a scale for evaluating UX; usability is only one aspect of UX.
Lagan et al [] found that most evaluation frameworks for health apps were concerned with evidence, clinical foundation, and privacy. This study suggests that it is unclear whether engagement has been adequately predicted with the existing frameworks. The study also suggests that a balance between objective and subjective questions is a challenge for evaluation frameworks.
This umbrella review included 15 review articles that were obtained via a systematic search of the literature (see and ). The objective of this review was to give a holistic summary of current methods and frameworks used for the quality assessment of digital health apps. Frameworks appear to be a common way of assessing digital health apps. Four (27%) of the 15 review articles tried to establish appropriate assessment criteria for digital health apps. Two (50%) of these 4 review articles [,] reviewed frameworks for digital health app assessment and derived their criteria from them, and the other 2 (50%) [,] had predefined criteria based on previous research in the area (see ). Across the review articles, there was a lack of discussion regarding the unethical use of dark patterns, defined by the UX dictionary [] as “deceptive design patterns used to mislead users to make them do something they would not do on their own. They are primarily used to generate sales, increase subscriptions, and hit target business numbers.” There was also no mention of equality, diversity, and inclusion.
MARS was the most frequently used framework for quality assessment according to 2 review articles [,]. Across the review articles included in this study, 13 (87%) of the 15 included data privacy or security as a criterion, making it the most common criterion. The least mentioned criterion was cost, with 1 (7%) of the 15 review articles mentioning it (see ). Having an overarching assessment framework would reduce the need to apply several separate frameworks when determining the quality of a digital health app. However, when using frameworks for the quality assessment of digital health apps, it is important to remember their various limitations. For example:
- A framework can contain many questions that sometimes need to be answered in a specific order; hence, using it may require training.
- A framework may not “capture” all the aspects of a product or system that are necessary for evaluation.
- A framework may be better at “capturing” one aspect of a product or system than another, for example, evaluating ease of use effectively but lacking in clinical assurance.
- Evaluation is never perfect; frameworks rely on good interrater and intrarater reliability.
- Frameworks are limited by their domain or condition area, meaning a framework may be suitable for assessing all digital health apps (such as Enlight []) but may not pick up on quality issues that are specific to a condition area, such as insulin intake for diabetes apps.

It can be speculated that health condition-specific frameworks may provide a more accurate view of quality because specific features may be required of digital health apps that target a specific health condition. A framework for a specific health condition may, for example, include questions about insulin intake for diabetic users of an app. A generic, all-encompassing framework may instead include questions such as “Did the app development involve a relevant health expert?” which can only indicate, not verify, whether the necessary features are included.
Woulfe et al [] point out that Enlight [] is a far-reaching and comprehensive framework for the assessment of digital health apps. However, Enlight includes more questions than other generic frameworks used for the assessment of digital health apps; hence, using it may take more time, which may curtail its use. Nevertheless, the use of such frameworks may help mitigate a variety of problems with digital health apps [,], whereas using inadequate assessment frameworks may lead to overlooking flaws associated with the use of digital health apps. Hence, choosing a framework that adheres to all or most of the criteria mentioned in , such as DTAC [] or Enlight [], should enable the selection of a good quality app, despite such frameworks not being focused on a specific condition area.
When developing new quality assessment frameworks, it may be helpful for the assessor to assess related criteria together; however, this can also create confusion. For example, Nurgalieva et al [] pointed out that focusing exclusively on security can lead to increased surveillance, which can introduce privacy risks. It can be speculated that merging criteria may create the impression that if one criterion, such as security, is being assessed, then privacy is also being assessed, leading to the omission of questions or areas of assessment where security and privacy may be in conflict. Hence, caution is needed when merging criteria into one in a framework. Textbox 2 provides a list of recommendations to improve the development of quality assessment frameworks.
Textbox 2. Recommendations to improve the development of quality assessment frameworks.
- Standardize the definitions of the criteria used for the quality assessment of digital health apps.
- When choosing any framework, ensure that it has been validated regarding its content and interrater reliability.
- Choosing a framework that adheres to all or most of the criteria mentioned in  should enable the selection of a good quality app.
- Ensure that improving compliance with one criterion does not undermine another; for example, an increase in security should not undermine data privacy.
- Ensure that when frameworks combine multiple criteria into one, for example, data privacy and security, no questions related to the original criteria are omitted.
- Place more focus on the unethical use of dark patterns in app design.
- Include criteria related to equality, diversity, and inclusion to ensure that digital health apps are widely accessible to different groups of people.
- Choosing a framework for a specific health condition may allow for the assessment of specific or necessary features that would not be covered by a generic framework.
- Ensure that third-party sponsorships do not lead to a conflict of interest between an app’s developers and its sponsors, which would affect the developers’ credibility.
- Ensure that the language in the privacy policy is easily understandable. Frameworks should point out when an app’s privacy policy contains language that is unnecessarily unclear or vague.
- Specify the context in which safety concerns are being assessed to reduce confusion, for example, safety related to data privacy or evidence for clinical assurance.

Limitations

This review is based on 15 review articles, most commonly scoping reviews (n=6, 40%), followed by systematic reviews (n=4, 27%), narrative reviews (n=4, 27%), and a rapid review (n=1, 7%). Using different search queries and searching a wider range of publication dates could have yielded more results. The screening of articles was conducted by 1 coauthor. This review only included review articles that contained the word “review” in the title. Any review about the quality of digital health apps that was condition specific (eg, diabetes) was excluded from this umbrella review.
Conclusions

The majority of frameworks do not meet all the criteria identified from the reviewed articles. Safety concerns associated with the use of digital health apps may be mitigated by the use of quality frameworks. Some criteria for the assessment of digital health apps may conflict with each other; for example, focusing too heavily on security may lead to privacy concerns. Research indicates that subjective questions, although difficult to standardize, may be the most useful when assessing engagement.
The study discussed in this paper was conducted as part of a PhD Co-operative Awards in Science and Technology (CAST) award, with funding from the Department for the Economy in Northern Ireland and ORCHA (the Organisation for the Review of Care and Health Applications) in the United Kingdom.
None declared.
Edited by A Mavragani; submitted 20.03.24; peer-reviewed by YC Foong, FA Causio; comments to author 07.06.24; revised version received 23.07.24; accepted 25.07.24; published 10.10.24.
©Maciej Marek Zych, Raymond Bond, Maurice Mulvenna, Jorge Martinez Carracedo, Lu Bai, Simon Leigh. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 10.10.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.