Mobile App Rating Scale: A New Tool for Assessing the Quality of Health Mobile Apps
Stoyan Stoyanov, Leanne Hides, David J. Kavanagh, Oksana Zelenko, Dian Tjondronegoro, Madhavan Mani +5 more
TLDR
The MARS is a simple, objective, and reliable tool for classifying and assessing the quality of mobile health apps, and can also serve as a checklist for the design and development of new high-quality health apps.
Abstract
Background: The use of mobile apps for health and well-being promotion has grown exponentially in recent years, yet there is currently no app-quality assessment tool beyond “star” ratings. Objective: The objective of this study was to develop a reliable, multidimensional measure for trialling, classifying, and rating the quality of mobile health apps. Methods: A literature search was conducted to identify articles containing explicit Web or app quality rating criteria published between January 2000 and January 2013. Existing criteria for the assessment of app quality were categorized by an expert panel to develop the new Mobile App Rating Scale (MARS) subscales, items, descriptors, and anchors. Sixty well-being apps were randomly selected via an iTunes search for MARS rating; 10 were used to pilot the rating procedure, and the remaining 50 provided data on interrater reliability. Results: A total of 372 explicit criteria for assessing Web or app quality were extracted from 25 published papers, conference proceedings, and Internet resources. Five broad categories of criteria were identified: four objective quality scales (engagement, functionality, aesthetics, and information quality) and one subjective quality scale. These were refined into the 23-item MARS. The MARS demonstrated excellent internal consistency (alpha = .90) and interrater reliability (intraclass correlation coefficient, ICC = .79). Conclusions: The MARS is a simple, objective, and reliable tool for classifying and assessing the quality of mobile health apps. It can also be used to provide a checklist for the design and development of new high-quality health apps.
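The abstract reports two reliability statistics: internal consistency (Cronbach alpha = .90) across the MARS items, and interrater reliability (ICC = .79) across raters. As a rough illustration of how such figures are computed, here is a minimal sketch in Python with NumPy. The ratings matrix is the classic 6-target-by-4-judge example from Shrout and Fleiss, used purely for illustration; it is not the MARS study's data, and the paper does not specify which ICC form the authors used (ICC(2,1), a common choice for interrater reliability, is shown here).

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency for an (n_subjects, k_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    for an (n_targets, k_raters) ratings matrix (Shrout & Fleiss form)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                     # between-targets mean square
    msc = ss_cols / (k - 1)                     # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))          # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative data only: the classic 6-target x 4-judge matrix from
# Shrout & Fleiss (1979) -- NOT the MARS study's ratings.
ratings = np.array([
    [9, 2, 5, 8],
    [6, 1, 3, 2],
    [8, 4, 6, 8],
    [7, 1, 2, 6],
    [10, 5, 6, 9],
    [6, 2, 4, 7],
], dtype=float)

print(round(icc_2_1(ratings), 2))          # 0.29 for this example matrix
print(round(cronbach_alpha(ratings), 2))   # 0.91 for this example matrix
```

The same functions applied to the study's 50-app rating data (targets = apps, raters = the two reviewers; items = the 23 MARS items) would yield the reported statistics.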
Citations
Journal Article
Development and Validation of the User Version of the Mobile Application Rating Scale (uMARS)
TL;DR: In this paper, the authors describe the development and reliability testing of an end-user version of the Mobile Application Rating Scale (uMARS), which is a simple tool that can be reliably used by end-users to assess the quality of mobile health apps.
Journal Article
Objective User Engagement With Mental Health Apps: Systematic Search and Panel-Based Usage Analysis.
TL;DR: The pattern of daily use presented a descriptive peak toward the evening for apps incorporating most techniques (tracker, psychoeducation, and peer support) except mindfulness/meditation, which exhibited two peaks (morning and night).
Journal Article
Review and Evaluation of Mindfulness-Based iPhone Apps.
TL;DR: Though many apps claim to be mindfulness-related, most were guided meditation apps, timers, or reminders and very few had high ratings on the MARS subscales of visual aesthetics, engagement, functionality or information quality.
Journal Article
Mobile Health Apps to Facilitate Self-Care: A Qualitative Study of User Experiences.
TL;DR: A qualitative exploration of how health consumers use apps for health monitoring, their perceived benefits from use of health apps, and suggestions for improvement of health apps is presented.
Journal Article
A review and content analysis of engagement, functionality, aesthetics, information quality, and change techniques in the most popular commercial apps for weight management
TL;DR: Popular apps assessed have overall moderate quality and include behavioural tracking features and a range of change techniques associated with behaviour change, although more attention to information quality and evidence-based content is warranted to improve their quality.
References
Journal Article
Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement
TL;DR: Moher et al. introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Journal Article
Intraclass correlations: uses in assessing rater reliability.
TL;DR: Guidelines are presented for choosing among six different forms of the intraclass correlation for reliability studies in which n targets are rated by k judges, and confidence intervals for each form are reviewed.
Book
Questionnaire Design, Interviewing and Attitude Measurement
TL;DR: The second edition of Dr Bram Oppenheim's established work, like the first, is a practical teaching text on survey methods, covering interviewing (both clipboard and depth interviewing), sampling and research design, data analysis, and a special chapter on pilot work.