Journal ArticleDOI

Annual Research Review: Digital health interventions for children and young people with mental health problems – a systematic and meta‐review

TL;DR: The findings provide some support for the clinical benefit of DHIs, particularly computerised cognitive behavioural therapy (cCBT), for depression and anxiety in adolescents and young adults.
Abstract: Digital health interventions (DHIs), including computer-assisted therapy, smartphone apps and wearable technologies, are heralded as having enormous potential to improve uptake and accessibility, efficiency, clinical effectiveness and personalisation of mental health interventions. It is generally assumed that DHIs will be preferred by children and young people (CYP) given their ubiquitous digital activity. However, it remains uncertain whether DHIs for CYP are clinically and cost-effective, whether CYP prefer DHIs to traditional services, whether DHIs widen access, and how DHIs should be evaluated and adopted by mental health services. This review evaluates the evidence-base for DHIs and considers the key research questions and approaches to evaluation and implementation. We conducted a meta-review of scoping, narrative, systematic or meta-analytical reviews investigating the effectiveness of DHIs for mental health problems in CYP. We also updated a systematic review of randomised controlled trials (RCTs) of DHIs for CYP published in the last 3 years. Twenty-one reviews were included in the meta-review. The findings provide some support for the clinical benefit of DHIs, particularly computerised cognitive behavioural therapy (cCBT), for depression and anxiety in adolescents and young adults. The systematic review identified 30 new RCTs evaluating DHIs for attention deficit/hyperactivity disorder (ADHD), autism, anxiety, depression, psychosis, eating disorders and PTSD. The benefits of DHIs in managing ADHD, autism, psychosis and eating disorders are uncertain, and evidence is lacking regarding the cost-effectiveness of DHIs. Key methodological limitations make it difficult to draw definitive conclusions from existing clinical trials of DHIs. Issues include variable uptake and engagement with DHIs, lack of an agreed typology/taxonomy for DHIs, small sample sizes, lack of blinded outcome assessment, combining different comparators, short-term follow-up and poor specification of the level of human support. Research and practice recommendations are presented that address the key research questions and methodological issues for the evaluation and clinical implementation of DHIs for CYP.
Citations
Journal ArticleDOI
TL;DR: There is currently insufficient research evidence to support the effectiveness of apps for children, preadolescents, and adolescents with mental health problems, and methodologically robust research studies evaluating their safety, efficacy, and effectiveness are urgently needed.
Abstract: Background: There are an increasing number of mobile apps available for adolescents with mental health problems and an increasing interest in assimilating mobile health (mHealth) into mental health services. Despite the growing number of apps available, the evidence base for their efficacy is unclear. Objective: This review aimed to systematically appraise the available research evidence on the efficacy and acceptability of mobile apps for mental health in children and adolescents younger than 18 years. Methods: The following were systematically searched for relevant publications between January 2008 and July 2016: APA PsychNet, ACM Digital Library, Cochrane Library, Community Care Inform-Children, EMBASE, Google Scholar, PubMed, Scopus, Social Policy and Practice, Web of Science, Journal of Medical Internet Research, Cyberpsychology, Behavior and Social Networking, and OpenGrey. Abstracts were included if they described mental health apps (targeting depression, bipolar disorder, anxiety disorders, self-harm, suicide prevention, conduct disorder, eating disorders and body image issues, schizophrenia, psychosis, and insomnia) for mobile devices and for use by adolescents younger than 18 years. Results: A total of 24 publications met the inclusion criteria. These described 15 apps, two of which were available to download. Two small randomized trials and one case study failed to demonstrate a significant effect of three apps on intended mental health outcomes. Articles that analyzed the content of six apps for children and adolescents that were available to download established that none had undergone any research evaluation. Feasibility outcomes suggest acceptability of apps was good and app usage was moderate. Conclusions: Overall, there is currently insufficient research evidence to support the effectiveness of apps for children, preadolescents, and adolescents with mental health problems. Given the number and pace at which mHealth apps are being released on app stores, methodologically robust research studies evaluating their safety, efficacy, and effectiveness are urgently needed. [J Med Internet Res 2017;19(5):e176]

306 citations


Cites background from "Annual Research Review: Digital hea..."

  • ...As with these other technology-based interventions, using mHealth apps with support from a therapist offers one strategy for increasing longer-term engagement [24,58]....

  • ...It is important to also note that although adolescents may have positive attitudes toward mHealth, it does not necessarily mean they would prefer it over a face-to-face intervention [24]....

  • ...Two systematic reviews exploring the evidence for digital health interventions (including computerized CBT, mobile phone apps, and wearable technologies) for children and young people with mental health problems in 2014 and 2016 [6,23,24] identified randomized controlled trials (RCTs) for only two apps (Mobiletype and FindMe)....

  • ...Although important additions to the literature, the systematic reviews only included RCTs and so did not include feasibility studies providing information on acceptability [6,23,24]....

Journal ArticleDOI
02 Dec 2019
TL;DR: Although some trials showed potential of apps targeting mental health symptoms, using smartphone apps as standalone psychological interventions cannot be recommended based on the current level of evidence.
Abstract: While smartphone usage is ubiquitous, and the app market for smartphone apps targeted at mental health is growing rapidly, the evidence of standalone apps for treating mental health symptoms is still unclear. This meta-analysis investigated the efficacy of standalone smartphone apps for mental health. A comprehensive literature search was conducted in February 2018 on randomized controlled trials investigating the effects of standalone apps for mental health in adults with heightened symptom severity, compared to a control group. A random-effects model was employed. When insufficient comparisons were available, data was presented in a narrative synthesis. Outcomes included assessments of mental health disorder symptom severity specifically targeted at by the app. In total, 5945 records were identified and 165 full-text articles were screened for inclusion by two independent researchers. Nineteen trials with 3681 participants were included in the analysis: depression (k = 6), anxiety (k = 4), substance use (k = 5), self-injurious thoughts and behaviors (k = 4), PTSD (k = 2), and sleep problems (k = 2). Effects on depression (Hedges’ g = 0.33, 95%CI 0.10–0.57, P = 0.005, NNT = 5.43, I2 = 59%) and on smoking behavior (g = 0.39, 95%CI 0.21–0.57, NNT = 4.59, P ≤ 0.001, I2 = 0%) were significant. No significant pooled effects were found for anxiety, suicidal ideation, self-injury, or alcohol use (g = −0.14 to 0.18). Effect sizes for single trials ranged from g = −0.05 to 0.14 for PTSD and g = 0.72 to 0.84 for insomnia. Although some trials showed potential of apps targeting mental health symptoms, using smartphone apps as standalone psychological interventions cannot be recommended based on the current level of evidence.

248 citations
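The pooled effects quoted above (Hedges' g with 95% CIs, NNTs and I²) come from a random-effects model. As an illustration of the arithmetic behind such figures, here is a minimal Python sketch using the DerSimonian–Laird estimator and the Kraemer–Kupfer conversion from an effect size to a number needed to treat; the per-trial effect sizes and variances are invented placeholders, not data from this meta-analysis, and the review does not state which NNT conversion it used.

```python
import math
from statistics import NormalDist

# Hypothetical per-trial Hedges' g values and variances (placeholders only,
# not the trials analysed in the meta-analysis above).
effects = [0.45, 0.20, 0.38, 0.15, 0.50, 0.28]
variances = [0.04, 0.03, 0.05, 0.02, 0.06, 0.03]

# Inverse-variance (fixed-effect) weights and Cochran's Q
w = [1.0 / v for v in variances]
fixed_mean = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
q = sum(wi * (g - fixed_mean) ** 2 for wi, g in zip(w, effects))
df = len(effects) - 1

# DerSimonian-Laird estimate of between-study variance tau^2
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled Hedges' g with 95% CI
w_re = [1.0 / (v + tau2) for v in variances]
g_pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
ci_low, ci_high = g_pooled - 1.96 * se, g_pooled + 1.96 * se

# I^2: percentage of total variability attributable to between-study heterogeneity
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

def nnt_from_g(g):
    """Kraemer-Kupfer conversion of a standardized mean difference to a number needed to treat."""
    auc = NormalDist().cdf(g / math.sqrt(2))
    return 1.0 / (2.0 * auc - 1.0)

print(f"pooled g = {g_pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), "
      f"I2 = {i2:.0f}%, NNT = {nnt_from_g(g_pooled):.1f}")
```

For reference, this conversion gives an NNT of roughly 5.4 for g = 0.33, close to the depression NNT reported above.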

19 Aug 2015
TL;DR: In this paper, the authors conducted a systematic review of mindfulness-based iPhone mobile apps, evaluated their quality using a recently developed expert rating scale, the Mobile Application Rating Scale (MARS), and described the features of selected high-quality mindfulness apps.
Abstract: Background There is growing evidence for the positive impact of mindfulness on wellbeing. Mindfulness-based mobile apps may have potential as an alternative delivery medium for training. While there are hundreds of such apps, there is little information on their quality. Objective This study aimed to conduct a systematic review of mindfulness-based iPhone mobile apps and to evaluate their quality using a recently-developed expert rating scale, the Mobile Application Rating Scale (MARS). It also aimed to describe features of selected high-quality mindfulness apps. Methods A search for “mindfulness” was conducted in iTunes and Google Apps Marketplace. Apps that provided mindfulness training and education were included. Those containing only reminders, timers or guided meditation tracks were excluded. An expert rater reviewed and rated app quality using the MARS engagement, functionality, visual aesthetics, information quality and subjective quality subscales. A second rater provided MARS ratings on 30% of the apps for inter-rater reliability purposes. Results The “mindfulness” search identified 700 apps. However, 94 were duplicates, 6 were not accessible and 40 were not in English. Of the remaining 560, 23 apps met inclusion criteria and were reviewed. The median MARS score was 3.2 (out of 5.0), which exceeded the minimum acceptable score (3.0). The Headspace app had the highest average score (4.0), followed by Smiling Mind (3.7), iMindfulness (3.5) and Mindfulness Daily (3.5). There was a high level of inter-rater reliability between the two MARS raters. Conclusions Though many apps claim to be mindfulness-related, most were guided meditation apps, timers, or reminders. Very few had high ratings on the MARS subscales of visual aesthetics, engagement, functionality or information quality. Little evidence is available on the efficacy of the apps in developing mindfulness.

238 citations
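For context, a MARS app-quality score such as the medians above is typically the mean of the four objective subscales (engagement, functionality, aesthetics, information quality), each of which is itself the mean of items rated 1 to 5, with subjective quality reported separately. The sketch below shows that arithmetic on invented item ratings; the numbers are placeholders, not ratings from this review.

```python
# Hypothetical MARS item ratings (1-5) for a single app (placeholder values only).
ratings = {
    "engagement":    [4, 3, 4, 3, 3],
    "functionality": [5, 4, 4, 4],
    "aesthetics":    [4, 4, 3],
    "information":   [3, 4, 3, 4, 3, 3, 4],
}

def mars_quality_score(item_ratings):
    """Mean of the objective subscale means; each subscale is the mean of its items."""
    subscale_means = {name: sum(items) / len(items) for name, items in item_ratings.items()}
    overall = sum(subscale_means.values()) / len(subscale_means)
    return subscale_means, overall

subscales, overall = mars_quality_score(ratings)
for name, mean in subscales.items():
    print(f"{name}: {mean:.2f}")
# Compare against the minimum acceptable score of 3.0 mentioned above.
print(f"overall MARS quality score: {overall:.2f}")
```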

Journal ArticleDOI
09 May 2018
TL;DR: mHealth apps need to be evaluated in more robust RCTs that report between-group differences before becoming prescribable, and systematic reviews should incorporate sensitivity analyses of trials with high risk of bias to better summarize the evidence.
Abstract: Mobile health apps aimed towards patients are an emerging field of mHealth. Their potential for improving self-management of chronic conditions is significant. Here, we propose a concept of “prescribable” mHealth apps, defined as apps that are currently available, proven effective, and preferably stand-alone, i.e., that do not require dedicated central servers and continuous monitoring by medical professionals. Our objectives were to conduct an overview of systematic reviews to identify such apps, assess the evidence of their effectiveness, and to determine the gaps and limitations in mHealth app research. We searched four databases from 2008 onwards and the Journal of Medical Internet Research for systematic reviews of randomized controlled trials (RCTs) of stand-alone health apps. We identified 6 systematic reviews including 23 RCTs evaluating 22 available apps that mostly addressed diabetes, mental health and obesity. Most trials were pilots with small sample size and of short duration. Risk of bias of the included reviews and trials was high. Eleven of the 23 trials showed a meaningful effect on health or surrogate outcomes attributable to apps. In conclusion, we identified only a small number of currently available stand-alone apps that have been evaluated in RCTs. The overall low quality of the evidence of effectiveness greatly limits the prescribability of health apps. mHealth apps need to be evaluated by more robust RCTs that report between-group differences before becoming prescribable. Systematic reviews should incorporate sensitivity analysis of trials with high risk of bias to better summarize the evidence, and should adhere to the relevant reporting guideline.

193 citations

Journal ArticleDOI
TL;DR: There is an urgent need for agreement on appropriate standards, principles and practices in the research and evaluation of mHealth tools; leaders in mHealth research, industry and health care systems from around the globe seek here to promote consensus on implementing these standards and principles.

190 citations

References
Journal ArticleDOI
TL;DR: This study’s findings can provide practical guidelines to steer partnership programs within the academic and clinical bodies, with the aim of providing a collaborative partnership approach to clinical education.
Abstract: The aim of our systematic review was to retrieve and integrate relevant evidence related to the process of formation and implementation of the academic–service partnership, with the aim of reformin...

41,134 citations

Journal ArticleDOI

12,729 citations

Journal ArticleDOI
TL;DR: A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed that consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews.
Abstract: Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus. A 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity. The factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity. A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.

3,583 citations


"Annual Research Review: Digital hea..." refers methods in this paper

  • ...The AMSTAR tool was used to assess the methodological quality of systematic reviews and meta-analyses included in the meta-review (Shea et al., 2007)....

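As the excerpt above notes, the meta-review scored each included review against the 11 AMSTAR items. A minimal sketch of how such a checklist appraisal can be tallied is shown below; the item labels are abbreviated and the example answers are invented, not the meta-review's actual ratings.

```python
# Abbreviated labels for the 11 AMSTAR items; answers are typically
# "yes", "no", "can't answer" or "not applicable".
AMSTAR_ITEMS = [
    "a priori design",
    "duplicate study selection and data extraction",
    "comprehensive literature search",
    "publication status used as an inclusion criterion",
    "list of included and excluded studies provided",
    "characteristics of included studies provided",
    "scientific quality assessed and documented",
    "scientific quality used in formulating conclusions",
    "appropriate methods used to combine findings",
    "likelihood of publication bias assessed",
    "conflicts of interest stated",
]

def amstar_score(responses):
    """Count of 'yes' answers out of 11; higher totals indicate better methodological quality."""
    assert len(responses) == len(AMSTAR_ITEMS)
    return sum(1 for answer in responses if answer == "yes")

# Hypothetical appraisal of one review (placeholder answers only).
example = ["yes", "yes", "yes", "no", "no", "yes", "yes", "yes", "yes", "no", "yes"]
print(f"AMSTAR score: {amstar_score(example)}/{len(AMSTAR_ITEMS)}")
```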

Journal ArticleDOI
TL;DR: These findings demonstrate the feasibility of developing standardized definitions of BCTs included in behavioral interventions and highlight problematic variability in the reporting of intervention content.
Abstract: Objective: Without standardized definitions of the techniques included in behavior change interventions, it is difficult to faithfully replicate effective interventions and challenging to identify techniques contributing to effectiveness across interventions. This research aimed to develop and test a theory-linked taxonomy of generally applicable behavior change techniques (BCTs). Design: Twenty-six BCTs were defined. Two psychologists used a 5-page coding manual to independently judge the presence or absence of each technique in published intervention descriptions and in intervention manuals. Results: Three systematic reviews yielded 195 published descriptions. Across 78 reliability tests (i.e., 26 techniques applied to 3 reviews), the average kappa per technique was 0.79, with 93% of judgments being agreements. Interventions were found to vary widely in the range and type of techniques used, even when targeting the same behavior among similar participants. The average agreement for intervention manuals was 85%, and a comparison of BCTs identified in 13 manuals and 13 published articles describing the same interventions generated a technique correspondence rate of 74%, with most mismatches (73%) arising from identification of a technique in the manual but not in the article. Conclusions: These findings demonstrate the feasibility of developing standardized definitions of BCTs included in behavioral interventions and highlight problematic variability in the reporting of intervention content.

2,321 citations


"Annual Research Review: Digital hea..." refers methods in this paper

  • ...An agreed working taxonomy of digital mental health interventions, similar to that developed for behaviour change interventions (the Behaviour Change Technique/BCT Taxonomy Project; Abraham & Michie, 2008), is required to enable interventions to be appropriately categorised and analysed....

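The reliability figures in the abstract above (average kappa of 0.79, 93% agreement) summarise how consistently two coders judged the presence or absence of each technique. Below is a minimal sketch of Cohen's kappa, which adjusts raw agreement for agreement expected by chance; the example codings are invented, not the study's data.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning categorical codes to the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

# Hypothetical presence (1) / absence (0) judgements for one behaviour change
# technique across ten intervention descriptions (placeholders only).
psychologist_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
psychologist_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"kappa = {cohens_kappa(psychologist_1, psychologist_2):.2f}")  # 0.80 for this example
```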

Journal ArticleDOI
TL;DR: The findings suggest that mental disorders affect a significant number of children and adolescents worldwide, and the pooled prevalence estimates and identification of sources of heterogeneity have important implications for service, training, and research planning around the world.
Abstract: Background The literature on the prevalence of mental disorders affecting children and adolescents has expanded significantly over the last three decades around the world. Despite the field having matured significantly, there has been no meta-analysis to calculate a worldwide-pooled prevalence and to empirically assess the sources of heterogeneity of estimates. Methods We conducted a systematic review of the literature searching in PubMed, PsycINFO, and EMBASE for prevalence studies of mental disorders investigating probabilistic community samples of children and adolescents with standardized assessment methods that derive diagnoses according to the DSM or ICD. Meta-analytical techniques were used to estimate the prevalence rates of any mental disorder and individual diagnostic groups. A meta-regression analysis was performed to estimate the effect of population and sample characteristics, study methods, assessment procedures, and case definition in determining the heterogeneity of estimates. Results We included 41 studies conducted in 27 countries from every world region. The worldwide-pooled prevalence of mental disorders was 13.4% (CI 95% 11.3–15.9). The worldwide prevalence of any anxiety disorder was 6.5% (CI 95% 4.7–9.1), any depressive disorder was 2.6% (CI 95% 1.7–3.9), attention-deficit hyperactivity disorder was 3.4% (CI 95% 2.6–4.5), and any disruptive disorder was 5.7% (CI 95% 4.0–8.1). Significant heterogeneity was detected for all pooled estimates. The multivariate meta-regression analyses indicated that sample representativeness, sample frame, and diagnostic interview were significant moderators of prevalence estimates. Estimates did not vary as a function of geographic location of studies and year of data collection. The multivariate model explained 88.89% of prevalence heterogeneity, but residual heterogeneity was still significant. Additional meta-analysis detected a significant pooled difference in prevalence rates according to the requirement of functional impairment for the diagnosis of mental disorders. Conclusions Our findings suggest that mental disorders affect a significant number of children and adolescents worldwide. The pooled prevalence estimates and the identification of sources of heterogeneity have important implications for service, training, and research planning around the world.

2,219 citations
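Worldwide-pooled prevalence figures like those above are typically produced by transforming each study's proportion (for example to the logit scale), pooling the transformed values with a random-effects model, and back-transforming the result. The sketch below illustrates this with the DerSimonian–Laird estimator on invented study counts; it is a simplified outline, not the analysis reported in the paper.

```python
import math

# Hypothetical (cases, sample size) pairs from community surveys (placeholders only).
studies = [(120, 900), (85, 700), (210, 1500), (60, 450), (140, 1100)]

# Logit-transform each prevalence; the approximate variance on the logit
# scale is 1/cases + 1/non-cases.
logits, variances = [], []
for cases, n in studies:
    p = cases / n
    logits.append(math.log(p / (1 - p)))
    variances.append(1 / cases + 1 / (n - cases))

# DerSimonian-Laird random-effects pooling on the logit scale
w = [1 / v for v in variances]
fixed = sum(wi * y for wi, y in zip(w, logits)) / sum(w)
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, logits))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * y for wi, y in zip(w_re, logits)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

def inv_logit(x):
    """Back-transform a logit to a proportion."""
    return 1 / (1 + math.exp(-x))

low, high = inv_logit(pooled - 1.96 * se), inv_logit(pooled + 1.96 * se)
print(f"pooled prevalence = {inv_logit(pooled):.1%} (95% CI {low:.1%} to {high:.1%})")
```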