Journal ISSN: 1362-0347

Evidence-based Mental Health 

BMJ
About: Evidence-based Mental Health is an academic journal published by BMJ. The journal publishes mainly in the areas of bipolar disorder and depression (differential diagnoses). It has the ISSN identifier 1362-0347. Over its lifetime, the journal has published 2150 papers, which have received 14908 citations. The journal is also known as EBMH Online and EBMH.


Papers
Journal Article (DOI)
TL;DR: This publication describes how to perform a meta-analysis with the freely available statistical software environment R, using a working example taken from the field of mental health.
Abstract: Objective Meta-analysis is of fundamental importance for obtaining an unbiased assessment of the available evidence. In general, the use of meta-analysis has been increasing over the last three decades, with mental health as a major research topic. It is therefore essential to understand its methodology well and to interpret its results correctly. In this publication, we describe how to perform a meta-analysis with the freely available statistical software environment R, using a working example taken from the field of mental health. Methods The R package meta is used to conduct a standard meta-analysis. Sensitivity analyses for missing binary outcome data and potential selection bias are conducted with the R package metasens. All essential R commands are provided and clearly described to conduct and report the analyses. Results The working example considers a binary outcome: we show how to conduct fixed effect and random effects meta-analyses and subgroup analyses, produce forest and funnel plots, and test and adjust for funnel plot asymmetry. All these steps work similarly for other outcome types. Conclusions R represents a powerful and flexible tool for conducting meta-analyses. This publication gives a brief glimpse into the topic and provides directions to more advanced meta-analysis methods available in R.
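The paper itself works in R with the meta package; as an illustration of the fixed effect step it describes, the inverse-variance pooling of a binary outcome can be sketched in Python with invented trial data (the numbers below are hypothetical, not the paper's working example; the random effects version would additionally estimate a between-study variance, e.g. by the DerSimonian-Laird method):

```python
import math

# Hypothetical 2x2 data per trial: (events_treat, n_treat, events_ctrl, n_ctrl).
# These numbers are illustrative only.
trials = [
    (12, 100, 20, 100),
    (8,  80,  15, 80),
    (30, 150, 40, 150),
]

log_ors, weights = [], []
for et, nt, ec, nc in trials:
    a, b = et, nt - et          # treatment arm: events / non-events
    c, d = ec, nc - ec          # control arm:   events / non-events
    log_or = math.log((a * d) / (b * c))   # log odds ratio from the 2x2 table
    var = 1/a + 1/b + 1/c + 1/d            # its approximate variance
    log_ors.append(log_or)
    weights.append(1 / var)                # inverse-variance weight

# Fixed effect pooled estimate and 95% confidence interval
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)

print(f"pooled OR = {math.exp(pooled):.2f}, "
      f"95% CI = ({math.exp(ci[0]):.2f}, {math.exp(ci[1]):.2f})")
```

In R the same analysis is a single call to meta's binary-outcome function, which also produces the forest plot directly.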

2,021 citations

Journal Article (DOI)
TL;DR: Jacobson et al proposed a method of determining reliable and clinically significant change (RCSC) that summarises changes at the level of the individual in the context of observed changes for the whole sample.
Abstract: Where outcomes are unequivocal (life or death; being able to walk v being paralysed) clinicians, researchers, and patients find it easy to speak the same language in evaluating results. However, in much of mental health work initial states and outcomes of treatments are measured on continuous scales and the distribution of the “normal” often overlaps with the range of the “abnormal.” In this situation, clinicians and researchers often talk different languages about change data, and both are probably poor at conveying their thoughts to patients. Researchers traditionally compare means between groups. Their statistical methods, using distributions of the scores before and after treatment to suggest whether change is a sampling artefact or a chance finding, have been known for many years.1 By contrast, clinicians are more often concerned with changes in particular individuals they are treating and often dichotomise outcome as “success” or “failure.” The number needed to treat (NNT) method of presenting results has gone some way to bridge this gap but often uses arbitrary criteria on which to dichotomise change into “success” and “failure.” A typical example is the criterion of a 50% drop on the Hamilton Depression Rating Scale score. A method bridging these approaches would assist the translation of research results into clinical practice. Jacobson et al proposed a method of determining reliable and clinically significant change (RCSC) that summarises changes at the level of the individual in the context of observed changes for the whole sample.2, 3–5 Their methods are applicable, in one form or another, to the measurement of change on any continuous scale for any clinical problem, although they have been reported primarily in the psychotherapy research literature. The broad concept of reliable and clinically significant change rests on 2 questions being addressed at the level of each …

418 citations

Journal Article (DOI)
TL;DR: Current challenges surrounding user engagement with mental health smartphone apps are reviewed, and several solutions are proposed and successful examples of mental health apps with high engagement are highlighted.
Abstract: The potential of smartphone apps to improve quality and increase access to mental health care is increasingly clear. Yet even in the current global mental health crisis, real-world uptake of smartphone apps by clinics or consumers remains low. To understand this dichotomy, this paper reviews current challenges surrounding user engagement with mental health smartphone apps. While smartphone engagement metrics and reporting remains heterogeneous in the literature, focusing on themes offers a framework to identify underlying trends. These themes suggest that apps are not designed with service users in mind, do not solve problems users care most about, do not respect privacy, are not seen as trustworthy and are unhelpful in emergencies. Respecting these current issues surrounding mental health app engagement, we propose several solutions and highlight successful examples of mental health apps with high engagement. Further research is necessary to better characterise engagement with mental health apps and identify best practices for design, testing and implementation.

388 citations

Journal Article (DOI)
TL;DR: An invited editorial from Dr Michael Boyle highlights the key methodological issues involved in the critical appraisal of prevalence studies.
Abstract: As stated in the first issue of Evidence-Based Mental Health, we are planning to widen the scope of the journal to include studies answering additional types of clinical questions. One of our first priorities has been to develop criteria for studies providing information about the prevalence of psychiatric disorders, both in the population and in specific clinical settings. We invited the following editorial from Dr Michael Boyle to highlight the key methodological issues involved in the critical appraisal of prevalence studies. The next stage is to develop valid and reliable criteria for selecting prevalence studies for inclusion in the journal. We welcome our readers' contributions to this process. You are a geriatric psychiatrist providing consultation and care to elderly residents living in several nursing homes. The previous 3 patients referred to you have met criteria for depression, and you are beginning to wonder if the prevalence of this disorder is high enough to warrant screening. Alternatively, you are a child and youth worker on a clinical service for disruptive behaviour disorders. It seems that all of the children being treated by the team come from economically disadvantaged families. Rather than treating these children on a case by case basis, the team has discussed developing an experimental community initiative in a low income area of the city. You are beginning to wonder if the prevalence of disruptive behaviour disorders is high enough in poor areas to justify such a programme. Prevalence studies of psychiatric disorder take a sample of respondents to estimate the frequency and distribution of these conditions in larger groups. All of these studies involve sampling, cross sectional assessments of disorder, the collection of ancillary information, and data analysis. Interest in prevalence may extend from a particular clinical setting (a narrow focus) to an entire nation (a broad focus). In …
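The core estimation step the editorial alludes to, going from a sample to a population prevalence, amounts to a proportion with a confidence interval. A minimal sketch using the Wilson score interval, with invented screening numbers (36 of 240 residents is a hypothetical figure, not data from the editorial):

```python
import math

def prevalence_wilson(cases, n, z=1.96):
    """Point prevalence with a Wilson score interval (95% for z = 1.96)."""
    p = cases / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

# Hypothetical nursing-home screening: 36 of 240 residents meet criteria.
p, lo, hi = prevalence_wilson(36, 240)
print(f"prevalence = {p:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

The Wilson interval is preferred over the simple normal approximation because it behaves sensibly for small samples and proportions near 0 or 1, both common in prevalence work.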

286 citations

Journal Article (DOI)
TL;DR: A critical educational review of published umbrella reviews is presented, focusing on the essential practical steps required to produce robust umbrella reviews in the medical field, and 10 key points to consider for conducting robust umbrella Reviews are discussed.
Abstract: Objective Evidence syntheses such as systematic reviews and meta-analyses provide a rigorous and transparent knowledge base for translating clinical research into decisions, and thus they represent the basic unit of knowledge in medicine. Umbrella reviews are reviews of previously published systematic reviews or meta-analyses. Therefore, they represent one of the highest levels of evidence synthesis currently available, and are becoming increasingly influential in the biomedical literature. However, practical guidance on how to conduct umbrella reviews is relatively limited. Methods We present a critical educational review of published umbrella reviews, focusing on the essential practical steps required to produce robust umbrella reviews in the medical field. Results The current manuscript discusses 10 key points to consider for conducting robust umbrella reviews. The points are: ensure that the umbrella review is really needed, prespecify the protocol, clearly define the variables of interest, estimate a common effect size, report the heterogeneity and potential biases, perform a stratification of the evidence, conduct sensitivity analyses, report transparent results, use appropriate software and acknowledge the limitations. We illustrate these points through recent examples from umbrella reviews and suggest specific practical recommendations. Conclusions The current manuscript provides practical guidance for conducting umbrella reviews in medical areas. Researchers, clinicians and policy makers might use the key points illustrated here to inform the planning, conduct and reporting of umbrella reviews in medicine.
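One of the 10 points, reporting heterogeneity, has a standard quantitative form: Cochran's Q and Higgins' I² measure how much study effects disagree beyond sampling error. A minimal sketch with invented effect sizes (the log odds ratios and variances below are illustrative, not drawn from the paper):

```python
# Hypothetical study effects (log odds ratios) and their variances,
# invented to illustrate the heterogeneity step of an evidence synthesis.
effects   = [-0.90, -0.10, -0.75, -0.05]
variances = [0.04, 0.04, 0.04, 0.04]

weights = [1 / v for v in variances]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100   # Higgins' I², as a percentage

print(f"Q = {q:.2f} on {df} df, I² = {i2:.0f}%")
```

High I² (conventionally above roughly 75%) signals that a single common effect size is a questionable summary, which is exactly why the paper pairs "estimate a common effect size" with "report the heterogeneity".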

252 citations

Performance Metrics

No. of papers from the journal in previous years:

Year  Papers
2022  31
2021  39
2020  31
2019  44
2018  54
2017  62