scispace - formally typeset
Author

Helen Roberts

Bio: Helen Roberts is an academic researcher from the Institute of Education. The author has contributed to research on topics including systematic reviews and psychological interventions. The author has an h-index of 10 and has co-authored 24 publications receiving 2,054 citations.

Papers
DOI
01 Jan 2006
Abstract: [Screening flow-diagram counts — Titles & abstracts: Include N = 54, N = 121, N = 89; Exclude N = 0, N = 1,024, N = 21.]

2,189 citations

Journal ArticleDOI
TL;DR: Qualitative research is as likely to remain unpublished as quantitative research, and the quality of reporting of study methods and findings in the abstract was positively related to the likelihood of publication.
Abstract: Background: Less than half of studies presented at conferences remain unpublished two years later, and these studies differ systematically from those that are published. In particular, the unpublished studies are less likely to report statistically significant findings, and this introduces publication bias. This has been well documented for quantitative studies, but has never been explored in relation to qualitative research. Methods: We reviewed the abstracts of qualitative research presented at the 1998 (n = 110) and 1999 (n = 114) British Sociological Association (BSA) Medical Sociology meetings, and attempted to locate those studies in databases or by contacting authors. We also appraised the quality of reporting in each abstract. Results: We found an overall publication rate for these qualitative studies of 44.2%. This is nearly identical to the publication rate for quantitative research. The quality of reporting of study methods and findings in the abstract was positively related to the likelihood of publication. Conclusion: Qualitative research is as likely to remain unpublished as quantitative research. Moreover, non-publication appears to be related to the quality of reporting of methodological information in the original abstract, perhaps because this is a proxy for a study with clear objectives and clear findings. This suggests a mechanism by which “qualitative publication bias” might work: qualitative studies that do not show clear, or striking, or easily described findings may simply disappear from view. One implication of this is that, as with quantitative research, systematic reviews of qualitative studies may be biased if they rely only on published papers.

62 citations

Journal ArticleDOI
TL;DR: The question of ‘what works’ is fundamental not only for politicians and policy makers who need to devise or implement policies on everything from reducing juvenile crime to increasing the national wealth, but also for citizens on the receiving end of interventions.
Abstract: The question of ‘what works’ is a fundamental one not only for politicians and policy makers who need to devise or implement policies on everything from reducing juvenile crime to increasing the national wealth, but it is also fundamental for citizens on the receiving end of interventions. The observation that some things work better than others (and other things work not at all) is commonplace. So is scepticism among the public and professionals about grand claims for the effectiveness of policies, particularly given our understanding that modest interventions normally have modest effects. Whilst research can help in informing decisions about what works, conflicting research findings, and simple information overload often simply cloud the issue. Literature reviews may be designed to solve (or at least address) the problems of information management, but these reviews may themselves conflict. Take, for example, literature reviews of the effectiveness of mentoring in young people to reduce anti-social behaviour. The findings of reviews may conflict not just because of differences in inclusion criteria but because authors appraise and synthesize information on the outcomes differently (for example, not differentiating between more and less objective sources of outcome data, which vary in the extent to which they are prone to bias). Moreover, the outcomes themselves – stated satisfaction with the service, higher self-esteem, or a reduc...

38 citations

Journal ArticleDOI
TL;DR: The paper suggests that culture, gender, religion and youth influence aspects of sexual relationships among BME teenagers, and that these social markers may have different contextual meanings for individuals.
Abstract: Objectives . (1) To explore sexual behaviour and relationships amongst Black and minority ethnic (BME) teenagers in East London. (2) To examine how these relationships are shaped by culture, gender, peer norms and religion. (3) To describe the implications for sexual health policy and practice in urban, multicultural areas. Design . This report draws primarily on the qualitative arm of a mixed methods study which collected data from 126 young people, aged 15–18, largely through focus groups in the London boroughs of Hackney, Newham and Tower Hamlets. Results . Previous research has reported culture influencing the patterning of risk/protection amongst BME groups. Our data suggest that this is mediated by gender, religion and youth. Religion reportedly influenced young women's sexual behaviour in multiple ways. Young people described gendered norms in meeting and flirting with partners, and the role of mobile phones and peer pressure. Conclusion . Our paper suggests culture, gender, religion and youth infl...

32 citations


Cited by
Journal ArticleDOI
02 Jan 2015-BMJ
TL;DR: The PRISMA-P 2015 checklist provides 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol, along with a model example from an existing published protocol.
Abstract: Protocols of systematic reviews and meta-analyses allow for planning and documentation of review methods, act as a guard against arbitrary decision making during review conduct, enable readers to assess for the presence of selective reporting against completed reviews, and, when made publicly available, reduce duplication of efforts and potentially prompt collaboration. Evidence documenting the existence of selective reporting and excessive duplication of reviews on the same or similar topics is accumulating, and many calls have been made in support of the documentation and public availability of review protocols. Several efforts have emerged in recent years to rectify these problems, including development of an international register for prospective reviews (PROSPERO) and launch of the first open access journal dedicated to the exclusive publication of systematic review products, including protocols (BioMed Central's Systematic Reviews). Furthering these efforts and building on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines, an international group of experts has created a guideline to improve the transparency, accuracy, completeness, and frequency of documented systematic review and meta-analysis protocols: PRISMA-P (for protocols) 2015. The PRISMA-P checklist contains 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol. This PRISMA-P 2015 Explanation and Elaboration paper provides readers with a full understanding of and evidence about the necessity of each item as well as a model example from an existing published protocol. This paper should be read together with the PRISMA-P 2015 statement. Systematic review authors and assessors are strongly encouraged to make use of PRISMA-P when drafting and appraising review protocols.

9,361 citations

Journal ArticleDOI
TL;DR: Stigma has a small- to moderate-sized negative effect on help-seeking; ethnic minorities, youth, men, and those in military and health professions were disproportionately deterred by stigma.
Abstract: BACKGROUND: Individuals often avoid or delay seeking professional help for mental health problems. Stigma may be a key deterrent to help-seeking but this has not been reviewed systematically. Our systematic review addressed the overarching question: What is the impact of mental health-related stigma on help-seeking for mental health problems? Subquestions were: (a) What is the size and direction of any association between stigma and help-seeking? (b) To what extent is stigma identified as a barrier to help-seeking? (c) What processes underlie the relationship between stigma and help-seeking? (d) Are there population groups for which stigma disproportionately deters help-seeking? METHOD: Five electronic databases were searched from 1980 to 2011 and references of reviews checked. A meta-synthesis of quantitative and qualitative studies, comprising three parallel narrative syntheses and subgroup analyses, was conducted. RESULTS: The review identified 144 studies with 90,189 participants meeting inclusion criteria. The median association between stigma and help-seeking was d = -0.27, with internalized and treatment stigma being most often associated with reduced help-seeking. Stigma was the fourth highest ranked barrier to help-seeking, with disclosure concerns the most commonly reported stigma barrier. A detailed conceptual model was derived that describes the processes contributing to, and counteracting, the deterrent effect of stigma on help-seeking. Ethnic minorities, youth, men and those in military and health professions were disproportionately deterred by stigma. CONCLUSIONS: Stigma has a small- to moderate-sized negative effect on help-seeking. Review findings can be used to help inform the design of interventions to increase help-seeking.

1,938 citations

Journal ArticleDOI
TL;DR: Improving adolescent health worldwide requires improving young people's daily life with families and peers and in schools, addressing risk and protective factors in the social environment at a population level, and focusing on factors that are protective across various health outcomes.

1,648 citations

Journal ArticleDOI
TL;DR: This paper reports the findings of a systematic overview that assessed the impact of eHealth solutions on the quality and safety of health care.
Abstract: Background There is considerable international interest in exploiting the potential of digital solutions to enhance the quality and safety of health care. Implementations of transformative eHealth technologies are underway globally, often at very considerable cost. In order to assess the impact of eHealth solutions on the quality and safety of health care, and to inform policy decisions on eHealth deployments, we undertook a systematic review of systematic reviews assessing the effectiveness and consequences of various eHealth technologies on the quality and safety of care. Methods and Findings We developed novel search strategies, conceptual maps of health care quality, safety, and eHealth interventions, and then systematically identified, scrutinised, and synthesised the systematic review literature. Major biomedical databases were searched to identify systematic reviews published between 1997 and 2010. Related theoretical, methodological, and technical material was also reviewed. We identified 53 systematic reviews that focused on assessing the impact of eHealth interventions on the quality and/or safety of health care and 55 supplementary systematic reviews providing relevant supportive information. This systematic review literature was found to be generally of substandard quality with regards to methodology, reporting, and utility. We thematically categorised eHealth technologies into three main areas: (1) storing, managing, and transmission of data; (2) clinical decision support; and (3) facilitating care from a distance. We found that despite support from policymakers, there was relatively little empirical evidence to substantiate many of the claims made in relation to these technologies. Whether the success of those relatively few solutions identified to improve quality and safety would continue if these were deployed beyond the contexts in which they were originally developed has yet to be established. Importantly, best practice guidelines in effective development and deployment strategies are lacking. Conclusions There is a large gap between the postulated and empirically demonstrated benefits of eHealth technologies. In addition, there is a lack of robust research on the risks of implementing these technologies, and their cost-effectiveness has yet to be demonstrated, despite being frequently promoted by policymakers and “techno-enthusiasts” as if this was a given. In the light of the paucity of evidence in relation to improvements in patient outcomes, as well as the lack of evidence on their cost-effectiveness, it is vital that future eHealth technologies are evaluated against a comprehensive set of measures, ideally throughout all stages of the technology's life cycle. Such evaluation should be characterised by careful attention to socio-technical factors to maximise the likelihood of successful implementation and adoption.

1,309 citations

Journal ArticleDOI
16 Jan 2020-BMJ
TL;DR: The development of the SWiM guideline for the synthesis of quantitative data on intervention effects is described, and the nine SWiM reporting items are presented with accompanying explanations and examples.
Abstract: In systematic reviews that lack data amenable to meta-analysis, alternative synthesis methods are commonly used, but these methods are rarely reported. This lack of transparency in the methods can cast doubt on the validity of the review findings. The Synthesis Without Meta-analysis (SWiM) guideline has been developed to guide clear reporting in reviews of interventions in which alternative synthesis methods to meta-analysis of effect estimates are used. This article describes the development of the SWiM guideline for the synthesis of quantitative data of intervention effects and presents the nine SWiM reporting items with accompanying explanations and examples.

1,275 citations