Author

Alessandro Liberati

Bio: Alessandro Liberati is an academic researcher from the University of Modena and Reggio Emilia. The author has contributed to research in topics: Breast cancer & Systematic review. The author has an h-index of 46 and has co-authored 144 publications receiving 167,184 citations. Previous affiliations of Alessandro Liberati include the Mario Negri Institute for Pharmacological Research & the Cochrane Collaboration.


Papers
Journal ArticleDOI
22 May 2008-BMJ
TL;DR: Guideline panellists have differing opinions on whether resource use should influence decisions on individual patients; as medical care costs rise, such considerations become more compelling, but panellists may find dealing with them challenging.
Abstract: Guideline panellists have differing opinions on whether resource use should influence decisions on individual patients. As medical care costs rise, resource use considerations become more compelling, but panellists may find dealing with such considerations challenging.

358 citations

Journal ArticleDOI
TL;DR: A pilot test of the GRADE approach to grading evidence and recommendations found the approach to be clear, understandable and sensible; some modifications were made to the approach, and it was agreed that more information was needed in the evidence profiles.
Abstract: Background Systems that are used by different organisations to grade the quality of evidence and the strength of recommendations vary. They have different strengths and weaknesses. The GRADE Working Group has developed an approach that addresses key shortcomings in these systems. The aim of this study was to pilot test and further develop the GRADE approach to grading evidence and recommendations.

317 citations

Journal ArticleDOI
TL;DR: This updated review of RCTs conducted almost 20 years ago suggests that follow-up programs based on regular physical examinations and yearly mammography alone are as effective as more intensive approaches based on regular performance of laboratory and instrumental tests in terms of timeliness of recurrence detection, overall survival and quality of life.
Abstract: Background Follow-up examinations are commonly performed after primary treatment for women with breast cancer. They are used to detect recurrences at an early (asymptomatic) stage. This is an update of a Cochrane review first published in 2000. Objectives To assess the effectiveness of different policies of follow-up for distant metastases on mortality, morbidity and quality of life in women treated for stage I, II or III breast cancer. Search methods For this 2014 review update, we searched the Cochrane Breast Cancer Group's Specialised Register (4 July 2014), MEDLINE (4 July 2014), Embase (4 July 2014), CENTRAL (2014, Issue 3), the World Health Organization (WHO) International Clinical Trials Registry Platform (4 July 2014) and ClinicalTrials.gov (4 July 2014). References from retrieved articles were also checked. Selection criteria All randomised controlled trials (RCTs) assessing the effectiveness of different policies of follow-up after primary treatment were reviewed for inclusion. Data collection and analysis Two review authors independently assessed trials for eligibility for inclusion in the review and risk of bias. Data were pooled in an individual patient data meta-analysis for the two RCTs testing the effectiveness of different follow-up schemes. Subgroup analyses were conducted by age, tumour size and lymph node status. Main results Since 2000, one new trial has been published; the updated review now includes five RCTs involving 4023 women with breast cancer (clinical stage I, II or III). Two trials involving 2563 women compared follow-up based on clinical visits and mammography with a more intensive scheme including radiological and laboratory tests. After pooling the data, no significant differences emerged in overall survival (hazard ratio (HR) 0.98, 95% confidence interval (CI) 0.84 to 1.15, two studies, 2563 participants, high-quality evidence) or disease-free survival (HR 0.84, 95% CI 0.71 to 1.00, two studies, 2563 participants, low-quality evidence). No differences in overall survival and disease-free survival emerged in subgroup analyses according to patient age, tumour size and lymph node status before primary treatment. In 1999, 10-year follow-up data became available for one of these trials, and no significant differences in overall survival were found. No difference was noted in quality of life measures (one study, 639 participants, high-quality evidence). The newly included trial, together with a previously included trial involving 1264 women, compared follow-up performed by a hospital-based specialist versus follow-up performed by general practitioners. No significant differences were noted in overall survival (HR 1.07, 95% CI 0.64 to 1.78, one study, 968 participants, moderate-quality evidence), time to detection of recurrence (HR 1.06, 95% CI 0.76 to 1.47, two studies, 1264 participants, moderate-quality evidence), or quality of life (one study, 356 participants, high-quality evidence). Patient satisfaction was greater among patients treated by general practitioners. One RCT involving 196 women compared regularly scheduled follow-up visits versus less frequent visits restricted to the time of mammography. No significant differences emerged in interim use of the telephone or in the frequency of general practitioners' consultations.
Authors' conclusions This updated review of RCTs conducted almost 20 years ago suggests that follow-up programs based on regular physical examinations and yearly mammography alone are as effective as more intensive approaches based on regular performance of laboratory and instrumental tests in terms of timeliness of recurrence detection, overall survival and quality of life. In two RCTs, follow-up care performed by trained and untrained general practitioners working in an organised practice setting had effectiveness comparable to that delivered by hospital-based specialists in terms of overall survival, recurrence detection, and quality of life.

304 citations

Journal ArticleDOI
05 May 2005-BMJ
TL;DR: Cochrane reviews fared better than systematic reviews published in paper based journals in terms of assessment of methodological quality of primary studies, although they both largely failed to take it into account in the interpretation of results.
Abstract: Objectives To describe how the methodological quality of primary studies is assessed in systematic reviews and whether the quality assessment is taken into account in the interpretation of results. Data sources Cochrane systematic reviews and systematic reviews in paper based journals. Study selection 965 systematic reviews (809 Cochrane reviews and 156 paper based reviews) published between 1995 and 2002. Data synthesis The methodological quality of primary studies was assessed in 854 of the 965 systematic reviews (88.5%). This occurred more often in Cochrane reviews than in paper based reviews (93.9% v 60.3%, P < 0.0001). Overall, only 496 (51.4%) used the quality assessment in the analysis and interpretation of the results or in their discussion, with no significant differences between Cochrane reviews and paper based reviews (52% v 49%, P = 0.58). The tools and methods used for quality assessment varied widely. Conclusions Cochrane reviews fared better than systematic reviews published in paper based journals in terms of assessment of methodological quality of primary studies, although they both largely failed to take it into account in the interpretation of results. Methods for assessment of methodological quality by systematic reviews are still in their infancy and there is substantial room for improvement.

228 citations

Journal ArticleDOI
TL;DR: There was evidence that quality has improved over time and that the increasing tendency to involve a biostatistician in the research team was positively associated with improvement in internal validity but not in external validity.
Abstract: The methodology of randomized control trials (RCTs) of the primary treatment of early breast cancer has been reviewed using a quantitative method. Sixty-three RCTs comparing various treatment modalities tested on over 34,000 patients and reported in 119 papers were evaluated according to a standardized scoring system. A percentage score was developed to assess the internal validity of a study (referring to the quality of its design and execution) and its external validity (referring to presentation of information required to determine its generalizability). An overall score was also calculated as the combination of the two. The mean overall score for the 63 RCTs was 50% (95% confidence interval [CI] = 46% to 54%) with small and nonstatistically significant differences between types of trial. The most common methodologic deficiencies encountered in these studies were related to the randomization process (only 27 of the 63 RCTs adopted a truly blinded procedure), the handling of withdrawals (only 26 RCTs in...

194 citations


Cited by
Journal ArticleDOI
TL;DR: Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1). Box 1 Conceptual issues in the evolution from QUOROM to PRISMA

46,935 citations

Journal ArticleDOI
04 Sep 2003-BMJ
TL;DR: The authors develop a new quantity, I², which they believe gives a better measure of the consistency between trials in a meta-analysis than the standard test of heterogeneity, which is susceptible to the number of trials included in the meta-analysis.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5 6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
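As a rough illustration of the quantity described above, the following sketch computes Cochran's Q and I² = max(0, (Q − df)/Q) × 100% from study-level effect estimates and their standard errors. It is written in Python purely for illustration; the function name and the five study values are hypothetical and are not taken from the paper.

# Minimal sketch of the I² heterogeneity statistic discussed above.
# The effect estimates (e.g. log hazard ratios) and standard errors
# below are invented for illustration.

def i_squared(effects, std_errors):
    """Return Cochran's Q and the I² statistic (as a percentage)."""
    weights = [1.0 / se ** 2 for se in std_errors]  # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

effects = [0.10, 0.35, -0.05, 0.42, 0.20]       # hypothetical study effects
std_errors = [0.15, 0.20, 0.18, 0.25, 0.12]     # hypothetical standard errors
q, i2 = i_squared(effects, std_errors)
print(f"Q = {q:.2f}, I² = {i2:.1f}%")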

45,105 citations

Journal ArticleDOI
TL;DR: In this review the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, illustrating how they can be done using Stata commands.
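As a hedged illustration of one common procedure of this kind, the Python sketch below pools log odds ratios from 2×2 tables using fixed-effect, inverse-variance weighting. It is not the Stata code the review illustrates, and the trial counts are invented for the example.

# Minimal sketch: fixed-effect, inverse-variance pooling of odds ratios.
# Each table is (events_treated, n_treated, events_control, n_control);
# all counts below are hypothetical.
import math

def pool_odds_ratios(tables):
    log_ors, weights = [], []
    for a, n1, c, n2 in tables:
        b, d = n1 - a, n2 - c                       # non-events in each arm
        log_or = math.log((a * d) / (b * c))        # log odds ratio
        var = 1 / a + 1 / b + 1 / c + 1 / d         # Woolf variance of the log OR
        log_ors.append(log_or)
        weights.append(1 / var)
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci

trials = [(12, 100, 20, 100), (8, 80, 15, 85), (30, 200, 45, 210)]
pooled_or, (low, high) = pool_odds_ratios(trials)
print(f"Pooled OR = {pooled_or:.2f} (95% CI {low:.2f} to {high:.2f})")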

31,656 citations

Journal ArticleDOI
TL;DR: A structured summary is provided, including, as applicable: background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.

31,379 citations