SciSpace (formerly Typeset)
Author

Alessandro Liberati

Bio: Alessandro Liberati is an academic researcher at the University of Modena and Reggio Emilia. He has contributed to research on breast cancer and systematic reviews, has an h-index of 46, and has co-authored 144 publications receiving 167,184 citations. His previous affiliations include the Mario Negri Institute for Pharmacological Research and the Cochrane Collaboration.


Papers
Journal ArticleDOI
TL;DR: The consistency of the information obtained on selected items with published patient series suggests that this methodology merits wider testing as a simple, inexpensive tool for routinely monitoring the care of cancer patients and the impact of organizational and educational interventions on that care.

9 citations

Book ChapterDOI
16 Nov 2007

8 citations

Journal ArticleDOI
01 Dec 1991-Tumori
TL;DR: The distribution of ER and PgR profiles was similar in relation to family history of breast cancer, reproductive events and other selected epidemiologic characteristics of the patients, and ER status and concentrations were independent of menopausal status after adjustment for age.
Abstract: A total of 1095 patients with operable breast cancer and enrolled in a randomized clinical trial were analysed for estrogen (ER) and progesterone (PgR) receptor content of their primary tumor, and the relationships between steroid receptor status and several epidemiologic characteristics were studied. The proportion of ER+ and median ER levels increased with age: compared to women younger than 40, those aged 66 or more were approximately three times more likely to have an ER+ tumor (OR = 3.0, 95% C.I. = 1.6–5.7). This difference tended to be more marked after comparison between patients with ER > 100 fmol/mg protein and ER- within the same age groups: OR = 7.04, 95% C.I. = 2.89–17.12. No association emerged between age and PgR. ER status and concentrations were independent of menopausal status after adjustment for age, whereas the proportion of PgR+ and PgR levels were significantly lower in postmenopausal patients of the same age. The distribution of ER and PgR profiles was similar in relation to family history of breast cancer, reproductive events and other selected epidemiologic characteristics of the patients.
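The age-stratified odds ratio reported above follows the standard 2×2-table calculation. Here is a minimal sketch of that computation using hypothetical cell counts (the trial's actual raw counts are not given in the abstract, so these numbers are purely illustrative):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of the summed reciprocal cell counts
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: ER+ vs ER- tumors among women aged >= 66
# compared with women younger than 40 (not the trial's actual data)
or_, lower, upper = odds_ratio_ci(90, 30, 50, 50)
```

With these made-up counts the point estimate happens to come out at 3.0; the interval in the paper (1.6–5.7) of course reflects its own cell counts.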

8 citations

Journal ArticleDOI
TL;DR: The experimental treatments that gain the verdict of non inferiority in published trials do not appear to be systematically less effective than the standard treatments, and the findings are reassuring considering the criticism that has been levelled at non-inferiority trials.
Abstract: In a typical clinical trial, two treatments are compared to determine which is better, or if both are the same. The design of the classic, parallel-group randomized trial involves formulating a null hypothesis of no difference between interventions and identifying a clinically relevant difference (Δ) that researchers do not wish to overlook on the primary end-point. These trials are referred to as ‘superiority trials’ (STs), as investigators hope to reject the null hypothesis, demonstrating a difference between interventions. In an ST, a type I error is falsely finding a treatment effect when there is none, and a type II error is failing to detect a treatment effect when one truly exists. In contrast, a non-inferiority trial (NIT) seeks to determine whether a new intervention is no worse than a reference intervention within a pre-specified non-inferiority margin (from −Δ to 0)—that is, a clinically irrelevant difference—for the primary outcome. The null hypothesis under which an NIT is designed is that the experimental intervention is worse than the standard treatment, and the absence of a relevant difference is demonstrated by rejecting it. In NITs, the null and alternative hypotheses are reversed compared with STs: a type I error is the erroneous acceptance of an inferior new treatment, whereas a type II error is the erroneous rejection of a truly non-inferior treatment. It is the very nature of the NIT design that makes it susceptible to bias and misuse unless (i) the research question has a strong rationale; (ii) the effectiveness of the standard treatment is solid and (iii) the end point(s) on which the Δ has been chosen for assessing non-inferiority are appropriate.
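The decision rule the editorial describes (non-inferiority is claimed when the confidence interval for the treatment difference stays above −Δ) can be sketched numerically. This is a minimal illustration with hypothetical success proportions, sample sizes, and margin; none of the numbers come from any specific trial:

```python
import math

def non_inferiority_test(p_new, p_std, n_new, n_std, delta, z=1.96):
    """Non-inferiority decision for a difference in success proportions.
    delta > 0 is the pre-specified margin: non-inferiority is claimed
    when the lower bound of the CI for (p_new - p_std) lies above -delta."""
    diff = p_new - p_std
    # Wald standard error for a difference of two independent proportions
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    lower = diff - z * se
    return lower > -delta, lower

# Hypothetical trial: 78% vs 80% success, 500 patients per arm, margin 0.10
non_inferior, lower_bound = non_inferiority_test(0.78, 0.80, 500, 500, 0.10)
```

Note the asymmetry the editorial stresses: the new treatment here is numerically worse, yet the verdict is "non-inferior" because the lower confidence bound does not cross the margin. How defensible that verdict is depends entirely on whether a margin of 10 percentage points is clinically irrelevant.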
Recognizing that NITs may—under specified circumstances—be useful does not mean that difficulties in design, conduct, analysis and interpretation can be overlooked, especially as NITs can be (mis)used to study new marketable products with questionable, or no, innovation, producing results only to obtain regulatory authority approval. This a priori concern, together with empirical evidence of NITs’ inappropriate use, fuels the current debate between those supporting NITs and those opposing them on both pragmatic and ethical grounds. The paper by Soonowala et al. published in this issue of the Journal should be read against this background. The paper is stated to be an attempt ‘to address the concerns by performing a meta-analysis of non inferiority trials to see whether the systematic use of too large non inferiority margins or systematic bias in designs, conduct or reporting skewed the overall results’. The authors searched relevant NITs across a variety of clinical questions and pooled data to see whether the suspicion of a systematic bias could, or could not, be confirmed. They conclude that ‘the experimental treatments that gain the verdict of non inferiority in published trials do not appear to be systematically less effective than the standard treatments’, and then go a step further, stating that ‘the findings are reassuring considering the criticism that has been levelled at non-inferiority trials’. Do the data support these conclusions? Hardly so, and I will now briefly discuss why. The statistical methodologies used by the authors are appropriate and rigorous. However, the results of the paper, and even more its implications, are not easy to interpret, and readers should consider whether (i) the study addressed the crucial questions about NITs; (ii) the findings provide a better understanding of the issues and (iii) the results provide clear indications of where to go next.
I believe that Soonowala et al.’s study has, in the above respects, important limitations that the authors largely acknowledge. Among the most important are (i) the search strategy was far from comprehensive; (ii) publication bias cannot be ruled out and may have led to the failure to identify some relevant NITs and (iii) the design of the NITs, in particular the clinical rationale for the choice of the non-inferiority margin. Published by Oxford University Press on behalf of the International Epidemiological Association.

7 citations


Cited by
Journal ArticleDOI
TL;DR: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1). Box 1 Conceptual issues in the evolution from QUOROM to PRISMA

46,935 citations

Journal ArticleDOI
04 Sep 2003-BMJ
TL;DR: A new quantity, I², is developed, which the authors believe gives a better measure of the consistency between trials in a meta-analysis than the standard test of heterogeneity, which is susceptible to the number of trials included in the meta-analysis.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1,2 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5 6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
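The quantity I² introduced in this paper is derived from Cochran's Q statistic as I² = max(0, (Q − df)/Q) × 100%, where df = k − 1 for k studies. A minimal sketch of the calculation, using hypothetical study effects and variances rather than data from any actual meta-analysis:

```python
def i_squared(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic for a set of
    study effect estimates, using inverse-variance (fixed-effect) weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted sum of squared deviations from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2 is the proportion of Q in excess of its expectation, floored at 0
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log odds ratios and their variances from four trials
q, i2 = i_squared([0.1, 0.3, 0.5, 1.2], [0.04, 0.05, 0.04, 0.06])
```

Unlike the P value of the heterogeneity test, I² expresses the proportion of variation across studies due to heterogeneity rather than chance, so it does not grow automatically with the number of trials pooled.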

45,105 citations

Journal ArticleDOI
TL;DR: In this review the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, illustrating how they can be done using Stata commands.

31,656 citations

Journal ArticleDOI
TL;DR: A structured summary is provided including, as applicable, background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, conclusions and implications of key findings.

31,379 citations