Author

Alessandro Liberati

Bio: Alessandro Liberati is an academic researcher from the University of Modena and Reggio Emilia. The author has contributed to research in topics including breast cancer and systematic reviews. The author has an h-index of 46 and has co-authored 144 publications receiving 167,184 citations. Previous affiliations of Alessandro Liberati include the Mario Negri Institute for Pharmacological Research and the Cochrane Collaboration.


Papers
Journal ArticleDOI
23 Sep 2011-BMJ
TL;DR: The updating speed of Dynamed clearly led the others, and a qualitative analysis of updating mechanisms is needed to determine whether greater speed corresponds to more appropriate incorporation of new information.
Abstract: Objective To evaluate the ability of international point of care information summaries to update evidence relevant to medical practice. Design Prospective cohort bibliometric analysis. Setting Top five point of care information summaries (Clinical Evidence, EBM Guidelines, eMedicine, Dynamed, UpToDate) ranked for coverage of medical conditions, editorial quality, and evidence based methodology. Main outcome measures From June 2009 to May 2010 we measured the incidence of research findings relating to potentially eligible newsworthy evidence. As samples, we chose systematic reviews rated as relevant by international research networks (such as Evidence-Based Medicine, ACP Journal Club, and the Cochrane Collaboration). Every month we assessed whether each sampled review was cited in at least one chapter of the five summaries. The cumulative updating rate was analysed with Kaplan-Meier curves. Results From April to December 2009, 128 reviews were retrieved; 53% (68) from the literature surveillance journals and 47% (60) from the Cochrane Library. At nine months, Dynamed had cited 87% of the sampled reviews, while the other summaries had cited less than 50%. The updating speed of Dynamed clearly led the others. For instance, the hazard ratios for citations in EBM Guidelines and Clinical Evidence versus the top performer were 0.22 (95% confidence interval 0.17 to 0.29) and 0.03 (0.01 to 0.05). Conclusions Point of care information summaries include evidence relevant to practice at different speeds. A qualitative analysis of updating mechanisms is needed to determine whether greater speed corresponds to more appropriate incorporation of new information.

63 citations
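The cumulative updating rate in this study was analysed with Kaplan-Meier curves. As a rough, self-contained sketch of that idea (not the authors' analysis code; the months-to-citation and censoring flags below are invented), a plain Kaplan-Meier estimator in Python might look like this:

```python
# Minimal Kaplan-Meier sketch for a "cumulative updating" curve.
# Hypothetical data: months until a sampled review is first cited by a
# point-of-care summary; event = 0 means still uncited at last follow-up (censored).
from collections import Counter

def kaplan_meier(durations, events):
    """Return (time, survival) pairs for the Kaplan-Meier estimator."""
    at_risk = len(durations)
    survival = 1.0
    curve = [(0, 1.0)]
    # Count events and censorings at each distinct time, processed in time order.
    event_counts = Counter(t for t, e in zip(durations, events) if e)
    censor_counts = Counter(t for t, e in zip(durations, events) if not e)
    for t in sorted(set(durations)):
        d = event_counts.get(t, 0)
        if d and at_risk:
            survival *= (1 - d / at_risk)
            curve.append((t, survival))
        at_risk -= d + censor_counts.get(t, 0)
    return curve

# Invented example: months to first citation for 8 hypothetical reviews.
months = [1, 2, 2, 4, 6, 9, 9, 9]
cited  = [1, 1, 0, 1, 1, 1, 0, 0]   # 0 = not yet cited (censored at 9 months)

for t, s in kaplan_meier(months, cited):
    # 1 - s is the cumulative proportion of reviews already cited by month t.
    print(f"month {t}: cumulative updating rate = {1 - s:.2f}")
```

In this framing, each summary's updating speed is simply how fast its cumulative curve rises; hazard ratios between summaries would then come from a separate regression model, not shown here.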

Journal ArticleDOI
22 Jul 2009-Trials
TL;DR: While the launch of the WHO minimum data set seemed to positively influence registries with better standardisation of approaches, individual registry entries are largely incomplete.
Abstract: Background Since September 2005 the International Committee of Medical Journal Editors has required that trials be registered in accordance with the World Health Organization (WHO) minimum data set in order to be considered for publication. The objective is to evaluate registries' and individual trial records' compliance with the 2006 version of the WHO minimum data set.

62 citations

Journal ArticleDOI
30 Aug 1997-BMJ
TL;DR: The quality and relevance of much clinical research fall short of patients' needs because clinical research has been delegated largely to the pharmaceutical industry, whose main motivation is its own economic welfare.
Abstract: The quality and relevance of much clinical research fall short of patients' needs.1 Although there are many reasons for this, one is that clinical research has been delegated largely to the pharmaceutical industry, whose main motivation is its own economic welfare.2 Another reason is that research priorities do not flow from a transparent process where the views of all the relevant stakeholders are equally considered. With very rare exceptions—the case of the AIDS advocacy movement is an exemplar—patients and consumers have no voice in how research is prioritised, funded, and monitored. Indeed, the presence of lay people on research ethics committees is common, but there is a widespread belief that they are rarely influential.3 Even among progressive scientists and health professionals, a paternalistic attitude still prevails. They do not believe that patients and consumers can improve the decision-making process as, they say, consumers lack the necessary knowledge and skills. But successful efforts to shift …

62 citations

Journal ArticleDOI
TL;DR: Despite the differences in their objectives, design and methods of sampling, these studies indicate that an explicit, diagnosis-independent and standardized instrument such as the AEP can help to uncover a substantial amount of the potentially avoidable use of hospital resources in the Italian context.
Abstract: This paper reports on the general features and findings of 11 studies conducted in Italy on appropriateness of hospital admission and days of stay using the Appropriateness Evaluation Protocol (AEP). Studies have been grouped for presentation in two categories. The first comprises six heterogeneous studies illustrating different ways of targeting the use of the AEP: two used it to assess appropriateness of admission in an emergency room setting, two measured appropriateness of days of stay in patients with AIDS and nosocomial infections, and two others evaluated hospital days in a group of elderly patients and “before and after” the institution of a domiciliary nursing service, respectively. The second group comprises five more homogeneous utilization review studies aimed at assessing inappropriateness of admissions and days of stay in medical/surgical departments of large hospitals in northern Italy. Besides detecting a substantial amount of inappropriateness in admissions (range = 25–38%) and days of stay (range = 28–49%), this latter group of studies suggests that delays in execution and reporting of laboratory investigations, unavailability of operating rooms, and delays due to difficulties in transferring patients to long-term care facilities are the most common causes of inappropriate days of stay. Despite the differences in their objectives, design and methods of sampling, these studies indicate that an explicit, diagnosis-independent and standardized instrument such as the AEP can help to uncover a substantial amount of the potentially avoidable use of hospital resources in the Italian context.

60 citations

Journal ArticleDOI
TL;DR: The model of carcinogenesis for epithelial ovarian cancer appears, therefore, to be more complex than is indicated simply by the total duration of 'ovarian activity'.
Abstract: The total duration of 'ovulatory activity' or 'ovulatory age' has been reported to be the strongest indicator of the risk of ovarian cancer. In the case-control study examined in this paper this variable was found to be a strong correlate of the risk of ovarian cancer. However, the finding that in older women the major determinant of 'ovulatory age' was age at menopause (which is a very unreliable indicator of 'ovarian activity'), and that age at menopause by itself was related to the risk of ovarian cancer as strongly as the total duration of ovulatory age, threw doubt on the biological consistency of that model. Furthermore, the protection conferred by pregnancies was different at different ages, and age at first pregnancy was more strongly associated with the risk of ovarian cancer than the actual number of pregnancies. The model of carcinogenesis for epithelial ovarian cancer appears, therefore, to be more complex than is indicated simply by the total duration of 'ovarian activity.'

54 citations


Cited by
Journal ArticleDOI
TL;DR: Moher et al. introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses.
Abstract: David Moher and colleagues introduce PRISMA, an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses

62,157 citations

Journal Article
TL;DR: The QUOROM Statement (QUality Of Reporting Of Meta-analyses) was developed to address the suboptimal reporting of systematic reviews and meta-analyses of randomized controlled trials.
Abstract: Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1: Conceptual issues in the evolution from QUOROM to PRISMA).

46,935 citations

Journal ArticleDOI
04 Sep 2003-BMJ
TL;DR: The authors develop a new quantity, I², which they believe gives a better measure of the consistency between trials in a meta-analysis than the standard test of heterogeneity, which is susceptible to the number of trials included in the meta-analysis.
Abstract: Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care.1 Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading.2–4 Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings.5 6 But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …

45,105 citations
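As a rough worked illustration of the quantity described above (not code from the paper; the study effects and variances below are invented), the snippet computes Cochran's Q from inverse-variance-weighted study effects and derives I² = max(0, (Q − df)/Q), the proportion of total variation across studies attributable to heterogeneity rather than chance:

```python
# Hypothetical illustration: Cochran's Q and I² for a fixed-effect meta-analysis.
# Inputs are per-study effect estimates (e.g. log odds ratios) and their variances.

def i_squared(effects, variances):
    """Return (Q, I² in percent) for the given study-level effects and variances."""
    weights = [1.0 / v for v in variances]                    # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # floored at 0%
    return q, i2

# Invented example: five studies' log odds ratios and variances.
effects   = [-0.60, -0.20, -0.45, 0.10, -0.35]
variances = [0.04, 0.09, 0.06, 0.12, 0.05]

q, i2 = i_squared(effects, variances)
print(f"Q = {q:.2f} on {len(effects) - 1} df, I² = {i2:.0f}%")
```

Unlike the P value of the heterogeneity test, I² does not grow mechanically with the number of included trials, which is the point the abstract makes about consistency.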

Journal ArticleDOI
TL;DR: In this review, the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, with illustrations of how they can be carried out using Stata commands.

31,656 citations
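The review summarised above covers procedures for combining studies with binary outcomes using Stata commands. As a hedged sketch of one common such procedure, written here in Python rather than Stata and using invented 2×2 counts, fixed-effect inverse-variance pooling of log odds ratios can be done like this:

```python
# Hypothetical sketch: fixed-effect (inverse-variance) pooling of odds ratios
# from 2x2 tables (events and totals in treatment and control arms). Invented data.
import math

def pooled_odds_ratio(tables):
    """tables: list of (events_t, n_t, events_c, n_c). Returns (OR, 95% CI)."""
    num = den = 0.0
    for a, n1, c, n2 in tables:
        b, d = n1 - a, n2 - c                      # non-events in each arm
        log_or = math.log((a * d) / (b * c))       # per-study log odds ratio
        var = 1/a + 1/b + 1/c + 1/d                # its approximate variance
        w = 1 / var                                # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci

# Invented example: three small trials (events_t, n_t, events_c, n_c).
trials = [(12, 100, 20, 100), (8, 80, 15, 85), (30, 200, 45, 210)]
or_, (lo, hi) = pooled_odds_ratio(trials)
print(f"pooled OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Random-effects pooling and Mantel-Haenszel weighting, which the review also discusses, follow the same pattern with different weights.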

Journal ArticleDOI
TL;DR: A structured summary is provided, including, as applicable: background, objectives, data sources, study eligibility criteria, participants, interventions, study appraisal and synthesis methods, results, limitations, and conclusions and implications of key findings.

31,379 citations