Journal ArticleDOI

Good practices for real-world data studies of treatment and/or comparative effectiveness: Recommendations from the joint ISPOR-ISPE Special Task Force on real-world evidence in health care decision making

TL;DR: Real‐world evidence (RWE) includes data from retrospective or prospective observational studies and observational registries and provides insights beyond those addressed by randomized controlled trials.
About: This article is published in Value in Health. The article was published on 2017-09-01 and is currently open access. It has received 378 citations to date. The article focuses on the topics: Evidence-based medicine & Health care.
Citations
Journal ArticleDOI
TL;DR: Interim findings indicate that selection of active comparator therapies with similar indications and use patterns enhances the validity of RWE, and more trial emulations are needed to understand how often and in what contexts RWE findings match RCTs.
Abstract: Background: Regulators are evaluating the use of noninterventional real-world evidence (RWE) studies to assess the effectiveness of medical products. The RCT DUPLICATE initiative (Randomized, Contr...

147 citations

Journal ArticleDOI
TL;DR: A simple framework of graphical representations is proposed that will clarify critical design choices in database analyses of the effectiveness and safety of medical products and uses standardized structure and terminology to simplify review and communication to a broad audience of decision makers.
Abstract: Pharmacoepidemiologic and pharmacoeconomic analysis of health care databases has become a vital source of evidence to support health care decision making and efficient management of health care organizations. However, decision makers often consider studies done in nonrandomized health care databases more difficult to review than randomized trials because many design choices need to be considered. This is perceived as an important barrier to decision making about the effectiveness and safety of medical products. Design flaws in longitudinal database studies are avoidable but can be unintentionally obscured in the convoluted prose of methods sections, which often lack specificity. We propose a simple framework of graphical representation that visualizes study design implementations in a comprehensive, unambiguous, and intuitive way; contains a level of detail that enables reproduction of key study design variables; and uses standardized structure and terminology to simplify review and communication to a broad audience of decision makers. Visualization of design details will make database studies more reproducible, quicker to review, and easier to communicate to a broad audience of decision makers.

120 citations

Journal ArticleDOI
TL;DR: The objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity of studies conducted in large healthcare databases.
Abstract: Purpose: Defining a study population and creating an analytic dataset from longitudinal healthcare databases involves many decisions. Our objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity of studies conducted in large healthcare databases. Methods: We reviewed key investigator decisions required to operate a sample of macros and software tools designed to create and analyze analytic cohorts from longitudinal streams of healthcare data. A panel of academic, regulatory, and industry experts in healthcare database analytics discussed and added to this list. Conclusion: Evidence generated from large healthcare encounter and reimbursement databases is increasingly being sought by decision-makers. Varied terminology is used around the world for the same concepts. Agreeing on terminology and which parameters from a large catalogue are the most essential to report for replicable research would improve transparency and facilitate assessment of validity. At a minimum, reporting for a database study should provide clarity regarding operational definitions for key temporal anchors and their relation to each other when creating the analytic dataset, accompanied by an attrition table and a design diagram. A substantial improvement in reproducibility, rigor and confidence in real world evidence generated from healthcare databases could be achieved with greater transparency about operational study parameters used to create analytic datasets from longitudinal healthcare databases.
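To make the idea of explicit temporal anchors concrete, here is a minimal sketch of how an index date, washout requirement, and follow-up window might be pinned down in code for a new-user cohort. The table and column names (claims, enrollment, patient_id, dispense_date, drug, enrollment_start) and the parameter values are hypothetical illustrations, not drawn from any study discussed here.

```python
import pandas as pd

# Hypothetical inputs:
#   claims:     patient_id, dispense_date (datetime), drug
#   enrollment: patient_id, enrollment_start (datetime)
WASHOUT_DAYS = 365      # observation required before the index date (new-user design)
FOLLOW_UP_DAYS = 180    # fixed follow-up window after the index date

def build_cohort(claims: pd.DataFrame, enrollment: pd.DataFrame, study_drug: str) -> pd.DataFrame:
    """One row per new user, anchored on the first observed dispensing of study_drug."""
    drug_claims = claims[claims["drug"] == study_drug]
    # Temporal anchor 1: index date = first observed dispensing per patient.
    index = (drug_claims.groupby("patient_id", as_index=False)["dispense_date"]
             .min().rename(columns={"dispense_date": "index_date"}))
    cohort = index.merge(enrollment, on="patient_id")
    # Washout: require continuous observation for WASHOUT_DAYS before the index date.
    observed_days = (cohort["index_date"] - cohort["enrollment_start"]).dt.days
    cohort = cohort[observed_days >= WASHOUT_DAYS].copy()
    # Temporal anchor 2: end of follow-up defined relative to the index date.
    cohort["follow_up_end"] = cohort["index_date"] + pd.Timedelta(days=FOLLOW_UP_DAYS)
    return cohort[["patient_id", "index_date", "follow_up_end"]]
```

Reporting the equivalents of these constants and anchor definitions, together with an attrition table, is the kind of operational transparency the abstract calls for.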

118 citations

Journal ArticleDOI
TL;DR: This systematic review and meta-analysis of observational studies found that among patients undergoing radical hysterectomy for early-stage cervical cancer, minimally invasive radical hysterectomy was associated with an elevated risk of recurrence and death compared with open surgery.
Abstract: Importance: Minimally invasive techniques are increasingly common in cancer surgery. A recent randomized clinical trial has brought into question the safety of minimally invasive radical hysterectomy for cervical cancer. Objective: To quantify the risk of recurrence and death associated with minimally invasive vs open radical hysterectomy for early-stage cervical cancer reported in observational studies optimized to control for confounding. Data Sources: Ovid MEDLINE, Ovid Embase, PubMed, Scopus, and Web of Science (from inception to March 26, 2020), searched in an academic medical setting. Study Selection: In this systematic review and meta-analysis, observational studies were abstracted that used survival analyses to compare outcomes after minimally invasive (laparoscopic or robot-assisted) and open radical hysterectomy in patients with early-stage (International Federation of Gynecology and Obstetrics 2009 stage IA1-IIA) cervical cancer. Study quality was assessed with the Newcastle-Ottawa Scale, and only studies with scores of at least 7 points that controlled for confounding by tumor size or stage were included. Data Extraction and Synthesis: The Meta-analysis of Observational Studies in Epidemiology (MOOSE) checklist was used to abstract data independently by multiple observers. Random-effects models were used to pool associations and to analyze the association between surgical approach and oncologic outcomes. Main Outcomes and Measures: Risk of recurrence or death and risk of all-cause mortality. Results: Forty-nine studies were identified, of which 15 were included in the meta-analysis. Of 9499 patients who underwent radical hysterectomy, 49% (n = 4684) received minimally invasive surgery; of these, 57% (n = 2675) received robot-assisted laparoscopy. There were 530 recurrences and 451 deaths reported. The pooled hazard of recurrence or death was 71% higher among patients who underwent minimally invasive radical hysterectomy compared with those who underwent open surgery (hazard ratio [HR], 1.71; 95% CI, 1.36-2.15). Conclusions and Relevance: This systematic review and meta-analysis of observational studies found that among patients undergoing radical hysterectomy for early-stage cervical cancer, minimally invasive radical hysterectomy was associated with an elevated risk of recurrence and death compared with open surgery.
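For readers unfamiliar with how study-level hazard ratios are combined, the sketch below shows DerSimonian-Laird random-effects pooling on the log scale, the general approach named in the abstract. The input estimates in the example are made up for illustration and are not the studies or values from the meta-analysis above.

```python
import numpy as np

def pool_random_effects(log_hr: np.ndarray, se: np.ndarray) -> dict:
    """DerSimonian-Laird random-effects pooling of study-level log hazard ratios."""
    v = se ** 2
    w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * log_hr) / np.sum(w)
    q = np.sum(w * (log_hr - y_fixed) ** 2)       # Cochran's Q heterogeneity statistic
    k = len(log_hr)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * log_hr) / np.sum(w_star)
    pooled_se = np.sqrt(1.0 / np.sum(w_star))
    return {
        "hr": np.exp(pooled),
        "ci_low": np.exp(pooled - 1.96 * pooled_se),
        "ci_high": np.exp(pooled + 1.96 * pooled_se),
        "tau2": tau2,
    }

# Illustrative (made-up) study estimates, not the values from the meta-analysis above.
print(pool_random_effects(np.log([1.6, 1.9, 1.4]), np.array([0.20, 0.25, 0.30])))
```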

113 citations

Journal ArticleDOI
TL;DR: There is a need to develop hybrid trial methodology combining the best parts of traditional randomized controlled trials (RCTs) and observational study designs to produce real‐world evidence (RWE) that provides adequate scientific evidence for regulatory decision‐making.
Abstract: Purpose There is a need to develop hybrid trial methodology combining the best parts of traditional randomized controlled trials (RCTs) and observational study designs to produce real-world evidence (RWE) that provides adequate scientific evidence for regulatory decision-making. Methods This review explores how hybrid study designs that include features of RCTs and studies with real-world data (RWD) can combine the advantages of both to generate RWE that is fit for regulatory purposes. Results Some hybrid designs include randomization and use pragmatic outcomes; other designs use single-arm trial data supplemented with external comparators derived from RWD or leverage novel data collection approaches to capture long-term outcomes in a real-world setting. Some of these approaches have already been successfully used in regulatory decisions, raising the possibility that studies using RWD could increasingly be used to augment or replace traditional RCTs for the demonstration of drug effectiveness in certain contexts. These changes come against a background of long reliance on RCTs for regulatory decision-making, which are labor-intensive, costly, and produce data that can have limited applicability in real-world clinical practice. Conclusions While RWE from observational studies is well accepted for satisfying postapproval safety monitoring requirements, it has not commonly been used to demonstrate drug effectiveness for regulatory purposes. However, this position is changing as regulatory opinions, guidance frameworks, and RWD methodologies are evolving, with growing recognition of the value of using RWE that is acceptable for regulatory decision-making.

107 citations

References
Journal ArticleDOI
TL;DR: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) initiative developed recommendations on what should be included in an accurate and complete report of an observational study, resulting in a checklist of 22 items (the STROBE statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles.
Abstract: Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalisability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September 2004 with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. Eighteen items are common to all three study designs and four are specific to cohort, case-control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available on the Web sites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.

15,454 citations

Journal ArticleDOI
TL;DR: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study, resulting in a checklist of 22 items that relate to the title, abstract, introduction, methods, results, and discussion sections of articles.

9,603 citations

15 Aug 2006
TL;DR: In this paper, the authors discuss the implications of these problems for the conduct and interpretation of research and suggest that claimed research findings may often be simply accurate measures of the prevailing bias.
Abstract: There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser pre-selection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
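The framework this essay describes can be illustrated numerically. The sketch below computes the post-study probability that a claimed finding is true from the pre-study odds R of a true relationship, study power, the significance level, and a bias term, following the form in which this relationship is commonly presented; the input values are illustrative assumptions, not figures taken from the essay.

```python
def positive_predictive_value(r: float, power: float, alpha: float = 0.05, bias: float = 0.0) -> float:
    """Post-study probability that a claimed finding is true, given pre-study odds r
    of a true relationship, study power, significance level alpha, and bias
    (the share of would-be non-findings that end up reported as findings)."""
    beta = 1.0 - power
    true_positives = r * (power + bias * beta)          # true relationships claimed as findings
    false_positives = alpha + bias * (1.0 - alpha)      # null relationships claimed as findings
    return true_positives / (true_positives + false_positives)

# Illustrative scenario: exploratory field where 1 in 100 probed relationships is
# true, modest power, conventional alpha, and modest bias.
print(round(positive_predictive_value(r=0.01, power=0.5, alpha=0.05, bias=0.10), 3))
```

Under these assumed values the post-study probability is well below one half, which is the sense in which most claimed findings in such a regime would be false.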

5,003 citations

Journal ArticleDOI
01 Aug 2005 - Chance
TL;DR: In this paper, the authors discuss the implications of these problems for the conduct and interpretation of research and conclude that the probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and the ratio of true to no relationships among the relationships probed in each scientific field.
Abstract: Summary: There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research. It can be proven that most claimed research findings are false.

4,999 citations

Journal ArticleDOI
TL;DR: It is shown that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings, flexibility in data collection, analysis, and reporting dramatically increases actual false- positive rates, and a simple, low-cost, and straightforwardly effective disclosure-based solution is suggested.
Abstract: In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
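A toy simulation in the same spirit (not the authors' own code) shows how one such researcher degree of freedom, testing two correlated outcomes and reporting whichever reaches significance, inflates the false-positive rate when there is no true effect; all parameters here are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def flexible_study(n_per_group: int = 10, alpha: float = 0.05) -> bool:
    """One simulated null study (no true effect) in which the analyst tests two
    correlated outcomes and claims a 'finding' if either reaches p < alpha."""
    group = np.repeat([0, 1], n_per_group)
    dv1 = rng.normal(size=2 * n_per_group)                           # outcome 1: pure noise
    dv2 = 0.5 * dv1 + rng.normal(scale=0.87, size=2 * n_per_group)   # outcome 2: correlated noise
    for dv in (dv1, dv2):
        _, p = stats.ttest_ind(dv[group == 0], dv[group == 1])
        if p < alpha:
            return True
    return False

false_positive_rate = np.mean([flexible_study() for _ in range(5000)])
print(false_positive_rate)   # runs noticeably above the nominal 5% level
```

Adding the other degrees of freedom the article describes (optional stopping, covariate choices, dropping conditions) pushes the rate higher still, which is why the authors argue for disclosure requirements rather than relying on the nominal alpha.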

4,727 citations


"Good practices for real-world data ..." refers background in this paper

  • ...Establishing study registration as an essential element in clinical research will discourage the practice of ad hoc data mining and selective choice of results that can occur in observational health care studies.(52) However, from the strict point of view of scientific discovery, study registration per se may be neither completely necessary (methodologically sound studies with large effect sizes may be useful regardless of study registration prior to their conduct) nor sufficient (study registration does not guarantee quality or prevent scientific fraud)....

    [...]