Author

Michael L. Cohen

Other affiliations: National Research Council
Bio: Michael L. Cohen is an academic researcher from the National Academy of Sciences. The author has contributed to research in topics: American Community Survey & Census. The author has an h-index of 6 and has co-authored 9 publications receiving 1,575 citations. Previous affiliations of Michael L. Cohen include the National Research Council.

Papers
Journal ArticleDOI
TL;DR: Methods for preventing missing data and, failing that, dealing with data that are missing in clinical trials are reviewed.
Abstract: Missing data in clinical trials can have a major effect on the validity of the inferences that can be drawn from the trial. This article reviews methods for preventing missing data and, failing that, dealing with data that are missing.

1,553 citations
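The review above treats multiple imputation as one of the standard ways of dealing with data that are missing. As a purely illustrative sketch (not taken from the paper), the following Python snippet generates several completed copies of a small, hypothetical dataset with scikit-learn's IterativeImputer; in practice each completed dataset would be analysed separately and the results pooled.

```python
# Illustrative sketch only: multiple imputation of a numeric dataset with
# scikit-learn's IterativeImputer. The data and column names are hypothetical,
# not taken from the paper above.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Hypothetical trial outcomes with some week-12 values missing.
df = pd.DataFrame({
    "baseline": rng.normal(50, 10, 200),
    "week_12": rng.normal(45, 12, 200),
})
df.loc[rng.choice(200, 40, replace=False), "week_12"] = np.nan

# Draw m completed datasets; sample_posterior=True makes each draw stochastic,
# which is what distinguishes multiple imputation from single imputation.
m = 5
completed = []
for seed in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed.append(pd.DataFrame(imputer.fit_transform(df), columns=df.columns))

# Each completed dataset is then analysed separately and the m results are
# pooled (for example with Rubin's rules, sketched further down this page).
```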

Journal ArticleDOI
TL;DR: This article summarizes recommendations on the design and conduct of clinical trials of a National Research Council study on missing data in clinical trials to limit missing data and addresses the panel's findings on analysis methods.
Abstract: This article summarizes recommendations on the design and conduct of clinical trials of a National Research Council study on missing data in clinical trials. Key findings of the study are that (a) substantial missing data is a serious problem that undermines the scientific credibility of causal conclusions from clinical trials; (b) the assumption that analysis methods can compensate for substantial missing data is not justified; hence (c) clinical trial design, including the choice of key causal estimands, the target population, and the length of the study, should include limiting missing data as one of its goals; (d) missing-data procedures should be discussed explicitly in the clinical trial protocol; (e) clinical trial conduct should take steps to limit the extent of missing data; (f) there is no universal method for handling missing data in the analysis of clinical trials: methods should be justified by the plausibility of the underlying scientific assumptions; and (g) when alternative assumptions are plausible, sensitivity analysis should be conducted to assess robustness of findings to these alternatives. This article focuses on the panel's recommendations on the design and conduct of clinical trials to limit missing data. A companion paper addresses the panel's findings on analysis methods.

64 citations
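Recommendation (g) above calls for sensitivity analyses when alternative missing-data assumptions are plausible. A common form is a delta-adjustment ("tipping point") analysis, in which values imputed for dropouts in one arm are shifted by increasingly unfavourable offsets until the trial conclusion changes. The sketch below is hypothetical (simulated outcomes and a deliberately crude mean imputation) and is meant only to show the shape of such an analysis, not the panel's procedure.

```python
# Illustrative delta-adjustment ("tipping point") sensitivity analysis.
# All data and the simple mean imputation are hypothetical; a real analysis
# would shift properly multiply-imputed values instead.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treat = rng.normal(1.0, 2.0, 120)      # observed outcomes, treatment arm
control = rng.normal(0.2, 2.0, 120)    # observed outcomes, control arm
n_missing_treat = 30                   # dropouts in the treatment arm

for delta in np.arange(0.0, 3.1, 0.5):
    # Impute dropouts at the observed treatment mean shifted down by delta,
    # i.e. assume missing patients did worse than those who stayed.
    imputed = np.full(n_missing_treat, treat.mean() - delta)
    treat_full = np.concatenate([treat, imputed])
    t, p = stats.ttest_ind(treat_full, control, equal_var=False)
    print(f"delta={delta:.1f}  mean diff={treat_full.mean() - control.mean():.2f}  p={p:.3f}")

# The "tipping point" is the smallest delta at which the treatment effect is no
# longer significant; judging whether that delta is plausible is a clinical question.
```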

Book ChapterDOI
TL;DR: The use of Latin Hypercube Sampling is proposed to create a synthetic data set that reproduces many of the essential features of an original data set while providing disclosure protection, offering multiple alternatives to current methods for providing disclosure protection for large data sets.
Abstract: We propose use of Latin Hypercube Sampling to create a synthetic data set that reproduces many of the essential features of an original data set while providing disclosure protection. The synthetic micro data can also be used to create either additive or multiplicative noise which, when merged with the original data, can provide disclosure protection. The technique can also be used to create hybrid micro data sets containing pre-determined mixtures of real and synthetic data. We demonstrate the basic properties of the synthetic data approach by applying the Latin Hypercube Sampling technique to a database supported by the Energy Information Administration. The use of Latin Hypercube Sampling, along with the goal of reproducing the rank correlation structure instead of the Pearson correlation structure, has not been previously applied to the disclosure protection problem. Given its properties, this technique offers multiple alternatives to current methods for providing disclosure protection for large data sets.

38 citations
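The abstract above describes drawing Latin Hypercube samples and targeting the rank (Spearman) correlation structure of the original data. The sketch below illustrates that general idea with SciPy's qmc module on a made-up two-column dataset: each synthetic column is mapped through the original empirical quantiles and then reordered to carry the original ranks. It is not the authors' implementation and omits the noise-addition and hybrid-data variants they describe.

```python
# Minimal, illustrative sketch (not the paper's implementation): draw a Latin
# Hypercube sample, map each column through the original empirical quantiles to
# preserve the margins, then reorder values so each column carries the same
# ranks as the original data, which reproduces the Spearman correlation.
import numpy as np
from scipy.stats import qmc, spearmanr

rng = np.random.default_rng(2)

# Hypothetical "confidential" data with two correlated columns.
x = rng.gamma(2.0, 10.0, 1000)
y = 0.7 * x + rng.normal(0.0, 5.0, 1000)
original = np.column_stack([x, y])
n, d = original.shape

# 1. Latin Hypercube points in [0, 1]^d, mapped through empirical quantiles.
u = qmc.LatinHypercube(d=d, seed=3).random(n)
synthetic = np.column_stack([np.quantile(original[:, j], u[:, j]) for j in range(d)])

# 2. Rank reordering: give each synthetic column the same within-column ranks
#    as the original, so the rank correlation structure is carried over.
ranks = np.argsort(np.argsort(original, axis=0), axis=0)
synthetic = np.column_stack([np.sort(synthetic[:, j])[ranks[:, j]] for j in range(d)])

print("original Spearman rho :", round(spearmanr(original[:, 0], original[:, 1])[0], 3))
print("synthetic Spearman rho:", round(spearmanr(synthetic[:, 0], synthetic[:, 1])[0], 3))
```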


Cited by
Journal ArticleDOI
TL;DR: Both once-daily regimens of edoxaban were noninferior to warfarin with respect to the prevention of stroke or systemic embolism and were associated with significantly lower rates of bleeding and death from cardiovascular causes.
Abstract: The annualized rate of the primary end point during treatment was 1.50% with warfarin (median time in the therapeutic range, 68.4%), as compared with 1.18% with high-dose edoxaban (hazard ratio, 0.79; 97.5% confidence interval [CI], 0.63 to 0.99; P<0.001 for noninferiority) and 1.61% with low-dose edoxaban (hazard ratio, 1.07; 97.5% CI, 0.87 to 1.31; P = 0.005 for noninferiority). In the intention-to-treat analysis, there was a trend favoring high-dose edoxaban versus warfarin (hazard ratio, 0.87; 97.5% CI, 0.73 to 1.04; P = 0.08) and an unfavorable trend with low-dose edoxaban versus warfarin (hazard ratio, 1.13; 97.5% CI, 0.96 to 1.34; P = 0.10). The annualized rate of major bleeding was 3.43% with warfarin versus 2.75% with high-dose edoxaban (hazard ratio, 0.80; 95% CI, 0.71 to 0.91; P<0.001) and 1.61% with low-dose edoxaban (hazard ratio, 0.47; 95% CI, 0.41 to 0.55; P<0.001). The corresponding annualized rates of death from cardiovascular causes were 3.17% versus 2.74% (hazard ratio, 0.86; 95% CI, 0.77 to 0.97; P = 0.01) and 2.71% (hazard ratio, 0.85; 95% CI, 0.76 to 0.96; P = 0.008), and the corresponding rates of the key secondary end point (a composite of stroke, systemic embolism, or death from cardiovascular causes) were 4.43% versus 3.85% (hazard ratio, 0.87; 95% CI, 0.78 to 0.96; P = 0.005) and 4.23% (hazard ratio, 0.95; 95% CI, 0.86 to 1.05; P = 0.32). CONCLUSIONS Both once-daily regimens of edoxaban were noninferior to warfarin with respect to the prevention of stroke or systemic embolism and were associated with significantly lower rates of bleeding and death from cardiovascular causes. (Funded by Daiichi Sankyo Pharma Development; ENGAGE AF-TIMI 48 ClinicalTrials.gov number, NCT00781391.)

3,988 citations
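The edoxaban comparisons above are reported as hazard ratios with 97.5% confidence intervals and noninferiority p-values. The short worked sketch below back-calculates the standard error of the log hazard ratio from each interval and tests it against a noninferiority margin; the margin of 1.38 is an assumption for illustration (it is not stated in the abstract), so the resulting p-values only roughly reproduce the published ones.

```python
# Back-of-the-envelope check of the noninferiority comparisons reported above.
# Point estimates and 97.5% CIs come from the abstract; the noninferiority
# margin of 1.38 on the hazard-ratio scale is an assumption for illustration,
# and rounding of the CI limits explains small discrepancies from the paper.
from math import log
from scipy.stats import norm

MARGIN = 1.38            # assumed margin (H0: true HR >= margin)
Z_CI = norm.ppf(0.9875)  # a two-sided 97.5% CI uses the 98.75th percentile

def noninferiority_p(hr, ci_low, ci_high):
    se = (log(ci_high) - log(ci_low)) / (2 * Z_CI)  # SE of log HR from the CI
    z = (log(MARGIN) - log(hr)) / se                # distance below the margin
    return norm.sf(z)                               # one-sided p-value

for label, hr, lo, hi in [("high-dose edoxaban", 0.79, 0.63, 0.99),
                          ("low-dose edoxaban", 1.07, 0.87, 1.31)]:
    print(f"{label}: upper CI {hi} vs margin {MARGIN}, "
          f"one-sided p = {noninferiority_p(hr, lo, hi):.4f}")
```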

Journal ArticleDOI
09 Jan 2013-BMJ
TL;DR: The SPIRIT 2013 Explanation and Elaboration paper provides important information to promote full understanding of the checklist recommendations and strongly recommends that this explanatory paper be used in conjunction with the SPIRIT Statement.
Abstract: High quality protocols facilitate proper conduct, reporting, and external review of clinical trials. However, the completeness of trial protocols is often inadequate. To help improve the content and quality of protocols, an international group of stakeholders developed the SPIRIT 2013 Statement (Standard Protocol Items: Recommendations for Interventional Trials). The SPIRIT Statement provides guidance in the form of a checklist of recommended items to include in a clinical trial protocol. This SPIRIT 2013 Explanation and Elaboration paper provides important information to promote full understanding of the checklist recommendations. For each checklist item, we provide a rationale and detailed description; a model example from an actual protocol; and relevant references supporting its importance. We strongly recommend that this explanatory paper be used in conjunction with the SPIRIT Statement. A website of resources is also available (www.spirit-statement.org). The SPIRIT 2013 Explanation and Elaboration paper, together with the Statement, should help with the drafting of trial protocols. Complete documentation of key trial elements can facilitate transparency and protocol review for the benefit of all stakeholders.

3,108 citations

Book
29 Mar 2012
TL;DR: Covers the problem of missing data; the concepts of MCAR, MAR and MNAR; simple solutions that do not (always) work; multiple imputation in a nutshell; and some dangers, some do's and some don'ts.
Abstract: Basics: Introduction (the problem of missing data; concepts of MCAR, MAR and MNAR; simple solutions that do not (always) work; multiple imputation in a nutshell; goal of the book; what the book does not cover; structure of the book; exercises). Multiple imputation (historic overview; incomplete data concepts; why and when multiple imputation works; statistical intervals and tests; evaluation criteria; when to use multiple imputation; how many imputations?; exercises). Univariate missing data (how to generate multiple imputations; imputation under the normal linear model; imputation under non-normal distributions; predictive mean matching; categorical data; other data types; classification and regression trees; multilevel data; non-ignorable methods; exercises). Multivariate missing data (missing data pattern; issues in multivariate imputation; monotone data imputation; joint modeling; fully conditional specification; FCS and JM; conclusion; exercises). Imputation in practice (overview of modeling choices; ignorable or non-ignorable?; model form and predictors; derived variables; algorithmic options; diagnostics; conclusion; exercises). Analysis of imputed data (what to do with the imputed data?; parameter pooling; statistical tests for multiple imputation; stepwise model selection; conclusion; exercises). Case studies: Measurement issues (too many columns; sensitivity analysis; correct prevalence estimates from self-reported data; enhancing comparability; exercises). Selection issues (correcting for selective drop-out; correcting for non-response; exercises). Longitudinal data (long and wide format; SE Fireworks Disaster Study; time raster imputation; conclusion; exercises). Extensions: Conclusion (some dangers, some do's and some don'ts; reporting; other applications; future developments; exercises). Appendices: Software (R, S-Plus, Stata, SAS, SPSS, other software). References. Author Index. Subject Index.

2,156 citations
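The book's chapters on analysing imputed data cover parameter pooling and statistical tests for multiple imputation. The following minimal sketch shows the pooling arithmetic usually attributed to Rubin: the pooled estimate is the mean of the per-imputation estimates, and the total variance combines within- and between-imputation variance. The numbers are invented purely to demonstrate the calculation.

```python
# Minimal sketch of Rubin's rules for pooling results from m imputed datasets:
# pooled estimate = mean of the per-imputation estimates; total variance =
# within-imputation variance + (1 + 1/m) * between-imputation variance.
# The numbers below are made up purely to show the arithmetic.
import numpy as np

estimates = np.array([1.82, 1.95, 1.78, 2.01, 1.88])       # estimate from each imputed dataset
variances = np.array([0.110, 0.105, 0.120, 0.098, 0.112])  # its squared standard error

m = len(estimates)
q_bar = estimates.mean()            # pooled point estimate
u_bar = variances.mean()            # within-imputation variance
b = estimates.var(ddof=1)           # between-imputation variance
t = u_bar + (1 + 1 / m) * b         # total variance of the pooled estimate

# Degrees of freedom for the pooled t reference distribution (Rubin, 1987).
r = (1 + 1 / m) * b / u_bar
df = (m - 1) * (1 + 1 / r) ** 2

print(f"pooled estimate {q_bar:.3f}, SE {np.sqrt(t):.3f}, df {df:.1f}")
```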

Journal ArticleDOI
TL;DR: A longer duration of delirium in the hospital was associated with worse global cognition and executive function scores at 3 and 12 months, and use of sedative or analgesic medications was not consistently associated with cognitive impairment at 3 or 12 months.
Abstract: METHODS We enrolled adults with respiratory failure or shock in the medical or surgical intensive care unit (ICU), evaluated them for in-hospital delirium, and assessed global cognition and executive function 3 and 12 months after discharge with the use of the Repeatable Battery for the Assessment of Neuropsychological Status (population age-adjusted mean [±SD] score, 100±15, with lower values indicating worse global cognition) and the Trail Making Test, Part B (population age-, sex-, and education-adjusted mean score, 50±10, with lower scores indicating worse executive function). Associations of the duration of delirium and the use of sedative or analgesic agents with the outcomes were assessed with the use of linear regression, with adjustment for potential confounders. RESULTS Of the 821 patients enrolled, 6% had cognitive impairment at baseline, and delirium developed in 74% during the hospital stay. At 3 months, 40% of the patients had global cognition scores that were 1.5 SD below the population means (similar to scores for patients with moderate traumatic brain injury), and 26% had scores 2 SD below the population means (similar to scores for patients with mild Alzheimer’s disease). Deficits occurred in both older and younger patients and persisted, with 34% and 24% of all patients assessed at 12 months having scores similar to those for patients with moderate traumatic brain injury and mild Alzheimer’s disease, respectively. A longer duration of delirium was independently associated with worse global cognition at 3 and 12 months (P = 0.001 and P = 0.04, respectively) and worse executive function at 3 and 12 months (P = 0.004 and P = 0.007, respectively). Use of sedative or analgesic medications was not consistently associated with cognitive impairment at 3 and 12 months. CONCLUSIONS Patients in medical and surgical ICUs are at high risk for long-term cognitive impairment. A longer duration of delirium in the hospital was associated with worse global cognition and executive function scores at 3 and 12 months. (Funded by the National Institutes of Health and others; BRAIN-ICU ClinicalTrials.gov number, NCT00392795.)

1,765 citations
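The methods above describe assessing the association between delirium duration and cognitive outcomes with linear regression adjusted for potential confounders. The sketch below shows that analysis pattern on simulated data with statsmodels; the variable names and effect sizes are hypothetical and are not the BRAIN-ICU study data.

```python
# Illustrative covariate-adjusted linear regression of the kind described in
# the methods. All variables below are simulated; only the analysis pattern
# (outcome ~ exposure + confounders) is shown.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "delirium_days": rng.poisson(3, n),
    "age": rng.normal(60, 12, n),
    "education_years": rng.normal(13, 3, n),
})
# Simulated cognition score: lower with longer delirium, plus covariate effects.
df["cognition_3mo"] = (100 - 1.5 * df["delirium_days"]
                       - 0.2 * (df["age"] - 60) + rng.normal(0, 12, n))

model = smf.ols("cognition_3mo ~ delirium_days + age + education_years", data=df).fit()
print(model.params["delirium_days"], model.pvalues["delirium_days"])
```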

Journal ArticleDOI
TL;DR: Among patients with heart failure and moderate‐to‐severe or severe secondary mitral regurgitation who remained symptomatic despite the use of maximal doses of guideline‐directed medical therapy, transcatheter mitral‐valve repair resulted in a lower rate of hospitalization for heart failure and lower all‐cause mortality within 24 months of follow‐up than medical therapy alone.
Abstract: Background Among patients with heart failure who have mitral regurgitation due to left ventricular dysfunction, the prognosis is poor. Transcatheter mitral-valve repair may improve their clinical outcomes. Methods At 78 sites in the United States and Canada, we enrolled patients with heart failure and moderate-to-severe or severe secondary mitral regurgitation who remained symptomatic despite the use of maximal doses of guideline-directed medical therapy. Patients were randomly assigned to transcatheter mitral-valve repair plus medical therapy (device group) or medical therapy alone (control group). The primary effectiveness end point was all hospitalizations for heart failure within 24 months of follow-up. The primary safety end point was freedom from device-related complications at 12 months; the rate for this end point was compared with a prespecified objective performance goal of 88.0%. Results Of the 614 patients who were enrolled in the trial, 302 were assigned to the device group and 312 to the control group.

1,758 citations
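The safety analysis above compares the rate of freedom from device-related complications with a prespecified objective performance goal of 88.0%. One conventional way to make such a comparison is a one-sided exact binomial test, sketched below with hypothetical counts; only the 88.0% goal comes from the abstract.

```python
# Sketch of comparing an observed freedom-from-complications rate with a
# prespecified objective performance goal. The counts are hypothetical;
# only the 88.0% performance goal is taken from the abstract above.
from scipy.stats import binomtest

goal = 0.88          # prespecified objective performance goal
events_free = 290    # hypothetical: patients free from device-related complications
n_evaluable = 302    # hypothetical: evaluable patients in the device group

# One-sided exact test of H0: true event-free rate <= goal.
result = binomtest(events_free, n_evaluable, p=goal, alternative="greater")
print(f"observed rate {events_free / n_evaluable:.3f}, one-sided p = {result.pvalue:.4f}")
```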