Showing papers in "Journal of Clinical Epidemiology in 2011"
••
TL;DR: The GRADE process begins with asking an explicit question, including specification of all important outcomes, and provides explicit criteria for rating the quality of evidence that include study design, risk of bias, imprecision, inconsistency, indirectness, and magnitude of effect.
6,093 citations
••
TL;DR: The approach of GRADE to rating quality of evidence specifies four categories-high, moderate, low, and very low-that are applied to a body of evidence, not to individual studies.
5,228 citations
••
TL;DR: Bayesian methodology offers a multitude of ways to present results from multiple-treatment meta-analysis (MTM) models, as it enables natural and easy estimation of all measures based on probabilities, ranks, or predictions.
2,337 citations
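The rank-based summaries mentioned above can be sketched briefly: given posterior draws of treatment effects, the probability that each treatment attains each rank follows by counting orderings across draws. This is a minimal illustration, not the paper's implementation; the treatment names and effect distributions below are invented, and higher effect is assumed better.

```python
import random

def rank_probabilities(samples):
    """Probability that each treatment attains each rank, from posterior draws.

    `samples`: list of dicts mapping treatment -> effect draw for one
    posterior iteration (higher effect assumed better here).
    Returns {treatment: [P(rank 1), P(rank 2), ...]}.
    """
    treatments = sorted(samples[0])
    k = len(treatments)
    counts = {t: [0] * k for t in treatments}
    for draw in samples:
        ordered = sorted(treatments, key=lambda t: draw[t], reverse=True)
        for rank, t in enumerate(ordered):
            counts[t][rank] += 1
    n = len(samples)
    return {t: [c / n for c in counts[t]] for t in treatments}

random.seed(1)
# Hypothetical posterior draws for three treatments
draws = [{"A": random.gauss(1.0, 0.5),
          "B": random.gauss(0.5, 0.5),
          "C": random.gauss(0.0, 0.5)} for _ in range(2000)]
probs = rank_probabilities(draws)
print({t: round(p[0], 2) for t, p in probs.items()})  # P(best) per treatment
```

In a full MTM analysis these draws would come from an MCMC sampler over the network model; the counting step is the same.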
••
McMaster University1, University Hospital of Basel2, Autonomous University of Barcelona3, Mayo Clinic4, University at Buffalo5, University of South Florida6, Case Western Reserve University7, Oregon Health & Science University8, Duke University9, United States Department of Veterans Affairs10, University Medical Center Freiburg11
TL;DR: In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if most of the relevant evidence comes from studies that suffer from a high risk of bias.
2,059 citations
••
TL;DR: This article introduces a 20-part series providing guidance for the use of GRADE methodology that will appear in the Journal of Clinical Epidemiology.
1,975 citations
••
McMaster University1, University Hospital of Basel2, Autonomous University of Barcelona3, Harvard University4, Mayo Clinic5, Karolinska University Hospital6, Duke University7, Liverpool School of Tropical Medicine8, Case Western Reserve University9, University Medical Center Freiburg10, Centre for Mental Health11, Vanderbilt University12
TL;DR: It is suggested that examination of 95% confidence intervals (CIs) provides the optimal primary approach to decisions regarding imprecision and rating down the quality of evidence is required if clinical action would differ if the upper versus the lower boundary of the CI represented the truth.
1,844 citations
••
TL;DR: In this paper, the authors developed guidelines for reporting inter-rater and intra-rater reliability and agreement studies, proposing 15 issues that should be addressed when reporting such studies.
1,605 citations
••
McMaster University1, Norwegian Institute of Public Health2, University of Basel3, University of London4, Oregon Health & Science University5, Autonomous University of Barcelona6, Bond University7, University at Buffalo8, University of Florida9, Health Canada10, Medical Research Council11, Case Western Reserve University12
TL;DR: Credibility is increased if subgroup effects are based on a small number of a priori hypotheses with a specified direction, subgroup comparisons come from within rather than between studies, tests of interaction generate low P-values, and the effects have a biological rationale.
1,535 citations
••
McMaster University1, Mayo Clinic2, University Hospital of Basel3, Autonomous University of Barcelona4, University of South Florida5, United States Department of Veterans Affairs6, Case Western Reserve University7, Duke University8, University Medical Center Freiburg9, Oregon Health & Science University10, University at Buffalo11
TL;DR: In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if a body of evidence is associated with a high risk of publication bias.
1,295 citations
••
TL;DR: In considering the importance of a surrogate outcome, authors should rate the importance of the patient-important outcome for which the surrogate is a substitute and subsequently rate down the quality of evidence for indirectness of outcome.
1,280 citations
••
TL;DR: Decisions regarding indirectness of patients and interventions depend on an understanding of whether biological or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect.
••
McMaster University1, Norwegian Institute of Public Health2, University of Florida3, Bond University4, University at Buffalo5, Autonomous University of Barcelona6, United States Department of Veterans Affairs7, University of Basel8, Mayo Clinic9, Harvard University10, University of Freiburg11, Agency for Healthcare Research and Quality12, Oregon Health & Science University13, Case Western Reserve University14
TL;DR: Systematic review authors and guideline developers may also consider rating up quality of evidence when a dose-response gradient is present, and when all plausible confounders or biases would decrease an apparent treatment effect, or would create a spurious effect when results suggest no effect.
••
TL;DR: The combined score may offer improvements in comorbidity summarization over existing scores in similar populations and data settings and yielded positive values for two recently proposed measures of reclassification.
••
TL;DR: Recommendations for conducting quantitative synthesis, or meta-analysis, using study-level data in comparative effectiveness reviews (CERs) for the Evidence-based Practice Center (EPC) program of the Agency for Healthcare Research and Quality are established.
••
TL;DR: This study is the first to address minimally important differences (MIDs) for PROMIS measures in advanced-stage cancer patients by combining anchor- and distribution-based methods and focusing on item response theory-based MIDs estimated on a T-score scale.
••
TL;DR: The stepped wedge CRCT design has been mainly used for evaluating interventions during routine implementation, particularly for interventions that have been shown to be effective in more controlled research settings, or where there is lack of evidence of effectiveness but there is a strong belief that they will do more good than harm.
••
TL;DR: An overview of the science and practice of knowledge translation is provided and a conceptual framework developed by Graham et al., termed the knowledge-to-action cycle, provides an approach that builds on the commonalities found in an assessment of planned action theories.
••
TL;DR: This perspective disagrees with the authors that adherence assessment should involve both causal and effect indicators: “Direct measurement may be undesirable because it does not provide information on why people are not taking their medications as prescribed, which may be important for designing interventions” (page 5).
••
TL;DR: Health literacy is not consistently measured, making it difficult to interpret and compare health literacy at individual and population levels, and more comprehensive health literacy instruments need to be developed.
••
TL;DR: Use of a reporting checklist, such as the one created for this study by modifying the STARD criteria, could improve the quality of reporting of validation studies, allowing for accurate application of algorithms, and interpretation of research using health administrative data.
••
TL;DR: There is no single rule based on events per variable (EPV) that guarantees accurate estimation of logistic regression parameters; the number of predictors, the probable size of the regression coefficients based on previous literature, and the correlations among the predictors must also be taken into account when determining the necessary sample size.
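To make the EPV idea concrete, here is a minimal sketch of the conventional EPV ≥ 10 heuristic, which this study argues is insufficient on its own. The event rate, predictor count, and threshold are hypothetical inputs for illustration, not values from the paper.

```python
import math

def min_sample_size(n_predictors: int, event_rate: float, epv: float = 10.0) -> int:
    """Minimum sample size so that events per variable (EPV) >= `epv`.

    EPV = (number of events) / (number of candidate predictors).
    `epv=10` is the conventional heuristic; the study summarized above
    shows no single EPV threshold guarantees accurate estimates.
    """
    if not 0 < event_rate <= 1:
        raise ValueError("event_rate must be in (0, 1]")
    required_events = epv * n_predictors
    return math.ceil(required_events / event_rate)

# Example: 8 candidate predictors, 20% expected event rate
# -> needs 80 events, i.e. 400 participants.
print(min_sample_size(8, 0.20))  # → 400
```

The paper's point is precisely that such a single-threshold calculation should be tempered by the expected coefficient sizes and predictor correlations.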
••
TL;DR: The terminology and concepts relevant to this bias are explored and a more systematic nomenclature than what is currently used is proposed.
••
TL;DR: The basic structure for a comprehensive CAT is suggested that requires further study to verify its overall usefulness and users of CATs should be careful about which CAT they use and how they use it.
••
TL;DR: Osteoporotic attribution scores for all fracture sites were determined by a multidisciplinary expert panel to provide an evidence-based continuum of the likelihood of a fracture being associated with osteoporosis.
••
TL;DR: The power to detect an effect size of 1.0 appeared reasonable for many practical applications with a moderate or large number of time points equally divided around the intervention, but investigators should be cautious when the expected effect size is small or the number of time points is small.
••
TL;DR: The three-component model of SF-36 scores in Japan is better than the two-component model and provides more useful PCS and MCS scores.
••
TL;DR: The objective of this article is to provide practical information for researchers and knowledge users as they consider what to include in dissemination and exchange plans developed as part of grant applications.
••
TL;DR: In this article, the authors emphasized the importance of latent variables and their relationship to cause and effect indicators in psychometric analyses, pointed out that the numerous intra-psychic, environmental, and sociostructural determinants of behavior are predictive of, but not equal to, the behavior itself, and noted that factor analysis is a key approach to determining the dimensionality of medication adherence behavior.
••
TL;DR: In this article, the authors developed two physical function (PF) item pools comprising 32 mobility and 38 upper-extremity items and evaluated scale dimensionality and sources of local dependence (LD) with factor analysis.
••
TL;DR: Empiric data from diverse meta-analyses demonstrate similar treatment effects and no large differences in heterogeneity for the ratio of means (RoM) compared with difference-based methods.
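The RoM comparison above can be sketched with a short example: per study, the log ratio of means and its standard error are obtained via the delta method, then pooled by fixed-effect inverse-variance weighting. All study values below are hypothetical, and this is one common formulation rather than the paper's exact method.

```python
import math

def log_rom(m1, sd1, n1, m2, sd2, n2):
    """Log ratio of means (group 1 / group 2) and its SE via the delta method."""
    lr = math.log(m1 / m2)
    se = math.sqrt(sd1**2 / (n1 * m1**2) + sd2**2 / (n2 * m2**2))
    return lr, se

def pooled_rom(studies):
    """Fixed-effect inverse-variance pooling on the log scale; returns pooled RoM."""
    w_sum = wx_sum = 0.0
    for s in studies:
        lr, se = log_rom(*s)
        w = 1.0 / se**2          # inverse-variance weight
        w_sum += w
        wx_sum += w * lr
    return math.exp(wx_sum / w_sum)

# Hypothetical studies: (mean1, sd1, n1, mean2, sd2, n2)
studies = [(12.0, 4.0, 50, 10.0, 4.0, 50),
           (15.0, 5.0, 80, 12.5, 5.0, 80)]
print(round(pooled_rom(studies), 3))  # → 1.2
```

A random-effects variant would add a between-study variance component to each weight; the log-scale pooling step is otherwise unchanged.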