
Showing papers in "Journal of Clinical Epidemiology in 2011"


Journal ArticleDOI
TL;DR: The GRADE process begins with asking an explicit question, including specification of all important outcomes, and provides explicit criteria for rating the quality of evidence that include study design, risk of bias, imprecision, inconsistency, indirectness, and magnitude of effect.

6,093 citations


Journal ArticleDOI
TL;DR: The approach of GRADE to rating quality of evidence specifies four categories-high, moderate, low, and very low-that are applied to a body of evidence, not to individual studies.

5,228 citations


Journal ArticleDOI
TL;DR: Bayesian methodology offers a multitude of ways to present results from multiple-treatment meta-analysis (MTM) models, as it enables natural and easy estimation of all measures based on probabilities, ranks, or predictions.

2,337 citations
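As a minimal sketch (not the paper's code or data) of how rank-based summaries arise from a Bayesian MTM: given posterior draws of each treatment's effect, the probability that a treatment attains each rank is simply the proportion of draws in which it does. The treatment labels, simulated effects, and the assumption that lower values are better are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
treatments = ["A", "B", "C"]
# Hypothetical posterior draws (n_samples x n_treatments) of treatment effects;
# in practice these would come from the fitted Bayesian MTM model.
draws = rng.normal(loc=[0.0, -0.3, -0.1], scale=0.2, size=(4000, 3))

# Rank treatments within each posterior sample (rank 1 = best, i.e., lowest effect).
ranks = draws.argsort(axis=1).argsort(axis=1) + 1

# P(treatment t has rank r), estimated as the share of posterior samples with that rank.
rank_probs = np.stack([(ranks == r).mean(axis=0) for r in range(1, 4)], axis=1)
for name, probs in zip(treatments, rank_probs):
    print(name, np.round(probs, 3))
```

A single-number ranking summary can then be formed by averaging each treatment's cumulative rank probabilities (the surface under the cumulative ranking curve, SUCRA, used in this literature).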


Journal ArticleDOI
TL;DR: In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if most of the relevant evidence comes from studies that suffer from a high risk of bias.

2,059 citations


Journal ArticleDOI
TL;DR: This article introduces a 20-part series providing guidance for the use of GRADE methodology that will appear in the Journal of Clinical Epidemiology.

1,975 citations


Journal ArticleDOI
TL;DR: It is suggested that examination of 95% confidence intervals (CIs) provides the optimal primary approach to decisions regarding imprecision; rating down the quality of evidence is required if clinical action would differ depending on whether the upper or the lower boundary of the CI represented the truth.

1,844 citations
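For orientation only (the numbers below are invented, not taken from the paper): a 95% CI for a relative risk is usually formed on the log scale,

$\mathrm{CI}_{95\%} = \exp\!\big(\ln\widehat{\mathrm{RR}} \pm 1.96 \times \mathrm{SE}_{\ln\mathrm{RR}}\big)$.

If, say, the interval runs from 0.70 to 1.05 and the clinically relevant decision threshold is a relative risk of 0.90, recommended action would differ depending on which boundary represented the truth, which is exactly the situation in which the entry above calls for rating down for imprecision.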


Journal ArticleDOI
TL;DR: In this paper, the authors developed guidelines for reporting studies of interrater and intrarater reliability and agreement, and proposed 15 issues that should be addressed when reporting such studies.

1,605 citations


Journal ArticleDOI
TL;DR: Credibility is increased if subgroup effects are based on a small number of a priori hypotheses with a specified direction, if subgroup comparisons come from within rather than between studies, if tests of interaction generate low P-values, and if the subgroup effects have a biological rationale.

1,535 citations


Journal ArticleDOI
TL;DR: In the GRADE approach, randomized trials start as high-quality evidence and observational studies as low-quality evidence, but both can be rated down if a body of evidence is associated with a high risk of publication bias.

1,295 citations


Journal ArticleDOI
TL;DR: In considering the importance of a surrogate outcome, authors should rate the importance of the patient-important outcome for which the surrogate is a substitute and subsequently rate down the quality of evidence for indirectness of outcome.

1,280 citations


Journal ArticleDOI
TL;DR: Decisions regarding indirectness of patients and interventions depend on an understanding of whether biological or social factors are sufficiently different that one might expect substantial differences in the magnitude of effect.

Journal ArticleDOI
TL;DR: Systematic review authors and guideline developers may also consider rating up quality of evidence when a dose-response gradient is present, and when all plausible confounders or biases would decrease an apparent treatment effect, or would create a spurious effect when results suggest no effect.

Journal ArticleDOI
TL;DR: The combined score may offer improvements in comorbidity summarization over existing scores in similar populations and data settings, and it yielded positive values for two recently proposed measures of reclassification.
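The two reclassification measures are not named in this summary; for orientation only, the measures most commonly cited in this literature at the time are the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI),

$\mathrm{NRI} = \big[P(\mathrm{up}\mid\mathrm{event}) - P(\mathrm{down}\mid\mathrm{event})\big] + \big[P(\mathrm{down}\mid\mathrm{nonevent}) - P(\mathrm{up}\mid\mathrm{nonevent})\big]$

$\mathrm{IDI} = \big(\bar{p}^{\,\mathrm{new}}_{\mathrm{events}} - \bar{p}^{\,\mathrm{old}}_{\mathrm{events}}\big) - \big(\bar{p}^{\,\mathrm{new}}_{\mathrm{nonevents}} - \bar{p}^{\,\mathrm{old}}_{\mathrm{nonevents}}\big)$

where "up" and "down" denote movement to a higher or lower risk category under the newer model and $\bar{p}$ is a mean predicted risk; whether these are the two measures used in the study is not stated in the summary.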

Journal ArticleDOI
TL;DR: Recommendations for conducting quantitative synthesis, or meta-analysis, using study-level data in comparative effectiveness reviews (CERs) for the Evidence-based Practice Center (EPC) program of the Agency for Healthcare Research and Quality are established.

Journal ArticleDOI
TL;DR: This study is the first to address minimally important differences (MIDs) for PROMIS measures in advanced-stage cancer patients by combining anchor- and distribution-based methods and focusing on item response theory-based MIDs estimated on a T-score scale.
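As a generic point of reference (these are common distribution-based conventions, not the MID estimates reported in the study): PROMIS measures are scored on a T-score metric with mean 50 and standard deviation 10 in the reference population, so a half-SD benchmark is 5 T-score points, and a one-SEM benchmark with an assumed reliability of $r = 0.90$ is

$\mathrm{SEM} = \mathrm{SD}\sqrt{1 - r} = 10\sqrt{1 - 0.90} \approx 3.2$ T-score points.

Anchor-based methods instead tie score change to an external criterion such as a patient-rated global impression of change.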

Journal ArticleDOI
TL;DR: The stepped wedge cluster randomized controlled trial (CRCT) design has mainly been used for evaluating interventions during routine implementation, particularly interventions that have been shown to be effective in more controlled research settings, or where evidence of effectiveness is lacking but there is a strong belief that they will do more good than harm.

Journal ArticleDOI
TL;DR: An overview of the science and practice of knowledge translation is provided and a conceptual framework developed by Graham et al., termed the knowledge-to-action cycle, provides an approach that builds on the commonalities found in an assessment of planned action theories.

Journal ArticleDOI
TL;DR: This perspective disagrees with the authors' view that adherence assessment should involve both causal and effect indicators: “Direct measurement may be undesirable because it does not provide information on why people are not taking their medications as prescribed, which may be important for designing interventions” (page 5).

Journal ArticleDOI
TL;DR: Health literacy is not consistently measured, making it difficult to interpret and compare health literacy at individual and population levels; more comprehensive health literacy instruments need to be developed.

Journal ArticleDOI
TL;DR: Use of a reporting checklist, such as the one created for this study by modifying the STARD criteria, could improve the quality of reporting of validation studies, allowing for accurate application of algorithms, and interpretation of research using health administrative data.

Journal ArticleDOI
TL;DR: There is no single rule based on events per variable (EPV) that would guarantee accurate estimation of logistic regression parameters; the number of predictors, the probable size of the regression coefficients based on previous literature, and the correlations among the predictors must be taken into account when determining the necessary sample size.
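As a rough planning sketch only (the entry above stresses that no single EPV rule guarantees accurate estimation), the hypothetical helper below turns a target events-per-variable value into a minimum sample size from the number of candidate predictors and the expected event rate; the function name and default EPV of 10 are illustrative, not taken from the paper.

```python
import math

def min_sample_size(n_predictors: int, event_rate: float, epv: float = 10.0) -> int:
    """Smallest n whose expected number of events reaches epv * n_predictors."""
    events_needed = epv * n_predictors
    return math.ceil(events_needed / event_rate)

# Example: 8 candidate predictors and a ~20% expected event rate give n >= 400.
# Coefficient magnitudes and predictor correlations, which the paper highlights,
# are not captured by this simple check.
print(min_sample_size(n_predictors=8, event_rate=0.20))
```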

Journal ArticleDOI
TL;DR: The terminology and concepts relevant to this bias are explored and a more systematic nomenclature than what is currently used is proposed.

Journal ArticleDOI
TL;DR: A basic structure for a comprehensive critical appraisal tool (CAT) is suggested, although it requires further study to verify its overall usefulness; users of CATs should be careful about which CAT they use and how they use it.

Journal ArticleDOI
TL;DR: Osteoporotic attribution scores for all fracture sites were determined by a multidisciplinary expert panel to provide an evidence-based continuum of the likelihood of a fracture being associated with osteoporosis.

Journal ArticleDOI
TL;DR: The power to detect an effect size of 1.0 appeared reasonable for many practical applications with a moderate or large number of time points equally divided around the intervention; investigators should be cautious when the expected effect size is small or the number of time points is small.

Journal ArticleDOI
TL;DR: The three-component model of SF-36 scores in Japan is better than the two-component model, and it provides more useful PCS and MCS scores.

Journal ArticleDOI
TL;DR: The objective of this article is to provide practical information for researchers and knowledge users as they consider what to include in dissemination and exchange plans developed as part of grant applications.

Journal ArticleDOI
TL;DR: In this article, the authors emphasized the importance of latent variables and their relationship to cause and effect indicators in psychometric analyses. They also pointed out that the numerous intrapsychic, environmental, and sociostructural determinants of the behavior are predictive of, but not equal to, the behavior itself, and that factor analysis is one of the key approaches to determining the dimensionality of medication adherence behavior.

Journal ArticleDOI
TL;DR: In this article, the authors developed two physical function (PF) item pools comprising 32 mobility and 38 upper extremity items and evaluated scale dimensionality and sources of local dependence (LD) with factor analysis.

Journal ArticleDOI
TL;DR: Empirical data from diverse meta-analyses demonstrate similar treatment effects and no large differences in heterogeneity for the ratio of means (RoM) compared with difference-based methods.
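For orientation, the RoM effect measure referred to above is conventionally analyzed on the log scale; with generic notation (treatment and control means $\bar{x}_T, \bar{x}_C$, standard deviations $s_T, s_C$, and sample sizes $n_T, n_C$), a delta-method standard error is

$\ln\mathrm{RoM} = \ln\!\frac{\bar{x}_T}{\bar{x}_C}, \qquad \mathrm{SE}(\ln\mathrm{RoM}) = \sqrt{\frac{s_T^2}{n_T\,\bar{x}_T^2} + \frac{s_C^2}{n_C\,\bar{x}_C^2}},$

after which study-level log-RoM estimates can be pooled with the same fixed-effect or random-effects machinery used for difference-based measures.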