
Showing papers in "JAMA in 2011"


Journal ArticleDOI
05 Jan 2011-JAMA
TL;DR: In this pooled analysis of individual data from 9 selected cohorts, gait speed was associated with survival in older adults, and survival predictions based on age, sex, and gait speed were as accurate as those based on age, sex, use of mobility aids, and self-reported function.
Abstract: Context Survival estimates help individualize goals of care for geriatric patients, but life tables fail to account for the great variability in survival. Physical performance measures, such as gait speed, might help account for variability, allowing clinicians to make more individualized estimates. Objective To evaluate the relationship between gait speed and survival. Design, Setting, and Participants Pooled analysis of 9 cohort studies (collected between 1986 and 2000), using individual data from 34 485 community-dwelling older adults aged 65 years or older with baseline gait speed data, followed up for 6 to 21 years. Participants were a mean (SD) age of 73.5 (5.9) years; 59.6%, women; and 79.8%, white; and had a mean (SD) gait speed of 0.92 (0.27) m/s. Main Outcome Measures Survival rates and life expectancy. Results There were 17 528 deaths; the overall 5-year survival rate was 84.8% (95% confidence interval [CI], 79.6%-88.8%) and 10-year survival rate was 59.7% (95% CI, 46.5%-70.6%). Gait speed was associated with survival in all studies (pooled hazard ratio per 0.1 m/s, 0.88; 95% CI, 0.87-0.90; P < .001). Conclusion In this pooled analysis of individual data from 9 selected cohorts, gait speed was associated with survival in older adults.
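
A rough way to read the pooled estimate above (an extrapolation that assumes the proportional-hazards model is log-linear across the gait-speed range, which the abstract does not state): the hazard ratio for a 0.5 m/s difference in gait speed is the per-0.1 m/s ratio compounded five times,

\[ \mathrm{HR}_{0.5\,\mathrm{m/s}} \approx 0.88^{5} \approx 0.53, \]

that is, roughly half the mortality hazard for an older adult walking 0.5 m/s faster, all else being equal.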

3,393 citations


Journal ArticleDOI
09 Feb 2011-JAMA
TL;DR: Among patients with limited SLN metastatic breast cancer treated with breast conservation and systemic therapy, the use of SLND alone compared with ALND did not result in inferior survival; overall survival was the primary end point, with a noninferiority margin of a 1-sided hazard ratio of less than 1.3 indicating that SLND alone is noninferior.
Abstract: Results 5-year overall survival was 91.8% (95% confidence interval [CI], 89.1%-94.5%) with ALND and 92.5% (95% CI, 90.0%-95.1%) with SLND alone; 5-year disease-free survival was 82.2% (95% CI, 78.3%-86.3%) with ALND and 83.9% (95% CI, 80.2%-87.9%) with SLND alone. The hazard ratio for treatment-related overall survival was 0.79 (90% CI, 0.56-1.11) without adjustment and 0.87 (90% CI, 0.62-1.23) after adjusting for age and adjuvant therapy. Conclusion Among patients with limited SLN metastatic breast cancer treated with breast conservation and systemic therapy, the use of SLND alone compared with ALND did not result in inferior survival.
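
A sketch of how the noninferiority criterion above is checked (the standard reading of a noninferiority margin, not additional trial data): SLND alone is declared noninferior if the upper limit of the 90% CI for the hazard ratio stays below the prespecified margin of 1.3. With the adjusted estimate,

\[ \widehat{\mathrm{HR}} = 0.87, \qquad \mathrm{UCL}_{90\%} = 1.23 < 1.30, \]

so the criterion is met.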

2,608 citations


Journal ArticleDOI
19 Oct 2011-JAMA
TL;DR: Most current readmission risk prediction models that were designed for either comparative or clinical purposes perform poorly. Although such models may prove useful in certain settings, efforts to improve their performance are needed as use becomes more widespread.
Abstract: Context Predicting hospital readmission risk is of great interest to identify which patients would benefit most from care transition interventions, as well as to risk-adjust readmission rates for the purposes of hospital comparison. Objective To summarize validated readmission risk prediction models, describe their performance, and assess suitability for clinical or administrative use. Data Sources and Study Selection The databases of MEDLINE, CINAHL, and the Cochrane Library were searched from inception through March 2011, the EMBASE database was searched through August 2011, and hand searches were performed of the retrieved reference lists. Dual review was conducted to identify studies published in the English language of prediction models tested with medical patients in both derivation and validation cohorts. Data Extraction Data were extracted on the population, setting, sample size, follow-up interval, readmission rate, model discrimination and calibration, type of data used, and timing of data collection. Data Synthesis Of 7843 citations reviewed, 30 studies of 26 unique models met the inclusion criteria. The most common outcome used was 30-day readmission; only 1 model specifically addressed preventable readmissions. Fourteen models that relied on retrospective administrative data could be potentially used to risk-adjust readmission rates for hospital comparison; of these, 9 were tested in large US populations and had poor discriminative ability (c statistic range: 0.55-0.65). Seven models could potentially be used to identify high-risk patients for intervention early during a hospitalization (c statistic range: 0.56-0.72), and 5 could be used at hospital discharge (c statistic range: 0.68-0.83). Six studies compared different models in the same population and 2 of these found that functional and social variables improved model discrimination. Although most models incorporated variables for medical comorbidity and use of prior medical services, few examined variables associated with overall health and function, illness severity, or social determinants of health. Conclusions Most current readmission risk prediction models that were designed for either comparative or clinical purposes perform poorly. Although in certain settings such models may prove useful, efforts to improve their performance are needed as use becomes more widespread.
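
For context on the c statistics quoted above (a standard definition, not specific to any one model): the c statistic is the probability that a randomly chosen readmitted patient is assigned a higher predicted risk than a randomly chosen patient who is not readmitted,

\[ c = \Pr\!\left(\hat{p}_i > \hat{p}_j \mid y_i = 1,\; y_j = 0\right), \]

so 0.5 is chance discrimination, and the 0.55-0.65 range reported for the administrative models is only modestly better than chance.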

1,593 citations


Journal ArticleDOI
12 Oct 2011-JAMA
TL;DR: Dietary supplementation with vitamin E significantly increased the risk of prostate cancer among relatively healthy men.
Abstract: Context The initial report of the Selenium and Vitamin E Cancer Prevention Trial (SELECT) found no reduction in risk of prostate cancer with either selenium or vitamin E supplements but a statistically nonsignificant increase in prostate cancer risk with vitamin E. Longer follow-up and more prostate cancer events provide further insight into the relationship of vitamin E and prostate cancer. Objective To determine the long-term effect of vitamin E and selenium on risk of prostate cancer in relatively healthy men. Design, Setting, and Participants A total of 35 533 men from 427 study sites in the United States, Canada, and Puerto Rico were randomized between August 22, 2001, and June 24, 2004. Eligibility criteria included a prostate-specific antigen (PSA) of 4.0 ng/mL or less, a digital rectal examination not suspicious for prostate cancer, and age 50 years or older for black men and 55 years or older for all others. The primary analysis included 34 887 men who were randomly assigned to 1 of 4 treatment groups: 8752 to receive selenium; 8737, vitamin E; 8702, both agents; and 8696, placebo. Analyses reflect the final data collected by the study sites on their participants through July 5, 2011. Interventions Oral selenium (200 μg/d from L-selenomethionine) with matched vitamin E placebo, vitamin E (400 IU/d of all rac-α-tocopheryl acetate) with matched selenium placebo, both agents, or both matched placebos for a planned follow-up of a minimum of 7 and maximum of 12 years. Main Outcome Measures Prostate cancer incidence. Results This report includes 54 464 additional person-years of follow-up and 521 additional cases of prostate cancer since the primary report. Compared with the placebo (referent group) in which 529 men developed prostate cancer, 620 men in the vitamin E group developed prostate cancer (hazard ratio [HR], 1.17; 99% CI, 1.004-1.36; P = .008); as did 575 in the selenium group (HR, 1.09; 99% CI, 0.93-1.27; P = .18), and 555 in the selenium plus vitamin E group (HR, 1.05; 99% CI, 0.89-1.22; P = .46). Compared with placebo, the absolute increase in risk of prostate cancer per 1000 person-years was 1.6 for vitamin E, 0.8 for selenium, and 0.4 for the combination. Conclusion Dietary supplementation with vitamin E significantly increased the risk of prostate cancer among healthy men. Trial Registration Clinicaltrials.gov Identifier: NCT00006392
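
Hedged arithmetic on the absolute risk reported above (using only the trial's own numbers): an excess of 1.6 prostate cancer cases per 1000 person-years corresponds to roughly

\[ \mathrm{NNH} \approx \frac{1000}{1.6} \approx 625 \ \text{person-years of supplementation per additional case}, \]

or on the order of one extra case for every 50 to 90 men taking vitamin E over the trial's 7- to 12-year follow-up.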

1,448 citations


Journal ArticleDOI
07 Sep 2011-JAMA
TL;DR: In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
Abstract: Context Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education. Objective To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention. Data Sources Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Study Selection Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals. Data Extraction Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors. Data Synthesis From a pool of 10 903 articles, we identified 609 eligible studies enrolling 35 226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality. Conclusion In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
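
The I² statistic used above to flag heterogeneity has a simple closed form (the standard Higgins-Thompson definition; the abstract itself reports only the threshold):

\[ I^{2} = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%, \]

where Q is Cochran's heterogeneity statistic and k the number of studies; values above 50% indicate that a substantial share of the variability in effect sizes reflects between-study differences rather than sampling error.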

1,420 citations


Journal ArticleDOI
06 Apr 2011-JAMA
TL;DR: Among patients receiving opioid prescriptions for pain, higher opioid doses were associated with increased risk of opioid overdose death, and receiving both as-needed and regularly scheduled doses was not associated with overdose risk after adjustment.
Abstract: Context The rate of prescription opioid–related overdose death increased substantially in the United States over the past decade. Patterns of opioid prescribing may be related to risk of overdose mortality. Objective To examine the association of maximum prescribed daily opioid dose and dosing schedule (“as needed,” regularly scheduled, or both) with risk of opioid overdose death among patients with cancer, chronic pain, acute pain, and substance use disorders. Design Case-cohort study. Setting Veterans Health Administration (VHA), 2004 through 2008. Participants All unintentional prescription opioid overdose decedents (n = 750) and a random sample of patients (n = 154 684) among those individuals who used medical services in 2004 or 2005 and received opioid therapy for pain. Main Outcome Measure Associations of opioid regimens (dose and schedule) with death by unintentional prescription opioid overdose in subgroups defined by clinical diagnoses, adjusting for age group, sex, race, ethnicity, and comorbid conditions. Results The frequency of fatal overdose over the study period among individuals treated with opioids was estimated to be 0.04%. The risk of overdose death was directly related to the maximum prescribed daily dose of opioid medication. The adjusted hazard ratios (HRs) associated with a maximum prescribed dose of 100 mg/d or more, compared with the dose category 1 mg/d to less than 20 mg/d, were as follows: among those with substance use disorders, adjusted HR = 4.54 (95% confidence interval [CI], 2.46-8.37; absolute risk difference approximation [ARDA] = 0.14%); among those with chronic pain, adjusted HR = 7.18 (95% CI, 4.85-10.65; ARDA = 0.25%); among those with acute pain, adjusted HR = 6.64 (95% CI, 3.31-13.31; ARDA = 0.23%); and among those with cancer, adjusted HR = 11.99 (95% CI, 4.42-32.56; ARDA = 0.45%). Receiving both as-needed and regularly scheduled doses was not associated with overdose risk after adjustment. Conclusion Among patients receiving opioid prescriptions for pain, higher opioid doses were associated with increased risk of opioid overdose death.

1,253 citations


Journal ArticleDOI
22 Jun 2011-JAMA
TL;DR: In this paper, the authors investigated whether intensive-dose statin therapy is associated with increased risk of new-onset diabetes compared with moderate-dose statin therapy.
Abstract: Context A recent meta-analysis demonstrated that statin therapy is associated with excess risk of developing diabetes mellitus. Objective To investigate whether intensive-dose statin therapy is associated with increased risk of new-onset diabetes compared with moderate-dose statin therapy. Data Sources We identified relevant trials in a literature search of MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials (January 1, 1996, through March 31, 2011). Unpublished data were obtained from investigators. Study Selection We included randomized controlled end-point trials that compared intensive-dose statin therapy with moderate-dose statin therapy and included more than 1000 participants who were followed up for more than 1 year. Data Extraction Tabular data provided for each trial described baseline characteristics and numbers of participants developing diabetes and experiencing major cardiovascular events (cardiovascular death, nonfatal myocardial infarction or stroke, coronary revascularization). We calculated trial-specific odds ratios (ORs) for new-onset diabetes and major cardiovascular events and combined these using random-effects model meta-analysis. Between-study heterogeneity was assessed using the I² statistic. Results In 5 statin trials with 32 752 participants without diabetes at baseline, 2749 developed diabetes (1449 assigned intensive-dose therapy, 1300 assigned moderate-dose therapy, representing 2.0 additional cases in the intensive-dose group per 1000 patient-years) and 6684 experienced cardiovascular events (3134 and 3550, respectively, representing 6.5 fewer cases in the intensive-dose group per 1000 patient-years) over a weighted mean (SD) follow-up of 4.9 (1.9) years. Odds ratios were 1.12 (95% confidence interval [CI], 1.04-1.22; I² = 0%) for new-onset diabetes and 0.84 (95% CI, 0.75-0.94; I² = 74%) for cardiovascular events for participants receiving intensive therapy compared with moderate-dose therapy. As compared with moderate-dose statin therapy, the number needed to harm per year for intensive-dose statin therapy was 498 for new-onset diabetes while the number needed to treat per year for intensive-dose statin therapy was 155 for cardiovascular events. Conclusion In a pooled analysis of data from 5 statin trials, intensive-dose statin therapy was associated with an increased risk of new-onset diabetes compared with moderate-dose statin therapy.
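
The reported number needed to harm and number needed to treat follow directly from the absolute rate differences (a consistency check on the numbers above, not new data):

\[ \mathrm{NNH} = \frac{1}{2.0/1000} = 500 \ \text{per patient-year}, \qquad \mathrm{NNT} = \frac{1}{6.5/1000} \approx 154 \ \text{per patient-year}, \]

matching the published 498 and 155 up to rounding of the rate differences.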

1,199 citations


Journal ArticleDOI
21 Dec 2011-JAMA
TL;DR: Patients who die in the ICU following sepsis compared with patients who die of nonsepsis etiologies have biochemical, flow cytometric, and immunohistochemical findings consistent with immunosuppression, and targeted immune-enhancing therapy may be a valid approach in selected patients with sepsis.
Abstract: Context Severe sepsis is typically characterized by initial cytokine-mediated hyperinflammation. Whether this hyperinflammatory phase is followed by immunosuppression is controversial. Animal studies suggest that multiple immune defects occur in sepsis, but data from humans remain conflicting. Objectives To determine the association of sepsis with changes in host innate and adaptive immunity and to examine potential mechanisms for putative immunosuppression. Design, Setting, and Participants Rapid postmortem spleen and lung tissue harvest was performed at the bedsides of 40 patients who died in intensive care units (ICUs) of academic medical centers with active severe sepsis to characterize their immune status at the time of death (2009-2011). Control spleens (n = 29) were obtained from patients who were declared brain-dead or had emergent splenectomy due to trauma; control lungs (n = 20) were obtained from transplant donors or from lung cancer resections. Main Outcome Measures Cytokine secretion assays and immunophenotyping of cell surface receptor-ligand expression profiles were performed to identify potential mechanisms of immune dysfunction. Immunohistochemical staining was performed to evaluate the loss of immune effector cells. Results The mean ages of patients with sepsis and controls were 71.7 (SD, 15.9) and 52.7 (SD, 15.0) years, respectively. The median number of ICU days for patients with sepsis was 8 (range, 1-195 days), while control patients were in ICUs for 4 or fewer days. The median duration of sepsis was 4 days (range, 1-40 days). Compared with controls, anti-CD3/anti-CD28–stimulated splenocytes from sepsis patients had significant reductions in cytokine secretion at 5 hours: tumor necrosis factor, 5361 (95% CI, 3327-7485) pg/mL vs 418 (95% CI, 98-738) pg/mL; interferon γ, 1374 (95% CI, 550-2197) pg/mL vs 37.5 (95% CI, −5 to 80) pg/mL; interleukin 6, 3691 (95% CI, 2313-5070) vs 365 (95% CI, 87-642) pg/mL; and interleukin 10, 633 (95% CI, −269 to 1534) vs 58 (95% CI, −39 to 156) pg/mL (P < .001 for all). Conclusions Patients who die in the ICU following sepsis compared with patients who die of nonsepsis etiologies have biochemical, flow cytometric, and immunohistochemical findings consistent with immunosuppression. Targeted immune-enhancing therapy may be a valid approach in selected patients with sepsis.

1,192 citations


Journal ArticleDOI
16 Mar 2011-JAMA
TL;DR: In this paper, the authors evaluated the effect of high-dose compared with standard-dose clopidogrel in patients with high on-treatment platelet reactivity after percutaneous coronary intervention (PCI), a problem for which a treatment strategy had not been well defined.
Abstract: Context High platelet reactivity while receiving clopidogrel has been linked to cardiovascular events after percutaneous coronary intervention (PCI), but a treatment strategy for this issue is not well defined. Objective To evaluate the effect of high-dose compared with standard-dose clopidogrel in patients with high on-treatment platelet reactivity after PCI. Design, Setting, and Patients Randomized, double-blind, active-control trial (Gauging Responsiveness with A VerifyNow assay—Impact on Thrombosis And Safety [GRAVITAS]) of 2214 patients with high on-treatment reactivity 12 to 24 hours after PCI with drug-eluting stents at 83 centers in North America between July 2008 and April 2010. Interventions High-dose clopidogrel (600-mg initial dose, 150 mg daily thereafter) or standard-dose clopidogrel (no additional loading dose, 75 mg daily) for 6 months. Main Outcome Measures The primary end point was the 6-month incidence of death from cardiovascular causes, nonfatal myocardial infarction, or stent thrombosis. The key safety end point was severe or moderate bleeding according to the Global Utilization of Streptokinase and t-PA for Occluded Coronary Arteries (GUSTO) definition. A key pharmacodynamic end point was the rate of persistently high on-treatment reactivity at 30 days. Results At 6 months, the primary end point had occurred in 25 of 1109 patients (2.3%) receiving high-dose clopidogrel compared with 25 of 1105 patients (2.3%) receiving standard-dose clopidogrel (hazard ratio [HR], 1.01; 95% confidence interval [CI], 0.58-1.76; P = .97). Severe or moderate bleeding was not increased with the high-dose regimen (15 [1.4%] vs 25 [2.3%], HR, 0.59; 95% CI, 0.31-1.11; P = .10). Compared with standard-dose clopidogrel, high-dose clopidogrel provided a 22% (95% CI, 18%-26%) absolute reduction in the rate of high on-treatment reactivity at 30 days (62%; 95% CI, 59%-65%; vs 40%; 95% CI, 37%-43%; P < .001). Conclusions Among patients with high on-treatment reactivity after PCI with drug-eluting stents, the use of high-dose clopidogrel compared with standard-dose clopidogrel did not reduce the incidence of death from cardiovascular causes, nonfatal myocardial infarction, or stent thrombosis. Trial Registration clinicaltrials.gov Identifier: NCT00645918

1,174 citations


Journal ArticleDOI
02 Nov 2011-JAMA
TL;DR: Standardized incidence ratios and excess absolute risks were used to assess relative and absolute cancer risk in transplant recipients compared with the general population and to describe the overall pattern of cancer following solid organ transplantation.
Abstract: Context Solid organ transplant recipients have elevated cancer risk due to immunosuppression and oncogenic viral infections. Because most prior research has concerned kidney recipients, large studies that include recipients of differing organs can inform cancer etiology. Objective To describe the overall pattern of cancer following solid organ transplantation. Design, Setting, and Participants Cohort study using linked data on solid organ transplant recipients from the US Scientific Registry of Transplant Recipients (1987-2008) and 13 state and regional cancer registries. Main Outcome Measures Standardized incidence ratios (SIRs) and excess absolute risks (EARs) assessing relative and absolute cancer risk in transplant recipients compared with the general population. Results The registry linkages yielded data on 175 732 solid organ transplants (58.4% for kidney, 21.6% for liver, 10.0% for heart, and 4.0% for lung). The overall cancer risk was elevated with 10 656 cases and an incidence of 1375 per 100 000 person-years (SIR, 2.10 [95% CI, 2.06-2.14]; EAR, 719.3 [95% CI, 693.3-745.6] per 100 000 person-years). Risk was increased for 32 different malignancies, some related to known infections (eg, anal cancer, Kaposi sarcoma) and others unrelated (eg, melanoma, thyroid and lip cancers). The most common malignancies with elevated risk were non-Hodgkin lymphoma (n = 1504; incidence: 194.0 per 100 000 person-years; SIR, 7.54 [95% CI, 7.17-7.93]; EAR, 168.3 [95% CI, 158.6-178.4] per 100 000 person-years) and cancers of the lung (n = 1344; incidence: 173.4 per 100 000 person-years; SIR, 1.97 [95% CI, 1.86-2.08]; EAR, 85.3 [95% CI, 76.2-94.8] per 100 000 person-years), liver (n = 930; incidence: 120.0 per 100 000 person-years; SIR, 11.56 [95% CI, 10.83-12.33]; EAR, 109.6 [95% CI, 102.0-117.6] per 100 000 person-years), and kidney (n = 752; incidence: 97.0 per 100 000 person-years; SIR, 4.65 [95% CI, 4.32-4.99]; EAR, 76.1 [95% CI, 69.3-83.3] per 100 000 person-years). Lung cancer risk was most elevated in lung recipients (SIR, 6.13 [95% CI, 5.18-7.21]) but also increased among other recipients (kidney: SIR, 1.46 [95% CI, 1.34-1.59]; liver: SIR, 1.95 [95% CI, 1.74-2.19]; and heart: SIR, 2.67 [95% CI, 2.40-2.95]). Liver cancer risk was elevated only among liver recipients (SIR, 43.83 [95% CI, 40.90-46.91]), who manifested exceptional risk in the first 6 months (SIR, 508.97 [95% CI, 474.16-545.66]) and a 2-fold excess risk for 10 to 15 years thereafter (SIR, 2.22 [95% CI, 1.57-3.04]). Among kidney recipients, kidney cancer risk was elevated (SIR, 6.66 [95% CI, 6.12-7.23]) and bimodal in onset time. Kidney cancer risk also was increased in liver recipients (SIR, 1.80 [95% CI, 1.40-2.29]) and heart recipients (SIR, 2.90 [95% CI, 2.32-3.59]). Conclusion Compared with the general population, recipients of a kidney, liver, heart, or lung transplant have an increased risk for diverse infection-related and unrelated cancers.
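
The two risk measures used above are related by the standard definitions (consistent with, but not restated in, the abstract):

\[ \mathrm{SIR} = \frac{O}{E}, \qquad \mathrm{EAR} = \frac{O - E}{\text{person-years}} \times 100\,000, \]

where O and E are observed and expected cancer counts. As a check, an overall incidence of 1375 per 100 000 person-years with SIR 2.10 implies an expected incidence of about 1375/2.10 ≈ 655, giving EAR ≈ 1375 − 655 = 720, in line with the reported 719.3.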

1,147 citations


Journal ArticleDOI
08 Jun 2011-JAMA
TL;DR: Among women in the general US population, simultaneous screening with CA-125 and transvaginal ultrasound compared with usual care did not reduce ovarian cancer mortality.
Abstract: Context Screening for ovarian cancer with cancer antigen 125 (CA-125) and transvaginal ultrasound has an unknown effect on mortality. Objective To evaluate the effect of screening for ovarian cancer on mortality in the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial. Design, Setting, and Participants Randomized controlled trial of 78 216 women aged 55 to 74 years assigned to undergo either annual screening (n = 39 105) or usual care (n = 39 111) at 10 screening centers across the United States between November 1993 and July 2001. Intervention The intervention group was offered annual screening with CA-125 for 6 years and transvaginal ultrasound for 4 years. Participants and their health care practitioners received the screening test results and managed evaluation of abnormal results. The usual care group was not offered annual screening with CA-125 for 6 years or transvaginal ultrasound but received their usual medical care. Participants were followed up for a maximum of 13 years (median [range], 12.4 years [10.9-13.0 years]) for cancer diagnoses and death until February 28, 2010. Main Outcome Measures Mortality from ovarian cancer, including primary peritoneal and fallopian tube cancers. Secondary outcomes included ovarian cancer incidence and complications associated with screening examinations and diagnostic procedures. Results Ovarian cancer was diagnosed in 212 women (5.7 per 10 000 person-years) in the intervention group and 176 (4.7 per 10 000 person-years) in the usual care group (rate ratio [RR], 1.21; 95% confidence interval [CI], 0.99-1.48). There were 118 deaths caused by ovarian cancer (3.1 per 10 000 person-years) in the intervention group and 100 deaths (2.6 per 10 000 person-years) in the usual care group (mortality RR, 1.18; 95% CI, 0.82-1.71). Of 3285 women with false-positive results, 1080 underwent surgical follow-up, of whom 163 women (15%) experienced at least 1 serious complication. There were 2924 deaths due to other causes (excluding ovarian, colorectal, and lung cancer) (76.6 per 10 000 person-years) in the intervention group and 2914 deaths (76.2 per 10 000 person-years) in the usual care group (RR, 1.01; 95% CI, 0.96-1.06). Conclusions Among women in the general US population, simultaneous screening with CA-125 and transvaginal ultrasound compared with usual care did not reduce ovarian cancer mortality. Diagnostic evaluation following a false-positive screening test result was associated with complications. Trial Registration clinicaltrials.gov Identifier: NCT00002540

Journal ArticleDOI
04 May 2011-JAMA
TL;DR: Structured exercise training that consists of aerobic exercise, resistance training, or both combined is associated with HbA1c reduction in patients with type 2 diabetes.
Abstract: Context Regular exercise improves glucose control in diabetes, but the association of different exercise training interventions on glucose control is unclear. Objective To conduct a systematic review and meta-analysis of randomized controlled clinical trials (RCTs) assessing associations of structured exercise training regimens (aerobic, resistance, or both) and physical activity advice with or without dietary cointervention on change in hemoglobin A1c (HbA1c) in type 2 diabetes patients. Data Sources MEDLINE, Cochrane-CENTRAL, EMBASE, ClinicalTrials.gov, LILACS, and SPORTDiscus databases were searched from January 1980 through February 2011. Study Selection RCTs of at least 12 weeks' duration that evaluated the ability of structured exercise training or physical activity advice to lower HbA1c levels as compared with a control group in patients with type 2 diabetes. Data Extraction Two independent reviewers extracted data and assessed quality of the included studies. Data Synthesis Of 4191 articles retrieved, 47 RCTs (8538 patients) were included. Pooled mean differences in HbA1c levels between intervention and control groups were calculated using a random-effects model. Overall, structured exercise training (23 studies) was associated with a decline in HbA1c level (−0.67%; 95% confidence interval [CI], −0.84% to −0.49%; I², 91.3%) compared with control participants. In addition, structured aerobic exercise (−0.73%; 95% CI, −1.06% to −0.40%; I², 92.8%), structured resistance training (−0.57%; 95% CI, −1.14% to −0.01%; I², 92.5%), and both combined (−0.51%; 95% CI, −0.79% to −0.23%; I², 67.5%) were each associated with declines in HbA1c levels compared with control participants. Structured exercise durations of more than 150 minutes per week were associated with HbA1c reductions of 0.89%, while structured exercise durations of 150 minutes or less per week were associated with HbA1c reductions of 0.36%. Overall, interventions of physical activity advice (24 studies) were associated with lower HbA1c levels (−0.43%; 95% CI, −0.59% to −0.28%; I², 62.9%) compared with control participants. Combined physical activity advice and dietary advice was associated with decreased HbA1c (−0.58%; 95% CI, −0.74% to −0.43%; I², 57.5%) as compared with control participants. Physical activity advice alone was not associated with HbA1c changes. Conclusions Structured exercise training that consists of aerobic exercise, resistance training, or both combined is associated with HbA1c reduction in patients with type 2 diabetes. Structured exercise training of more than 150 minutes per week is associated with greater HbA1c declines than training of 150 minutes or less per week. Physical activity advice is associated with lower HbA1c, but only when combined with dietary advice.
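
A sketch of the random-effects pooling referred to above (the DerSimonian-Laird form is the usual default for such analyses, though the abstract does not name the estimator): each trial's mean HbA1c difference θᵢ with within-trial variance vᵢ receives weight

\[ w_i = \frac{1}{v_i + \hat{\tau}^{2}}, \qquad \hat{\theta} = \frac{\sum_i w_i \theta_i}{\sum_i w_i}, \]

where τ̂² is the estimated between-trial variance; the large I² values above mean τ̂² dominates, widening the pooled confidence intervals.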

Journal ArticleDOI
19 Jan 2011-JAMA
TL;DR: Evidence is provided that a molecular imaging procedure can identify β-amyloid pathology in the brains of individuals during life; its value for predicting progression to dementia requires further study.
Abstract: Context The ability to identify and quantify brain β-amyloid could increase the accuracy of a clinical diagnosis of Alzheimer disease. Objective To determine if florbetapir F 18 positron emission tomographic (PET) imaging performed during life accurately predicts the presence of β-amyloid in the brain at autopsy. Design, Setting, and Participants Prospective clinical evaluation conducted February 2009 through March 2010 of florbetapir-PET imaging performed on 35 patients from hospice, long-term care, and community health care facilities near the end of their lives (6 patients to establish the protocol and 29 to validate) compared with immunohistochemistry and silver stain measures of brain β-amyloid after their death used as the reference standard. PET images were also obtained in 74 young individuals (18-50 years) presumed free of brain amyloid to better understand the frequency of a false-positive interpretation of a florbetapir-PET image. Main Outcome Measures Correlation of florbetapir-PET image interpretation (based on the median of 3 nuclear medicine physicians' ratings) and semiautomated quantification of cortical retention with postmortem β-amyloid burden, neuritic amyloid plaque density, and neuropathological diagnosis of Alzheimer disease in the first 35 participants autopsied (out of 152 individuals enrolled in the PET pathological correlation study). Results Florbetapir-PET imaging was performed a mean of 99 days (range, 1-377 days) before death for the 29 individuals in the primary analysis cohort. Fifteen of the 29 individuals (51.7%) met pathological criteria for Alzheimer disease. Both visual interpretation of the florbetapir-PET images and mean quantitative estimates of cortical uptake were correlated with presence and quantity of β-amyloid pathology at autopsy as measured by immunohistochemistry (Bonferroni ρ, 0.78 [95% confidence interval, 0.58-0.89]; P < .001). Conclusions Florbetapir-PET imaging was correlated with the presence and density of β-amyloid. These data provide evidence that a molecular imaging procedure can identify β-amyloid pathology in the brains of individuals during life. Additional studies are required to understand the appropriate use of florbetapir-PET imaging in the clinical diagnosis of Alzheimer disease and for the prediction of progression to dementia.

Journal ArticleDOI
20 Apr 2011-JAMA
TL;DR: A model using routinely obtained laboratory tests can accurately predict progression to kidney failure in patients with CKD stages 3 to 5, and was more accurate than a simpler model that included age, sex, estimated GFR, and albuminuria.
Abstract: Context Chronic kidney disease (CKD) is common. Kidney disease severity can be classified by estimated glomerular filtration rate (GFR) and albuminuria, but more accurate information regarding risk for progression to kidney failure is required for clinical decisions about testing, treatment, and referral. Objective To develop and validate predictive models for progression of CKD. Design, Setting, and Participants Development and validation of prediction models using demographic, clinical, and laboratory data from 2 independent Canadian cohorts of patients with CKD stages 3 to 5 (estimated GFR, 10-59 mL/min/1.73 m²) who were referred to nephrologists between April 1, 2001, and December 31, 2008. Models were developed using Cox proportional hazards regression methods and evaluated using C statistics and integrated discrimination improvement for discrimination, calibration plots and Akaike Information Criterion for goodness of fit, and net reclassification improvement (NRI) at 1, 3, and 5 years. Main Outcome Measure Kidney failure, defined as need for dialysis or preemptive kidney transplantation. Results The development and validation cohorts included 3449 patients (386 with kidney failure [11%]) and 4942 patients (1177 with kidney failure [24%]), respectively. The most accurate model included age, sex, estimated GFR, albuminuria, serum calcium, serum phosphate, serum bicarbonate, and serum albumin (C statistic, 0.917; 95% confidence interval [CI], 0.901-0.933 in the development cohort and 0.841; 95% CI, 0.825-0.857 in the validation cohort). In the validation cohort, this model was more accurate than a simpler model that included age, sex, estimated GFR, and albuminuria (integrated discrimination improvement, 3.2%; 95% CI, 2.4%-4.2%; calibration [Nam and D’Agostino χ² statistic, 19 vs 32]; and reclassification for CKD stage 3 [NRI, 8.0%; 95% CI, 2.1%-13.9%] and for CKD stage 4 [NRI, 4.1%; 95% CI, −0.5% to 8.8%]). Conclusion A model using routinely obtained laboratory tests can accurately predict progression to kidney failure in patients with CKD stages 3 to 5.
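
For readers unfamiliar with how a Cox model yields an individual predicted risk (the generic Cox prediction form; the published coefficients and baseline survival are not reproduced here): the probability of kidney failure by time t for covariate vector x is

\[ \hat{F}(t \mid x) = 1 - S_0(t)^{\exp\left(\beta^{\top}(x - \bar{x})\right)}, \]

where S₀(t) is the baseline survival at t years, β the fitted log hazard ratios for age, sex, estimated GFR, albuminuria, and the four serum markers, and x̄ the cohort means.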

Journal ArticleDOI
27 Apr 2011-JAMA
TL;DR: Neither vitamin E nor metformin was superior to placebo in attaining the primary outcome of sustained reduction in ALT level in patients with pediatric NAFLD.
Abstract: Context Nonalcoholic fatty liver disease (NAFLD) is the most common chronic liver disease in US children and adolescents and can present with advanced fibrosis or nonalcoholic steatohepatitis (NASH). No treatment has been established. Objective To determine whether children with NAFLD would improve from therapeutic intervention with vitamin E or metformin. Design, Setting, and Patients Randomized, double-blind, double-dummy, placebo-controlled clinical trial conducted at 10 university clinical research centers in 173 patients (aged 8-17 years) with biopsy-confirmed NAFLD between September 2005 and March 2010. Interventions Daily dosing of 800 IU of vitamin E (58 patients), 1000 mg of metformin (57 patients), or placebo (58 patients) for 96 weeks. Main Outcome Measures The primary outcome was sustained reduction in alanine aminotransferase (ALT) defined as 50% or less of the baseline level or 40 U/L or less at visits every 12 weeks from 48 to 96 weeks of treatment. Improvements in histological features of NAFLD and resolution of NASH were secondary outcome measures. Results Sustained reduction in ALT level was similar to placebo (10/58; 17%; 95% CI, 9% to 29%) in both the vitamin E (15/58; 26%; 95% CI, 15% to 39%; P = .26) and metformin treatment groups (9/57; 16%; 95% CI, 7% to 28%; P = .83). The mean change in ALT level from baseline to 96 weeks was −35.2 U/L (95% CI, −56.9 to −13.5) with placebo vs −48.3 U/L (95% CI, −66.8 to −29.8) with vitamin E (P = .07) and −41.7 U/L (95% CI, −62.9 to −20.5) with metformin (P = .40). The mean change at 96 weeks in hepatocellular ballooning scores was 0.1 with placebo (95% CI, −0.2 to 0.3) vs −0.5 with vitamin E (95% CI, −0.8 to −0.3; P = .006) and −0.3 with metformin (95% CI, −0.6 to −0.0; P = .04); and in NAFLD activity score, −0.7 with placebo (95% CI, −1.3 to −0.2) vs −1.8 with vitamin E (95% CI, −2.4 to −1.2; P = .02) and −1.1 with metformin (95% CI, −1.7 to −0.5; P = .25). Among children with NASH, the proportion who resolved at 96 weeks was 28% with placebo (95% CI, 15% to 45%; 11/39) vs 58% with vitamin E (95% CI, 42% to 73%; 25/43; P = .006) and 41% with metformin (95% CI, 26% to 58%; 16/39; P = .23). Compared with placebo, neither therapy demonstrated significant improvements in other histological features. Conclusion Neither vitamin E nor metformin was superior to placebo in attaining the primary outcome of sustained reduction in ALT level in patients with pediatric NAFLD. Trial Registration clinicaltrials.gov Identifier: NCT00063635

Journal ArticleDOI
07 Sep 2011-JAMA
TL;DR: The median reported time dedicated to LGBT-related topics in 2009-2010 was small across US and Canadian medical schools, but the quantity, content covered, and perceived quality of instruction varied substantially.
Abstract: Context Lesbian, gay, bisexual, and transgender (LGBT) individuals experience health and health care disparities and have specific health care needs. Medical education organizations have called for LGBT-sensitive training, but how and to what extent schools educate students to deliver comprehensive LGBT patient care is unknown. Objectives To characterize LGBT-related medical curricula and associated curricular development practices and to determine deans' assessments of their institutions' LGBT-related curricular content. Design, Setting, and Participants Deans of medical education (or equivalent) at 176 allopathic or osteopathic medical schools in Canada and the United States were surveyed to complete a 13-question, Web-based questionnaire between May 2009 and March 2010. Main Outcome Measure Reported hours of LGBT-related curricular content. Results Of 176 schools, 150 (85.2%) responded, and 132 (75.0%) fully completed the questionnaire. The median reported time dedicated to teaching LGBT-related content in the entire curriculum was 5 hours (interquartile range [IQR], 3-8 hours). Of the 132 respondents, 9 (6.8%; 95% CI, 2.5%-11.1%) reported 0 hours taught during preclinical years and 44 (33.3%; 95% CI, 25.3%-41.4%) reported 0 hours during clinical years. Median US allopathic clinical hours were significantly different from US osteopathic clinical hours (2 hours [IQR, 0-4 hours] vs 0 hours [IQR, 0-2 hours]; P = .008). Although 128 of the schools (97.0%; 95% CI, 94.0%-99.9%) taught students to ask patients if they “have sex with men, women, or both” when obtaining a sexual history, the reported teaching frequency of 16 LGBT-specific topic areas in the required curriculum was lower: at least 8 topics at 83 schools (62.9%; 95% CI, 54.6%-71.1%) and all topics at 11 schools (8.3%; 95% CI, 3.6%-13.0%). The institutions' LGBT content was rated as “fair” at 58 schools (43.9%; 95% CI, 35.5%-52.4%). Suggested successful strategies to increase content included curricular material focusing on LGBT-related health and health disparities at 77 schools (58.3%, 95% CI, 49.9%-66.7%) and faculty willing and able to teach LGBT-related curricular content at 67 schools (50.8%, 95% CI, 42.2%-59.3%). Conclusion The median reported time dedicated to LGBT-related topics in 2009-2010 was small across US and Canadian medical schools, but the quantity, content covered, and perceived quality of instruction varied substantially.

Journal ArticleDOI
15 Jun 2011-JAMA
TL;DR: Elevated FGF-23 is an independent risk factor for end-stage renal disease in patients with relatively preserved kidney function and for mortality across the spectrum of chronic kidney disease.
Abstract: Context A high level of the phosphate-regulating hormone fibroblast growth factor 23 (FGF-23) is associated with mortality in patients with end-stage renal disease, but little is known about its relationship with adverse outcomes in the much larger population of patients with earlier stages of chronic kidney disease. Objective To evaluate FGF-23 as a risk factor for adverse outcomes in patients with chronic kidney disease. Design, Setting, and Participants A prospective study of 3879 participants with chronic kidney disease stages 2 through 4 who enrolled in the Chronic Renal Insufficiency Cohort between June 2003 and September 2008. Main Outcome Measures All-cause mortality and end-stage renal disease. Results At study enrollment, the mean (SD) estimated glomerular filtration rate (GFR) was 42.8 (13.5) mL/min/1.73 m², and the median FGF-23 level was 145.5 reference units (RU)/mL (interquartile range [IQR], 96-239 RU/mL). During a median follow-up of 3.5 years (IQR, 2.5-4.4 years), 266 participants died (20.3/1000 person-years) and 410 reached end-stage renal disease (33.0/1000 person-years). In adjusted analyses, higher levels of FGF-23 were independently associated with a greater risk of death (hazard ratio [HR] per SD of natural log-transformed FGF-23, 1.5; 95% confidence interval [CI], 1.3-1.7). Mortality risk increased by quartile of FGF-23: the HR was 1.3 (95% CI, 0.8-2.2) for the second quartile, 2.0 (95% CI, 1.2-3.3) for the third quartile, and 3.0 (95% CI, 1.8-5.1) for the fourth quartile. Elevated fibroblast growth factor 23 was independently associated with significantly higher risk of end-stage renal disease among participants with an estimated GFR between 30 and 44 mL/min/1.73 m² (HR, 1.3 per SD of natural log-transformed FGF-23; 95% CI, 1.04-1.6) and 45 mL/min/1.73 m² or higher (HR, 1.7; 95% CI, 1.1-2.4), but not less than 30 mL/min/1.73 m². Conclusion Elevated FGF-23 is an independent risk factor for end-stage renal disease in patients with relatively preserved kidney function and for mortality across the spectrum of chronic kidney disease.

Journal ArticleDOI
10 Aug 2011-JAMA
TL;DR: Among older women, those with sleep-disordered breathing compared with those without sleep-disordered breathing had an increased risk of developing cognitive impairment; measures of hypoxia, sleep fragmentation, and sleep duration were investigated as underlying mechanisms for this relationship.
Abstract: Context Sleep-disordered breathing (characterized by recurrent arousals from sleep and intermittent hypoxemia) is common among older adults. Cross-sectional studies have linked sleep-disordered breathing to poor cognition; however, it remains unclear whether sleep-disordered breathing precedes cognitive impairment in older adults. Objectives To determine the prospective relationship between sleep-disordered breathing and cognitive impairment and to investigate potential mechanisms of this association. Design, Setting, and Participants Prospective sleep and cognition study of 298 women without dementia (mean [SD] age: 82.3 [3.2] years) who had overnight polysomnography measured between January 2002 and April 2004 in a substudy of the Study of Osteoporotic Fractures. Sleep-disordered breathing was defined as an apnea-hypopnea index of 15 or more events per hour of sleep. Multivariate logistic regression was used to determine the independent association of sleep-disordered breathing with risk of mild cognitive impairment or dementia, adjusting for age, race, body mass index, education level, smoking status, presence of diabetes, presence of hypertension, medication use (antidepressants, benzodiazepines, or nonbenzodiazepine anxiolytics), and baseline cognitive scores. Measures of hypoxia, sleep fragmentation, and sleep duration were investigated as underlying mechanisms for this relationship. Main Outcome Measures Adjudicated cognitive status (normal, dementia, or mild cognitive impairment) based on data collected between November 2006 and September 2008. Results Compared with the 193 women without sleep-disordered breathing, the 105 women (35.2%) with sleep-disordered breathing were more likely to develop mild cognitive impairment or dementia (31.1% [n = 60] vs 44.8% [n = 47]; adjusted odds ratio [AOR], 1.85; 95% confidence interval [CI], 1.11-3.08). Elevated oxygen desaturation index (≥15 events/hour) and high percentage of sleep time (>7%) in apnea or hypopnea (both measures of disordered breathing) were associated with risk of developing mild cognitive impairment or dementia (AOR, 1.71 [95% CI, 1.04-2.83] and AOR, 2.04 [95% CI, 1.10-3.78], respectively). Measures of sleep fragmentation (arousal index and wake after sleep onset) or sleep duration (total sleep time) were not associated with risk of cognitive impairment. Conclusion Among older women, those with sleep-disordered breathing compared with those without sleep-disordered breathing had an increased risk of developing cognitive impairment.

Journal ArticleDOI
08 Jun 2011-JAMA
TL;DR: MSH6 mutations are associated with markedly lower cancer risks than MLH1 or MSH2 mutations, and these risks do not increase appreciably until after the age of 40 years.
Abstract: For endometrial cancer, the estimated cumulative risks by age 70 years were 54% (95% CI, 20%-80%), 21% (95% CI, 8%-77%), and 16% (95% CI, 8%-32%) for MLH1, MSH2, and MSH6 mutation carriers, respectively. For ovarian cancer, they were 20% (95% CI, 1%-65%), 24% (95% CI, 3%-52%), and 1% (95% CI, 0%-3%). The estimated cumulative risks by age 40 years did not exceed 2% (95% CI, 0%-7%) for endometrial cancer nor 1% (95% CI, 0%-3%) for ovarian cancer, irrespective of the gene. The estimated lifetime risks for other tumor types did not exceed 3% with any of the gene mutations.

Journal ArticleDOI
22 Jun 2011-JAMA
TL;DR: Prevalence of DKD in the United States increased from 1988 to 2008 in proportion to the prevalence of diabetes, and was stable despite increased use of glucose-lowering medications and renin-angiotensin-aldosterone system inhibitors among persons with diabetes.
Abstract: Context Diabetes is the leading cause of kidney disease in the developed world. Over time, the prevalence of diabetic kidney disease (DKD) may increase due to the expanding size of the diabetes population or decrease due to the implementation of diabetes therapies. Objective To define temporal changes in DKD prevalence in the United States. Design, Setting, and Participants Cross-sectional analyses of the Third National Health and Nutrition Examination Survey (NHANES III) from 1988-1994 (N = 15 073), NHANES 1999-2004 (N = 13 045), and NHANES 2005-2008 (N = 9588). Participants with diabetes were defined by levels of hemoglobin A1c of 6.5% or greater, use of glucose-lowering medications, or both (n = 1431 in NHANES III; n = 1443 in NHANES 1999-2004; n = 1280 in NHANES 2005-2008). Main Outcome Measures Diabetic kidney disease was defined as diabetes with albuminuria (ratio of urine albumin to creatinine ≥30 mg/g), impaired glomerular filtration rate (<60 mL/min/1.73 m², estimated using the Chronic Kidney Disease Epidemiology Collaboration formula), or both. Prevalence of albuminuria was adjusted to estimate persistent albuminuria. Results The prevalence of DKD in the US population was 2.2% (95% confidence interval [CI], 1.8%-2.6%) in NHANES III, 2.8% (95% CI, 2.4%-3.1%) in NHANES 1999-2004, and 3.3% (95% CI, 2.8%-3.7%) in NHANES 2005-2008 (P <.001 for trend). The prevalence of DKD increased in direct proportion to the prevalence of diabetes, without a change in the prevalence of DKD among those with diabetes. Among persons with diabetes, use of glucose-lowering medications increased from 56.2% (95% CI, 52.1%-60.4%) in NHANES III to 74.2% (95% CI, 70.4%-78.0%) in NHANES 2005-2008 (P <.001); use of renin-angiotensin-aldosterone system inhibitors increased from 11.2% (95% CI, 9.0%-13.4%) to 40.6% (95% CI, 37.2%-43.9%), respectively (P <.001); the prevalence of impaired glomerular filtration rate increased from 14.9% (95% CI, 12.1%-17.8%) to 17.7% (95% CI, 15.2%-20.2%), respectively (P = .03); and the prevalence of albuminuria decreased from 27.3% (95% CI, 22.0%-32.7%) to 23.7% (95% CI, 19.3%-28.0%), respectively, but this was not statistically significant (P = .07). Conclusions Prevalence of DKD in the United States increased from 1988 to 2008 in proportion to the prevalence of diabetes. Among persons with diabetes, prevalence of DKD was stable despite increased use of glucose-lowering medications and renin-angiotensin-aldosterone system inhibitors.

Journal ArticleDOI
19 Oct 2011-JAMA
TL;DR: For patients with H1N1-related ARDS, referral and transfer to an ECMO center was associated with lower hospital mortality compared with matched non-ECMO-referred patients, and the results were robust to sensitivity analyses.
Abstract: Design, Setting, and Patients A cohort study in which ECMO-referred patients were defined as all patients with H1N1-related ARDS who were referred, accepted, and transferred to 1 of the 4 adult ECMO centers in the United Kingdom during the H1N1 pandemic in winter 2009-2010. The ECMO-referred patients and the non–ECMO-referred patients were matched using data from a concurrent, longitudinal cohort study (Swine Flu Triage study) of critically ill patients with suspected or confirmed H1N1. Detailed demographic, physiological, and comorbidity data were used in 3 different matching techniques (individual matching, propensity score matching, and GenMatch matching). Main Outcome Measure Survival to hospital discharge analyzed according to the intention-to-treat principle. Results Of 80 ECMO-referred patients, 69 received ECMO (86.3%) and 22 died (27.5%) prior to discharge from the hospital. From a pool of 1756 patients, there were 59 matched pairs of ECMO-referred patients and non–ECMO-referred patients identified using individual matching, 75 matched pairs identified using propensity score matching, and 75 matched pairs identified using GenMatch matching. The hospital mortality rate was 23.7% for ECMO-referred patients vs 52.5% for non–ECMO-referred patients (relative risk [RR], 0.45 [95% CI, 0.26-0.79]; P=.006) when individual matching was used; 24.0% vs 46.7%, respectively (RR, 0.51 [95% CI, 0.31-0.81]; P=.008) when propensity score matching was used; and 24.0% vs 50.7%, respectively (RR, 0.47 [95% CI, 0.31-0.72]; P=.001) when GenMatch matching was used. The results were robust to sensitivity analyses, including amending the inclusion criteria and restricting the location where the non–ECMO-referred patients were treated. Conclusion For patients with H1N1-related ARDS, referral and transfer to an ECMO center was associated with lower hospital mortality compared with matched non–ECMO-referred patients.
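
A sketch of the propensity-score step named above (the generic construction; the study's exact covariate set and matching tolerances are not restated in the abstract): the propensity score is the modeled probability of ECMO referral given baseline covariates X,

\[ e(X) = \Pr(\text{referral} = 1 \mid X), \]

typically fit by logistic regression; each referred patient is then paired with the non-referred patient whose e(X) is closest, and hospital mortality is compared within the matched pairs.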

Journal ArticleDOI
17 Aug 2011-JAMA
TL;DR: Compared with a pooled estimate of US data from cohorts initiated between 1963 and 1987, relative risks for smoking in the more recent NIH-AARP Diet and Health Study cohort were higher, with PARs for women comparable with those for men.
Abstract: (I² = 0.0%). The PAR for ever smoking in our study was 0.50 (95% CI, 0.45-0.54) in men and 0.52 (95% CI, 0.45-0.59) in women. Conclusion Compared with a pooled estimate of US data from cohorts initiated between 1963 and 1987, relative risks for smoking in the more recent NIH-AARP Diet and Health Study cohort were higher, with PARs (population attributable risks) for women comparable with those for men.

Journal ArticleDOI
15 Jun 2011-JAMA
TL;DR: While the associations between time spent viewing TV and risk of type 2 diabetes and cardiovascular disease were linear, the risk of all-cause mortality appeared to increase with TV viewing duration of greater than 3 hours per day.
Abstract: Context Prolonged television (TV) viewing is the most prevalent and pervasive sedentary behavior in industrialized countries and has been associated with morbidity and mortality. However, a systematic and quantitative assessment of published studies is not available. Objective To perform a meta-analysis of all prospective cohort studies to determine the association between TV viewing and risk of type 2 diabetes, fatal or nonfatal cardiovascular disease, and all-cause mortality. Data Sources and Study Selection Relevant studies were identified by searches of the MEDLINE database from 1970 to March 2011 and the EMBASE database from 1974 to March 2011 without restrictions and by reviewing reference lists from retrieved articles. Cohort studies that reported relative risk estimates with 95% confidence intervals (CIs) for the associations of interest were included. Data Extraction Data were extracted independently by each author and summary estimates of association were obtained using a random-effects model. Data Synthesis Of the 8 studies included, 4 reported results on type 2 diabetes (175 938 individuals; 6428 incident cases during 1.1 million person-years of follow-up), 4 reported on fatal or nonfatal cardiovascular disease (34 253 individuals; 1052 incident cases), and 3 reported on all-cause mortality (26 509 individuals; 1879 deaths during 202 353 person-years of follow-up). The pooled relative risks per 2 hours of TV viewing per day were 1.20 (95% CI, 1.14-1.27) for type 2 diabetes, 1.15 (95% CI, 1.06-1.23) for fatal or nonfatal cardiovascular disease, and 1.13 (95% CI, 1.07-1.18) for all-cause mortality. While the associations between time spent viewing TV and risk of type 2 diabetes and cardiovascular disease were linear, the risk of all-cause mortality appeared to increase with TV viewing duration of greater than 3 hours per day. The estimated absolute risk differences per every 2 hours of TV viewing per day were 176 cases of type 2 diabetes per 100 000 individuals per year, 38 cases of fatal cardiovascular disease per 100 000 individuals per year, and 104 deaths for all-cause mortality per 100 000 individuals per year. Conclusion Prolonged TV viewing was associated with increased risk of type 2 diabetes, cardiovascular disease, and all-cause mortality.
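
Reading the dose-response claim above under the stated log-linearity (an extrapolation, not a reported estimate): with an RR of 1.20 per 2 h/day of TV viewing, 4 h/day versus none implies

\[ \mathrm{RR}_{4\,\mathrm{h/day}} \approx 1.20^{2} = 1.44 \]

for type 2 diabetes; the all-cause mortality association, being nonlinear beyond about 3 h/day, should not be compounded this way.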

Journal ArticleDOI
16 Feb 2011-JAMA
TL;DR: Among elderly Medicare recipients, black patients were more likely to be readmitted after hospitalization for 3 common conditions, a gap that was related to both race and to the site where care was received.
Abstract: Context Understanding whether and why there are racial disparities in readmissions has implications for efforts to reduce readmissions. Objective To determine whether black patients have higher odds of readmission than white patients and whether these disparities are related to where black patients receive care. Design Using national Medicare data, we examined 30-day readmissions after hospitalization for acute myocardial infarction (MI), congestive heart failure (CHF), and pneumonia. We categorized hospitals in the top decile of proportion of black patients as minority-serving. We determined the odds of readmission for black patients compared with white patients at minority-serving vs non–minority-serving hospitals. Setting and Participants Medicare Provider Analysis Review files of more than 3.1 million Medicare fee-for-service recipients who were discharged from US hospitals in 2006-2008. Main Outcome Measure Risk-adjusted odds of 30-day readmission. Results Overall, black patients had higher readmission rates than white patients (24.8% vs 22.6%; odds ratio [OR], 1.13; 95% confidence interval [CI], 1.11-1.14; P < .001). Conclusion Among elderly Medicare recipients, black patients were more likely to be readmitted after hospitalization for 3 common conditions, a gap that was related to both race and to the site where care was received.
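
The odds ratio quoted above is consistent with the raw readmission rates (a consistency check using only the abstract's numbers):

\[ \mathrm{OR} = \frac{0.248/0.752}{0.226/0.774} \approx 1.13, \]

slightly larger than the simple risk ratio 0.248/0.226 ≈ 1.10, as expected when the outcome is common.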

Journal ArticleDOI
08 Jun 2011-JAMA
TL;DR: Among patients with advanced melanoma harboring KIT alterations, treatment with imatinib mesylate results in significant clinical responses in a subset of patients; response rates were better in tumors with mutations at recurrent hotspots or a mutant-to-wild-type allelic ratio of more than 1, indicating positive selection for the mutated allele.
Abstract: Context Some melanomas arising from acral, mucosal, and chronically sun-damaged sites harbor activating mutations and amplification of the type III transmembrane receptor tyrosine kinase KIT. We explored the effects of KIT inhibition using imatinib mesylate in this molecular subset of disease. Objective To assess clinical effects of imatinib mesylate in patients with melanoma harboring KIT alterations. Design, Setting, and Patients A single-group, open-label, phase 2 trial at 1 community and 5 academic oncology centers in the United States in which 295 patients with melanoma were screened for the presence of KIT mutations and amplification between April 23, 2007, and April 16, 2010. A total of 51 cases with such alterations were identified, and 28 of these patients, all with advanced unresectable melanoma arising from acral, mucosal, and chronically sun-damaged sites, were treated. Intervention Imatinib mesylate, 400 mg orally twice daily. Main Outcome Measures Radiographic response, with secondary end points including time to progression, overall survival, and correlation of molecular alterations and clinical response. Results Two complete responses lasting 94 (ongoing) and 95 weeks, 2 durable partial responses lasting 53 and 89 (ongoing) weeks, and 2 transient partial responses lasting 12 and 18 weeks among the 25 evaluable patients were observed. The overall durable response rate was 16% (95% confidence interval [CI], 2%-30%), with a median time to progression of 12 weeks (interquartile range [IQR], 6-18 weeks; 95% CI, 11-18 weeks), and a median overall survival of 46.3 weeks (IQR, 28 weeks-not achieved; 95% CI, 28 weeks-not achieved). Response rate was better in cases with mutations affecting recurrent hotspots or with a mutant to wild-type allelic ratio of more than 1 (40% vs 0%, P = .05), indicating positive selection for the mutated allele. Conclusions Among patients with advanced melanoma harboring KIT alterations, treatment with imatinib mesylate results in significant clinical responses in a subset of patients. Responses may be limited to tumors harboring KIT alterations of proven functional relevance. Trial Registration clinicaltrials.gov Identifier: NCT00470470

Journal ArticleDOI
06 Apr 2011-JAMA
TL;DR: In this reply, the authors explain that network meta-analysis was chosen because it can yield more reliable results, that analyses were restricted to CP/CPPS categories IIIA and IIIB and to outcomes measured on National Institutes of Health Chronic Prostatitis Symptom Index scales to reduce heterogeneity, and that discrepancies between direct and network meta-analysis results were explored using the standardized normal (z) method.
Abstract: We chose the latter; indeed, network meta-analysis has been shown to potentially give more reliable results because of the integration of additional information. Although studies included in our review were from different sources, they were all randomized controlled trials, and hence contrasts between treatment groups within each study should be comparable. In addition, we focused only on CP/CPPS categories IIIA and IIIB to reduce heterogeneity due to disease severity and focused on outcomes measured using National Institutes of Health Chronic Prostatitis Symptom Index scales to reduce heterogeneity due to measurement error. Nevertheless, we explored potential discrepancies in treatment effects between direct and network meta-analysis results using the standardized normal method (z). Directions of treatment effect for the 2 methods were identical for all 12 comparisons; moreover, the magnitude of the effects was similar between the 2 methods except for α-blocker vs placebo, where z was large and reached statistical significance (z = 2.9380, P = .003). We believe that this is an example of increased precision of treatment effects due to the network method “borrowing” information from indirect comparisons. Third, Jackson et al disagree that study data should be expanded using a Stata command so that they could be included in the meta-analysis, questioning how we could know the distribution of the data. We only used this command for the treatment responsiveness outcome, which is a dichotomous outcome and does not need any assumption about distribution, normal or otherwise. We believe that using all available data, rather than omitting studies, is an advantage and will lead to more valid estimates.
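The standardized normal method referenced here compares the direct and indirect (network) estimates of the same contrast by dividing their difference by the standard error of that difference. A minimal sketch follows; the input estimates and standard errors are hypothetical, not the review's data:

    from statistics import NormalDist

    def inconsistency_z(d_direct, se_direct, d_indirect, se_indirect):
        # z statistic for the difference between direct and indirect (network)
        # estimates of the same treatment contrast, e.g., on the log-OR scale.
        z = (d_direct - d_indirect) / (se_direct**2 + se_indirect**2) ** 0.5
        p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided P value
        return z, p

    # Hypothetical inputs, for illustration only.
    z, p = inconsistency_z(d_direct=-0.80, se_direct=0.20,
                           d_indirect=-0.10, se_indirect=0.12)
    print(f"z = {z:.4f}, P = {p:.3f}")   # z close to -3.0, P close to .003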

Journal ArticleDOI
13 Apr 2011-JAMA
TL;DR: Among outpatients with RA followed up for 3 years after starting adalimumab, the development of antidrug antibodies was associated with lower adalimumab concentrations and a lower likelihood of minimal disease activity or clinical remission.
Abstract: Context Short-term data on the immunogenicity of monoclonal antibodies showed associations between the development of antidrug antibodies and diminished serum drug levels, and a diminished treatment response. Little is known about the clinical relevance of antidrug antibodies against these drugs during long-term follow-up. Objective To examine the course of antidrug antibody formation against the fully human monoclonal antibody adalimumab and its clinical relevance during long-term (3-year) follow-up of patients with rheumatoid arthritis (RA). Design, Setting, and Patients Prospective cohort study conducted from February 2004 to September 2008; end of follow-up was September 2010. All 272 patients were diagnosed with RA and started treatment with adalimumab in an outpatient clinic. Main Outcome Measures Disease activity was monitored and trough serum samples were obtained at baseline and 8 time points to 156 weeks. Serum adalimumab concentrations and antiadalimumab antibody titers were determined after follow-up. Treatment discontinuation, minimal disease activity, and clinical remission were compared for patients with and without antiadalimumab antibodies. Results After 3 years, 76 of 272 patients (28%) developed antiadalimumab antibodies—51 of these (67%) during the first 28 weeks of treatment. Patients without antiadalimumab antibodies had higher adalimumab concentrations (median, 12 mg/L; IQR, 9-16 mg/L) than patients with antibody titers from 13 to 100 AU/mL (median, 5 mg/L; IQR, 3-9 mg/L; regression coefficient, −4.5; 95% CI, −6.0 to −2.9; P Conclusion Among outpatients with RA followed up for 3 years after starting adalimumab, the development of antidrug antibodies was associated with lower adalimumab concentrations and a lower likelihood of minimal disease activity or clinical remission.
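To make the reported regression coefficient concrete: with a single binary regressor (antibody group), the OLS coefficient is simply the difference in group means, so a coefficient of −4.5 mg/L says antibody-positive patients averaged 4.5 mg/L less adalimumab, holding the model's other terms fixed. A minimal sketch with simulated data; all values are hypothetical, loosely echoing the medians above:

    import numpy as np

    rng = np.random.default_rng(0)
    # Simulated trough adalimumab concentrations in mg/L (hypothetical).
    conc_no_antibody = rng.normal(12.0, 3.0, size=100)  # antibody-negative patients
    conc_antibody = rng.normal(7.0, 3.0, size=40)       # titers 13-100 AU/mL

    # With one binary regressor, the OLS coefficient equals the difference
    # in group means, so a value near -5 mg/L is expected here.
    coefficient = conc_antibody.mean() - conc_no_antibody.mean()
    print(round(coefficient, 1))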

Journal ArticleDOI
26 Oct 2011-JAMA
TL;DR: This review describes risk factors and risk stratification tools that identify older adults at highest risk of hospitalization-associated disability, along with a pragmatic approach to functional status assessment in the hospital focused on evaluation of ADLs, mobility, and cognition.
Abstract: In older patients, acute medical illness that requires hospitalization is a sentinel event that often precipitates disability, leaving patients unable to live independently or to complete basic activities of daily living (ADLs). This hospitalization-associated disability occurs in approximately one-third of patients older than 70 years and may develop even when the illness that necessitated the hospitalization is successfully treated. In this article, we describe risk factors and risk stratification tools that identify older adults at highest risk of hospitalization-associated disability. We describe hospital processes that may promote hospitalization-associated disability and models of care that have been developed to prevent it. Because recognition of functional status problems is an essential prerequisite to preventing and managing disability, we also describe a pragmatic approach to functional status assessment in the hospital focused on evaluation of ADLs, mobility, and cognition. Based on studies of acute geriatric units, we describe interventions hospitals and clinicians can consider to prevent hospitalization-associated disability in patients. Finally, we describe approaches clinicians can implement to improve the quality of life of older adults who develop hospitalization-associated disability and that of their caregivers.

Journal ArticleDOI
14 Sep 2011-JAMA
TL;DR: In this population-based cohort of patients with BAV, the incidence of aortic dissection over a mean of 16 years of follow-up was low but significantly higher than in the county's general population.
Abstract: Context Bicuspid aortic valve (BAV), the most common congenital heart defect, has been thought to cause frequent and severe aortic complications; however, long-term, population-based data are lacking. Objective To determine the incidence of aortic complications in patients with BAV in a community cohort and in the general population. Design, Setting, and Participants In this retrospective cohort study, we conducted a comprehensive assessment of aortic complications in patients with BAV living in a population-based setting in Olmsted County, Minnesota. We analyzed long-term follow-up of a cohort of all Olmsted County residents diagnosed with definite BAV by echocardiography from 1980 to 1999 and searched for aortic complications in patients whose bicuspid valves had gone undiagnosed. The last year of follow-up was 2008-2009. Main Outcome Measures Thoracic aortic dissection, ascending aortic aneurysm, and aortic surgery. Results The cohort included 416 consecutive patients with definite BAV diagnosed by echocardiography, with a mean (SD) follow-up of 16 (7) years (6530 patient-years). Aortic dissection occurred in 2 of 416 patients, an incidence of 3.1 (95% CI, 0.5-9.5) cases per 10 000 patient-years and an age-adjusted relative risk of 8.4 (95% CI, 2.1-33.5; P = .003) compared with the county's general population. Aortic dissection incidences for patients 50 years or older at baseline and for patients with aortic aneurysms at baseline were 17.4 (95% CI, 2.9-53.6) and 44.9 (95% CI, 7.5-138.5) cases per 10 000 patient-years, respectively. A comprehensive search for aortic dissections in undiagnosed bicuspid valves revealed 2 additional patients, allowing estimation of the aortic dissection incidence in bicuspid valve patients irrespective of diagnosis status (1.5; 95% CI, 0.4-3.8 cases per 10 000 patient-years), which was similar to that in the diagnosed cohort. Of 384 patients without baseline aneurysms, 49 developed aneurysms at follow-up, an incidence of 84.9 (95% CI, 63.3-110.9) cases per 10 000 patient-years and an age-adjusted relative risk of 86.2 (95% CI, 65.1-114; P Conclusions In the population of patients with BAV, the incidence of aortic dissection over a mean of 16 years of follow-up was low but significantly higher than in the general population.
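The incidence figures here are simple person-time rates; for example, the dissection rate follows directly from 2 events over 6530 patient-years. A sketch of the point estimate only; the paper's confidence intervals presumably come from an exact method on the event count, which is not reproduced here:

    # Person-time incidence: events per 10,000 patient-years of follow-up.
    events = 2
    patient_years = 6530
    rate = events / patient_years * 10_000
    print(round(rate, 1))   # 3.1, matching the reported incidence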

Journal ArticleDOI
An Pan, Qi Sun, Olivia I. Okereke, Kathryn M. Rexrode, Frank B. Hu
21 Sep 2011-JAMA
TL;DR: This systematic review and meta-analysis of prospective cohort studies found that depression is associated with a significantly increased risk of stroke morbidity and mortality in adults.
Abstract: Context Several studies have suggested that depression is associated with an increased risk of stroke; however, the results are inconsistent. Objective To conduct a systematic review and meta-analysis of prospective studies assessing the association between depression and risk of developing stroke in adults. Data Sources A search of MEDLINE, EMBASE, and PsycINFO databases (to May 2011) was supplemented by manual searches of bibliographies of key retrieved articles and relevant reviews. Study Selection We included prospective cohort studies that reported risk estimates of stroke morbidity or mortality by baseline or updated depression status assessed by self-reported scales or clinician diagnosis. Data Extraction Two independent reviewers extracted data on depression status at baseline, risk estimates of stroke, study quality, and methods used to assess depression and stroke. Hazard ratios (HRs) were pooled using fixed-effect or random-effects models when appropriate. Associations were tested in subgroups representing different participant and study characteristics. Publication bias was evaluated with funnel plots and Begg test. Results The search yielded 28 prospective cohort studies (comprising 317 540 participants) that reported 8478 stroke cases (morbidity and mortality) during a follow-up period ranging from 2 to 29 years. The pooled adjusted HRs were 1.45 (95% CI, 1.29-1.63; P for heterogeneity Conclusion Depression is associated with a significantly increased risk of stroke morbidity and mortality.
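Pooling adjusted HRs across cohorts, as done here, is typically performed on the log scale with inverse-variance weights. A minimal fixed-effect sketch with hypothetical study-level estimates follows (a random-effects model would additionally inflate each study's variance by an estimated between-study component, per DerSimonian-Laird):

    import math

    def pool_fixed_effect(hrs, ci_uppers):
        # Inverse-variance fixed-effect pooling of hazard ratios on the log
        # scale; standard errors are recovered from the upper 95% CI bounds.
        log_hrs = [math.log(hr) for hr in hrs]
        ses = [(math.log(u) - lh) / 1.96 for u, lh in zip(ci_uppers, log_hrs)]
        weights = [1 / se**2 for se in ses]
        pooled = sum(w * lh for w, lh in zip(weights, log_hrs)) / sum(weights)
        se_pooled = math.sqrt(1 / sum(weights))
        return (math.exp(pooled),
                math.exp(pooled - 1.96 * se_pooled),
                math.exp(pooled + 1.96 * se_pooled))

    # Hypothetical study-level HRs and upper CI limits, for illustration only.
    hr, lo, hi = pool_fixed_effect([1.30, 1.60, 1.45], [1.80, 2.20, 1.90])
    print(f"pooled HR {hr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")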