Showing papers in "JAMA Internal Medicine in 2010"


Journal ArticleDOI
TL;DR: Intensive lifestyle intervention can produce sustained weight loss and improvements in fitness, glycemic control, and CVD risk factors in individuals with type 2 diabetes.
Abstract: BACKGROUND Lifestyle interventions produce short-term improvements in glycemia and cardiovascular disease (CVD) risk factors in individuals with type 2 diabetes mellitus, but no long-term data are available. We examined the effects of lifestyle intervention on changes in weight, fitness, and CVD risk factors during a 4-year study. METHODS The Look AHEAD (Action for Health in Diabetes) trial is a multicenter randomized clinical trial comparing the effects of an intensive lifestyle intervention (ILI) and diabetes support and education (DSE; the control group) on the incidence of major CVD events in 5145 overweight or obese individuals (59.5% female; mean age, 58.7 years) with type 2 diabetes mellitus. More than 93% of participants provided outcomes data at each annual assessment. RESULTS Averaged across 4 years, ILI participants had a greater percentage of weight loss than DSE participants (-6.15% vs -0.88%; P < .001) and greater improvements in treadmill fitness (12.74% vs 1.96%; P < .001), hemoglobin A(1c) level (-0.36% vs -0.09%; P < .001), systolic (-5.33 vs -2.97 mm Hg; P < .001) and diastolic (-2.92 vs -2.48 mm Hg; P = .01) blood pressure, and levels of high-density lipoprotein cholesterol (3.67 vs 1.97 mg/dL; P < .001) and triglycerides (-25.56 vs -19.75 mg/dL; P < .001). Reductions in low-density lipoprotein cholesterol levels were greater in DSE participants (-12.84 mg/dL) than in ILI participants (-11.27 mg/dL; P = .009) owing to greater use of medications to lower lipid levels in the DSE group. At 4 years, ILI participants maintained greater improvements than DSE participants in weight, fitness, hemoglobin A(1c) levels, systolic blood pressure, and high-density lipoprotein cholesterol levels. CONCLUSIONS Intensive lifestyle intervention can produce sustained weight loss and improvements in fitness, glycemic control, and CVD risk factors in individuals with type 2 diabetes. Whether these differences in risk factors translate to reduction in CVD events will ultimately be addressed by the Look AHEAD trial. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT00017953.

1,125 citations


Journal ArticleDOI
TL;DR: The simplified PESI has similar prognostic accuracy and clinical utility and greater ease of use compared with the original PESI and is applicable to outpatients with acute pulmonary embolism.
Abstract: Methods: The study retrospectively developed a simplified PESI clinical prediction rule for estimating the risk of 30-day mortality in a derivation cohort of Spanish outpatients. Simplified and original PESI performances were compared in the derivation cohort. The simplified PESI underwent retrospective external validation in an independent multinational cohort (Registro Informatizado de la Enfermedad Tromboembolica [RIETE] cohort) of outpatients. Results: In the derivation data set, univariate logistic regression of the original 11 PESI variables led to the removal of variables that did not reach statistical significance and subsequently produced the simplified PESI that contained the variables of age, cancer, chronic cardiopulmonary disease, heart rate, systolic blood pressure, and oxyhemoglobin saturation levels. The prognostic accuracy of the original and simplified PESI scores did not differ (area under the curve, 0.75 [95% confidence interval (CI), 0.69-0.80]). The 305 of 995 patients (30.7%) who were classified as low risk by the simplified PESI had a 30-day mortality of 1.0% (95% CI, 0.0%-2.1%) compared with 10.9% (8.5%-13.2%) in the high-risk group. In the RIETE validation cohort, 2569 of 7106 patients (36.2%) who were classified as low risk by the simplified PESI had a 30-day mortality of 1.1% (95% CI, 0.7%-1.5%) compared with 8.9% (8.1%-9.8%) in the high-risk group. Conclusion: The simplified PESI has similar prognostic accuracy and clinical utility and greater ease of use compared with the original PESI.

959 citations
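The abstract above names the six simplified PESI variables but not their cutoffs or weighting. As a rough illustration, the Python sketch below assigns one point per variable using the commonly published thresholds; treat the specific cutoffs (age > 80 years, heart rate >= 110/min, systolic blood pressure < 100 mm Hg, oxygen saturation < 90%) as assumptions rather than values taken from this abstract.

```python
# Minimal sketch of a simplified PESI calculation, assuming the commonly
# published cutoffs for the six variables listed in the abstract.

def simplified_pesi(age_years, has_cancer, has_chronic_cardiopulmonary_disease,
                    heart_rate_bpm, systolic_bp_mmhg, o2_saturation_pct):
    """Return (score, risk_class); only a score of 0 is classed as low risk."""
    score = 0
    score += age_years > 80                              # assumed cutoff
    score += bool(has_cancer)                            # history of cancer
    score += bool(has_chronic_cardiopulmonary_disease)   # chronic heart failure or lung disease
    score += heart_rate_bpm >= 110                       # assumed cutoff
    score += systolic_bp_mmhg < 100                      # assumed cutoff
    score += o2_saturation_pct < 90                      # assumed cutoff
    return score, ("low" if score == 0 else "high")

# Example: a 70-year-old with no comorbidities and normal vital signs is low risk.
print(simplified_pesi(70, False, False, 88, 125, 96))    # -> (0, 'low')
```

A score of 0 corresponds to the low-risk group with the roughly 1% 30-day mortality reported in the derivation and validation cohorts.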


Journal ArticleDOI
TL;DR: In patients with AF, all combinations of warfarin, aspirin, and clopidogrel are associated with increased risk of nonfatal and fatal bleeding.
Abstract: Background Patients with atrial fibrillation (AF) often require anticoagulation and platelet inhibition, but data are limited on the bleeding risk of combination therapy. Methods We performed a cohort study using nationwide registries to identify all Danish patients surviving first-time hospitalization for AF between January 1, 1997, and December 31, 2006, and their posthospital therapy of warfarin, aspirin, clopidogrel, and combinations of these drugs. Cox proportional hazards models were used to estimate risks of nonfatal and fatal bleeding. Results A total of 82 854 of 118 606 patients (69.9%) surviving AF hospitalization had at least 1 prescription filled for warfarin, aspirin, or clopidogrel after discharge. During mean (SD) follow-up of 3.3 (2.6) years, 13 573 patients (11.4%) experienced a nonfatal or fatal bleeding. The crude incidence rate for bleeding was highest for dual clopidogrel and warfarin therapy (13.9% per patient-year) and triple therapy (15.7% per patient-year). Using warfarin monotherapy as a reference, the hazard ratio (95% confidence interval) for the combined end point was 0.93 (0.88-0.98) for aspirin, 1.06 (0.87-1.29) for clopidogrel, 1.66 (1.34-2.04) for aspirin-clopidogrel, 1.83 (1.72-1.96) for warfarin-aspirin, 3.08 (2.32-3.91) for warfarin-clopidogrel, and 3.70 (2.89-4.76) for warfarin-aspirin-clopidogrel. Conclusions In patients with AF, all combinations of warfarin, aspirin, and clopidogrel are associated with increased risk of nonfatal and fatal bleeding. Dual warfarin and clopidogrel therapy and triple therapy carried a more than 3-fold higher risk than did warfarin monotherapy.

790 citations


Journal ArticleDOI
TL;DR: Although RRTs have broad appeal, robust evidence to support their effectiveness in reducing hospital mortality is lacking and studies frequently found evidence that deaths were prevented out of proportion to reductions in cases of cardiopulmonary arrest, raising questions about mechanisms of improvement.
Abstract: Background Although rapid response teams (RRTs) increasingly have been adopted by hospitals, their effectiveness in reducing hospital mortality remains uncertain. We conducted a meta-analysis to assess the effect of RRTs on reducing cardiopulmonary arrest and hospital mortality rates. Methods We conducted a systematic review of studies published from January 1, 1950, through November 31, 2008, using PubMed, EMBASE, Web of Knowledge, CINAHL, and all Evidence-Based Medicine Reviews. Randomized clinical trials and prospective studies of RRTs that reported data on changes in the primary outcome of hospital mortality or the secondary outcome of cardiopulmonary arrest cases were included. Results Eighteen studies from 17 publications (with 1 treated as 2 separate studies) were identified, involving nearly 1.3 million hospital admissions. Implementation of an RRT in adults was associated with a 33.8% reduction in rates of cardiopulmonary arrest outside the intensive care unit (ICU) (relative risk [RR], 0.66; 95% confidence interval [CI], 0.54-0.80) but was not associated with lower hospital mortality rates (RR, 0.96; 95% CI, 0.84-1.09). In children, implementation of an RRT was associated with a 37.7% reduction in rates of cardiopulmonary arrest outside the ICU (RR, 0.62; 95% CI, 0.46-0.84) and a 21.4% reduction in hospital mortality rates (RR, 0.79; 95% CI, 0.63-0.98). The pooled mortality estimate in children, however, was not robust to sensitivity analyses. Moreover, studies frequently found evidence that deaths were prevented out of proportion to reductions in cases of cardiopulmonary arrest, raising questions about mechanisms of improvement. Conclusion Although RRTs have broad appeal, robust evidence to support their effectiveness in reducing hospital mortality is lacking.

721 citations


Journal ArticleDOI
TL;DR: Among nurses at 2 hospitals, the occurrence and frequency of interruptions were significantly associated with the incidence of procedural failures and clinical errors; nurse experience provided no protection and was associated with higher procedural failure rates.
Abstract: Background: Interruptions have been implicated as a cause of clinical errors, yet, to our knowledge, no empirical studies of this relationship exist. We tested the hypothesis that interruptions during medication administration increase errors. Methods: We performed an observational study of nurses preparing and administering medications in 6 wards at 2 major teaching hospitals in Sydney, Australia. Procedural failures and interruptions were recorded during direct observation. Clinical errors were identified by comparing observational data with patients' medication charts. A volunteer sample of 98 nurses (representing a participation rate of 82%) were observed preparing and administering 4271 medications to 720 patients over 505 hours from September 2006 through March 2008. Associations between procedural failures (10 indicators; eg, aseptic technique) and clinical errors (12 indicators; eg, wrong dose) and interruptions, and between interruptions and potential severity of failures and errors, were the main outcome measures. Results: Each interruption was associated with a 12.1% increase in procedural failures and a 12.7% increase in clinical errors. The association between interruptions and clinical errors was independent of hospital and nurse characteristics. Interruptions occurred in 53.1% of administrations (95% confidence interval [CI], 51.6%-54.6%). Of total drug administrations, 74.4% (n=3177) had at least 1 procedural failure (95% CI, 73.1%-75.7%). Administrations with no interruptions (n=2005) had a procedural failure rate of 69.6% (n=1395; 95% CI, 67.6%-71.6%), which increased to 84.6% (n=148; 95% CI, 79.2%-89.9%) with 3 interruptions. Overall, 25.0% (n=1067; 95% CI, 23.7%-26.3%) of administrations had at least 1 clinical error. Those with no interruptions had a rate of 25.3% (n=507; 95% CI, 23.4%-27.2%), whereas those with 3 interruptions had a rate of 38.9% (n=68; 95% CI, 31.6%-46.1%). Nurse experience provided no protection against making a clinical error and was associated with higher procedural failure rates. Error severity increased with interruption frequency. Without interruption, the estimated risk of a major error was 2.3%; with 4 interruptions this risk doubled to 4.7% (95% CI, 2.9%-7.4%; P < .001). Conclusion: Among nurses at 2 hospitals, the occurrence and frequency of interruptions were significantly associated with the incidence of procedural failures and clinical errors.

673 citations


Journal ArticleDOI
TL;DR: Twelve months of once-weekly or twice-weekly resistance training benefited the executive cognitive function of selective attention and conflict resolution among senior women.
Abstract: Background Cognitive decline among seniors is a pressing health care issue. Specific exercise training may combat cognitive decline. We compared the effect of once-weekly and twice-weekly resistance training with that of twice-weekly balance and tone exercise training on the performance of executive cognitive functions in senior women. Methods In this single-blinded randomized trial, 155 community-dwelling women aged 65 to 75 years living in Vancouver were randomly allocated to once-weekly (n = 54) or twice-weekly (n = 52) resistance training or twice-weekly balance and tone training (control group) (n = 49). The primary outcome measure was performance on the Stroop test, an executive cognitive test of selective attention and conflict resolution. Secondary outcomes of executive cognitive functions included set shifting as measured by the Trail Making Tests (parts A and B) and working memory as assessed by verbal digit span forward and backward tests. Gait speed, muscular function, and whole-brain volume were also secondary outcome measures. Results Both resistance training groups significantly improved their performance on the Stroop test compared with those in the balance and tone group (P ≤ .03). Task performance improved by 12.6% and 10.9% in the once-weekly and twice-weekly resistance training groups, respectively; it deteriorated by 0.5% in the balance and tone group. Enhanced selective attention and conflict resolution was significantly associated with increased gait speed. Both resistance training groups demonstrated reductions in whole-brain volume compared with the balance and tone group at the end of the study (P ≤ .03). Conclusion Twelve months of once-weekly or twice-weekly resistance training benefited the executive cognitive function of selective attention and conflict resolution among senior women. Trial Registration clinicaltrials.gov Identifier: NCT00426881

649 citations


Journal ArticleDOI
TL;DR: The totality of randomized clinical trials continue to demonstrate increased risk for MI although not for CV or all-cause mortality, and the current findings suggest an unfavorable benefit to risk ratio for rosiglitazone.
Abstract: Context Controversy regarding the effects of rosiglitazone therapy on myocardial infarction (MI) and cardiovascular (CV) mortality persists 3 years after a meta-analysis initially raised concerns about the use of this drug. Objective To systematically review the effects of rosiglitazone therapy on MI and mortality (CV and all-cause). Data Sources We searched MEDLINE, the Web site of the Food and Drug Administration, and the GlaxoSmithKline clinical trials registry for trials published through February 2010. Study Selection The study included all randomized controlled trials of rosiglitazone at least 24 weeks in duration that reported CV adverse events. Data Extraction Odds ratios (ORs) for MI and mortality were estimated using a fixed-effects meta-analysis of 56 trials, which included 35 531 patients: 19 509 who received rosiglitazone and 16 022 who received control therapy. Results Rosiglitazone therapy significantly increased the risk of MI (OR, 1.28; 95% confidence interval [CI], 1.02-1.63; P = .04) but not CV mortality (OR, 1.03; 95% CI, 0.78-1.36; P = .86). Exclusion of the RECORD (Rosiglitazone Evaluated for Cardiac Outcomes and Regulation of Glycemia in Diabetes) trial yielded similar results but with more elevated estimates of the OR for MI (OR, 1.39; 95% CI, 1.02-1.89; P = .04) and CV mortality (OR, 1.46; 95% CI, 0.92-2.33; P = .11). An alternative analysis pooling trials according to allocation ratios allowed inclusion of studies with no events, yielding similar results for MI (OR, 1.28; 95% CI, 1.01-1.62; P = .04) and CV mortality (OR 0.99; 95% CI, 0.75-1.32; P = .96). Conclusions Eleven years after the introduction of rosiglitazone, the totality of randomized clinical trials continue to demonstrate increased risk for MI although not for CV or all-cause mortality. The current findings suggest an unfavorable benefit to risk ratio for rosiglitazone.

595 citations
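The meta-analysis above pooled trial-level odds ratios with a fixed-effects model. As a rough illustration of that pooling step, here is a minimal Python sketch of fixed-effect (inverse-variance) pooling of log odds ratios; the 2x2 event counts are invented, and the authors' exact estimator (for example, their handling of zero-event trials or use of an alternative fixed-effects method) may differ.

```python
import math

# Minimal sketch: inverse-variance fixed-effect pooling of trial odds ratios
# on the log scale. Counts below are hypothetical, not the 56 trials in the paper.

def log_or_and_var(events_rx, total_rx, events_ctl, total_ctl):
    a, b = events_rx, total_rx - events_rx
    c, d = events_ctl, total_ctl - events_ctl
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d          # variance of the log odds ratio
    return log_or, var

def fixed_effect_pool(trials):
    """trials: iterable of (events_rx, total_rx, events_ctl, total_ctl)."""
    stats = [log_or_and_var(*t) for t in trials]
    weights = [1 / v for _, v in stats]          # inverse-variance weights
    pooled = sum(w * lo for (lo, _), w in zip(stats, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci

# Hypothetical counts: (MI events on drug, n, MI events on control, n)
example = [(5, 400, 3, 400), (12, 1500, 8, 1400), (2, 250, 1, 260)]
print(fixed_effect_pool(example))
```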


Journal ArticleDOI
TL;DR: For overweight or obese persons with above-normal BP, the addition of exercise and weight loss to the DASH diet resulted in even larger BP reductions, greater improvements in vascular and autonomic function, and reduced left ventricular mass.
Abstract: Background Although the DASH (Dietary Approaches to Stop Hypertension) diet has been shown to lower blood pressure (BP) in short-term feeding studies, it has not been shown to lower BP among free-living individuals, nor has it been shown to alter cardiovascular biomarkers of risk. Objective To compare the DASH diet alone or combined with a weight management program with usual diet controls among participants with prehypertension or stage 1 hypertension (systolic BP, 130-159 mm Hg; or diastolic BP, 85-99 mm Hg). Design and Setting Randomized, controlled trial in a tertiary care medical center with assessments at baseline and 4 months. Enrollment began October 29, 2003, and ended July 28, 2008. Participants Overweight or obese, unmedicated outpatients with high BP (N = 144). Interventions Usual diet controls, DASH diet alone, and DASH diet plus weight management. Outcome Measures The main outcome measure is BP measured in the clinic and by ambulatory BP monitoring. Secondary outcomes included pulse wave velocity, flow-mediated dilation of the brachial artery, baroreflex sensitivity, and left ventricular mass. Results Clinic-measured BP was reduced by 16.1/9.9 mm Hg (DASH plus weight management), 11.2/7.5 mm Hg (DASH alone), and 3.4/3.8 mm Hg (usual diet controls). Conclusion For overweight or obese persons with above-normal BP, the addition of exercise and weight loss to the DASH diet resulted in even larger BP reductions, greater improvements in vascular and autonomic function, and reduced left ventricular mass. Clinical Trial Registration clinicaltrials.gov Identifier: NCT00571844

506 citations


Journal ArticleDOI
TL;DR: Dietary supplementation with folic acid to lower homocysteine levels had no significant effects within 5 years on cardiovascular events or on overall cancer or mortality in the populations studied.
Abstract: Elevated plasma homocysteine levels have been associated with higher risks of cardiovascular disease, but the effects on disease rates of supplementation with folic acid to lower plasma homocysteine levels are uncertain. Individual participant data were obtained for a meta-analysis of 8 large, randomized, placebo-controlled trials of folic acid supplementation involving 37 485 individuals at increased risk of cardiovascular disease. The analyses involved intention-to-treat comparisons of first events during the scheduled treatment period. There were 9326 major vascular events (3990 major coronary events, 1528 strokes, and 5068 revascularizations), 3010 cancers, and 5125 deaths. Folic acid allocation yielded an average 25% reduction in homocysteine levels. During a median follow-up of 5 years, folic acid allocation had no significant effects on vascular outcomes, with rate ratios (95% confidence intervals) of 1.01 (0.97-1.05) for major vascular events, 1.03 (0.97-1.10) for major coronary events, and 0.96 (0.87-1.06) for stroke. Likewise, there were no significant effects on vascular outcomes in any of the subgroups studied or on overall vascular mortality. There was no significant effect on the rate ratios (95% confidence intervals) for overall cancer incidence (1.05 [0.98-1.13]), cancer mortality (1.00 [0.85-1.18]) or all-cause mortality (1.02 [0.97-1.08]) during the whole scheduled treatment period or during the later years of it. Dietary supplementation with folic acid to lower homocysteine levels had no significant effects within 5 years on cardiovascular events or on overall cancer or mortality in the populations studied.

468 citations


Journal ArticleDOI
TL;DR: It is feasible to decrease medication burden in community-dwelling elderly patients and this tool would be suitable for larger randomized controlled trials in different clinical settings.
Abstract: Background Polypharmacy and inappropriate medication use is a problem in elderly patients, who are more likely to experience adverse effects from multiple treatments and less likely to obtain the same therapeutic benefit as younger populations. The Good Palliative–Geriatric Practice algorithm for drug discontinuation has been shown to be effective in reducing polypharmacy and improving mortality and morbidity in nursing home inpatients. This study reports the feasibility of this approach in community-dwelling older patients. Methods The Good Palliative–Geriatric Practice algorithm was applied to a cohort of 70 community-dwelling older patients to recommend drug discontinuations. Success rates of discontinuation, morbidity, mortality, and changes in health status were recorded. Results The mean (SD) age of the 70 patients was 82.8 (6.9) years. Forty-three patients (61%) had 3 or more and 26% had 5 or more comorbidities. The mean follow-up was 19 months. Participants used a mean (SD) of 7.7 (3.7) medications. Protocol indicated that discontinuation was recommended for 311 medications in 64 patients (58% of drugs; mean [SD], 4.4 [2.5] drugs per patient overall, 4.9 per patient who had discontinuation). Of the discontinued drug therapies, 2% were restarted because of recurrence of the original indication. Taking nonconsent and failures together, successful discontinuation was achieved in 81%. Ten elderly patients (14%) died after a mean follow-up of 13 months, with the mean age at death of 89 years. No significant adverse events or deaths were attributable to discontinuation, and 88% of patients reported global improvement in health. Conclusions It is feasible to decrease medication burden in community-dwelling elderly patients. This tool would be suitable for larger randomized controlled trials in different clinical settings.

466 citations


Journal ArticleDOI
TL;DR: This literature-based meta-analysis did not find evidence for the benefit of statin therapy on all-cause mortality in a high-risk primary prevention set-up.
Abstract: Background: Statins have been shown to reduce the risk of all-cause mortality among individuals with clinical history of coronary heart disease. However, it remains uncertain whether statins have similar mortality benefit in a high-risk primary prevention setting. Notably, all systematic reviews to date included trials that in part incorporated participants with prior cardiovascular disease (CVD) at baseline. Our objective was to reliably determine if statin therapy reduces all-cause mortality among intermediate to high-risk individuals without a history of CVD. Data Sources: Trials were identified through computerized literature searches of MEDLINE and Cochrane databases (January 1970-May 2009) using terms related to statins, clinical trials, and cardiovascular end points and through bibliographies of retrieved studies. Study Selection: Prospective, randomized controlled trials of statin therapy performed in individuals free from CVD at baseline and that reported details, or could supply data, on all-cause mortality. Data Extraction: Relevant data including the number of patients randomized, mean duration of follow-up, and the number of incident deaths were obtained from the principal publication or by correspondence with the investigators. Data Synthesis: Data were combined from 11 studies and effect estimates were pooled using a random-effects model meta-analysis, with heterogeneity assessed with the I2 statistic. Data were available on 65 229 participants followed for approximately 244 000 person-years, during which 2793 deaths occurred. The use of statins in this high-risk primary prevention setting was not associated with a statistically significant reduction (risk ratio, 0.91; 95% confidence interval, 0.83-1.01) in the risk of all-cause mortality. There was no statistical evidence of heterogeneity among studies (I2 = 23%; 95% confidence interval, 0%-61% [P = .23]). Conclusion: This literature-based meta-analysis did not find evidence for the benefit of statin therapy on all-cause mortality in a high-risk primary prevention set-up.
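As a companion to the analysis described above, the following Python sketch shows the standard DerSimonian-Laird random-effects pooling of log risk ratios together with the I2 heterogeneity statistic. The per-trial estimates and standard errors are invented for illustration and are not the 11 trials analyzed in the paper.

```python
import math

# Minimal sketch: DerSimonian-Laird random-effects pooling of log risk ratios
# with Cochran's Q and I^2. Inputs are hypothetical trial-level estimates.

def random_effects_pool(log_rrs, ses):
    w = [1 / s**2 for s in ses]                                   # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))   # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_star = [1 / (s**2 + tau2) for s in ses]                     # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0           # I^2 in percent
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci, i2

# Hypothetical trial-level risk ratios and standard errors of their logs.
log_rrs = [math.log(x) for x in (0.85, 0.95, 1.02, 0.88)]
ses = [0.10, 0.08, 0.12, 0.15]
print(random_effects_pool(log_rrs, ses))
```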

Journal ArticleDOI
TL;DR: All forms of hyponatremia are independently associated with in-hospital mortality and heightened resource consumption.
Abstract: Background Hyponatremia is the most common electrolyte disorder encountered in hospitalized patients. Methods We evaluated whether hospital-associated hyponatremia has an independent effect on all-cause mortality, hospital length of stay (LOS), and patient disposition. This cohort study included all adult hospitalizations at an academic medical center occurring between 2000 and 2007 for which an admission serum sodium concentration ([Na+]) was available (N = 53 236). We examined community-acquired hyponatremia (admission serum [Na+] <138 mEq/L) and hospital-acquired hyponatremia (nadir serum [Na+] <138 mEq/L). The independent associations of these hyponatremic presentations with in-hospital mortality, LOS, and patient disposition were evaluated using generalized estimating equations adjusted for age, sex, race, admission service, and Deyo-Charlson Comorbidity Index score. Results Community-acquired hyponatremia occurred in 37.9% of hospitalizations and was associated with adjusted odds ratios (ORs) of 1.52 (95% confidence interval [CI], 1.36-1.69) for in-hospital mortality and 1.12 (95% CI, 1.08-1.17) for discharge to a short- or long-term care facility and a 14% (95% CI, 11%-16%) adjusted increase in LOS. Hospital-acquired hyponatremia developed in 38.2% of hospitalizations longer than 1 day in which initial serum [Na+] was 138 to 142 mEq/L. Hospital-acquired hyponatremia was associated with adjusted ORs of 1.66 (95% CI, 1.39-1.98) for in-hospital mortality and 1.64 (95% CI, 1.55-1.74) for discharge to a facility and a 64% (95% CI, 60%-68%) adjusted increase in LOS. The strength of these associations tended to increase with hyponatremia severity. Conclusions Hospital-associated hyponatremia is a common occurrence. All forms of hyponatremia are independently associated with in-hospital mortality and heightened resource consumption.

Journal ArticleDOI
TL;DR: Daily rounds by a multidisciplinary team are associated with lower mortality among medical ICU patients, and the survival benefit of intensivist physician staffing is in part explained by the presence of multidisciplinary teams in high-intensity physician-staffed ICUs.
Abstract: After adjustment, the lowest odds of death were in intensive care units (ICUs) with high-intensity physician staffing and multidisciplinary care teams (OR, 0.78; 95% CI, 0.68-0.89 [P < .001]), followed by ICUs with low-intensity physician staffing and multidisciplinary care teams (OR, 0.88; 95% CI, 0.79-0.97 [P = .01]), compared with hospitals with low-intensity physician staffing but without multidisciplinary care teams. The effects of multidisciplinary care were consistent across key subgroups including patients with sepsis, patients requiring invasive mechanical ventilation, and patients in the highest quartile of severity of illness. Conclusions: Daily rounds by a multidisciplinary team are associated with lower mortality among medical ICU patients. The survival benefit of intensivist physician staffing is in part explained by the presence of multidisciplinary teams in high-intensity physician-staffed ICUs.

Journal ArticleDOI
TL;DR: Substitution of whole grains, including brown rice, for white rice may lower risk of type 2 diabetes, and the recommendation that most carbohydrate intake should come from whole grains rather than refined grains to help prevent type 2 diabetes is supported.
Abstract: Background Because of differences in processing and nutrients, brown rice and white rice may have different effects on risk of type 2 diabetes mellitus. We examined white and brown rice consumption in relation to type 2 diabetes risk prospectively in the Health Professionals Follow-up Study and the Nurses' Health Study I and II. Methods We prospectively ascertained and updated diet, lifestyle practices, and disease status among 39 765 men and 157 463 women in these cohorts. Results After multivariate adjustment for age and other lifestyle and dietary risk factors, higher intake of white rice (≥5 servings per week vs Conclusions Substitution of whole grains, including brown rice, for white rice may lower risk of type 2 diabetes. These data support the recommendation that most carbohydrate intake should come from whole grains rather than refined grains to help prevent type 2 diabetes.

Journal ArticleDOI
TL;DR: Low levels of vitamin D were associated with substantial cognitive decline in the elderly population studied over a 6-year period, which raises important new possibilities for treatment and prevention.
Abstract: Background: To our knowledge, no prospective study has examined the association between vitamin D and cognitive decline or dementia. Methods: We determined whether low levels of serum 25-hydroxyvitamin D (25[OH]D) were associated with an increased risk of substantial cognitive decline in the InCHIANTI population-based study conducted in Italy between 1998 and 2006 with follow-up assessments every 3 years. A total of 858 adults 65 years or older completed interviews, cognitive assessments, and medical examinations and provided blood samples. Cognitive decline was assessed using the Mini-Mental State Examination (MMSE), and substantial decline was defined as 3 or more points. The Trail-Making Tests A and B were also used, and substantial decline was defined as the worst 10% of the distribution of decline or as discontinued testing. Results: The multivariate adjusted relative risk (95% confidence interval [CI]) of substantial cognitive decline on the MMSE in participants who were severely serum 25(OH)D deficient (levels <25 nmol/L) in comparison with those with sufficient levels of 25(OH)D (≥75 nmol/L) was 1.60 (95% CI, 1.19-2.00). Multivariate adjusted random-effects models demonstrated that the scores of participants who were severely 25(OH)D deficient declined by an additional 0.3 MMSE points per year more than those with sufficient levels of 25(OH)D. The relative risk for substantial decline on Trail-Making Test B was 1.31 (95% CI, 1.03-1.51) among those who were severely 25(OH)D deficient compared with those with sufficient levels of 25(OH)D. No significant association was observed for Trail-Making Test A. Conclusion: Low levels of vitamin D were associated with substantial cognitive decline in the elderly population studied over a 6-year period, which raises important new possibilities for treatment and prevention. Arch Intern Med. 2010;170(13):1135-1141

Journal ArticleDOI
TL;DR: Evidence of a dose-response effect provides further support for the potentially causal nature of iatrogenic acid suppression in the development of nosocomial C difficile infection.
Abstract: Background The incidence and severity of Clostridium difficile infections are increasing. Acid-suppressive therapy has been suggested as a risk factor for C difficile, but this remains controversial. Methods We conducted a pharmacoepidemiologic cohort study, performing a secondary analysis of data collected prospectively on 101 796 discharges from a tertiary care medical center during a 5-year period. The primary exposure of interest was acid suppression therapy, classified by the most intense acid suppression therapy received (no acid suppression, histamine-2-receptor antagonist [H2RA] therapy, daily proton pump inhibitor [PPI], and PPI more frequently than daily). Results As the level of acid suppression increased, the risk of nosocomial C difficile infection increased, from 0.3% (95% confidence interval [CI], 0.21%-0.31%) in patients not receiving acid suppressive therapy to 0.6% (95% CI, 0.49%-0.79%) in those receiving H2RA therapy, to 0.9% (95% CI, 0.80%-0.98%) in those receiving daily PPI treatment, and to 1.4% (1.15%-1.71%) in those receiving more frequent PPI therapy. After adjustment for comorbid conditions, age, antibiotics, and propensity score-based likelihood of receipt of acid-suppression therapy, the association persisted, increasing from an odds ratio of 1 (no acid suppression [reference]) to 1.53 (95% CI, 1.12-2.10) (H2RA), to 1.74 (95% CI, 1.39-2.18) (daily PPI), and to 2.36 (95% CI, 1.79-3.11) (more frequent PPI). Similar estimates were found with a matched cohort analysis and with nested case-control techniques. Conclusions Increasing levels of pharmacologic acid suppression are associated with increased risks of nosocomial C difficile infection. This evidence of a dose-response effect provides further support for the potentially causal nature of iatrogenic acid suppression in the development of nosocomial C difficile infection.

Journal ArticleDOI
TL;DR: The effects of nut consumption were dose related, and different types of nuts had similar effects on blood lipid levels, particularly among subjects with higher LDL-C or with lower BMI.
Abstract: Background Epidemiological studies have consistently associated nut consumption with reduced risk for coronary heart disease. Subsequently, many dietary intervention trials investigated the effects of nut consumption on blood lipid levels. The objectives of this study were to estimate the effects of nut consumption on blood lipid levels and to examine whether different factors modify the effects. Methods We pooled individual primary data from 25 nut consumption trials conducted in 7 countries among 583 men and women with normolipidemia and hypercholesterolemia who were not taking lipid-lowering medications. In a pooled analysis, we used mixed linear models to assess the effects of nut consumption and the potential interactions. Results With a mean daily consumption of 67 g of nuts, the following estimated mean reductions were achieved: total cholesterol concentration (10.9 mg/dL [5.1% change]), low-density lipoprotein cholesterol concentration (LDL-C) (10.2 mg/dL [7.4% change]), ratio of LDL-C to high-density lipoprotein cholesterol concentration (HDL-C) (0.22 [8.3% change]), and ratio of total cholesterol concentration to HDL-C (0.24 [5.6% change]). Conclusion Nut consumption improves blood lipid levels in a dose-related manner, particularly among subjects with higher LDL-C or with lower BMI.

Journal ArticleDOI
TL;DR: Exercise training reduces anxiety symptoms among sedentary patients who have a chronic illness; the analysis also examined whether selected variables of theoretical or practical importance moderate the effect.
Abstract: Background Anxiety often remains unrecognized or untreated among patients with a chronic illness. Exercise training may help improve anxiety symptoms among patients. We estimated the population effect size for exercise training effects on anxiety and determined whether selected variables of theoretical or practical importance moderate the effect. Methods Articles published from January 1995 to August 2007 were located using the Physical Activity Guidelines for Americans Scientific Database, supplemented by additional searches through December 2008 of the following databases: Google Scholar, MEDLINE, PsycINFO, PubMed, and Web of Science. Forty English-language articles in scholarly journals involving sedentary adults with a chronic illness were selected. They included both an anxiety outcome measured at baseline and after exercise training and random assignment to either an exercise intervention of 3 or more weeks or a comparison condition that lacked exercise. Two co-authors independently calculated the Hedges d effect sizes from studies of 2914 patients and extracted information regarding potential moderator variables. Random effects models were used to estimate sampling error and population variance for all analyses. Results Compared with no treatment conditions, exercise training significantly reduced anxiety symptoms by a mean effect Δ of 0.29 (95% confidence interval, 0.23-0.36). Exercise training programs lasting no more than 12 weeks, using session durations of at least 30 minutes, and an anxiety report time frame greater than the past week resulted in the largest anxiety improvements. Conclusion Exercise training reduces anxiety symptoms among sedentary patients who have a chronic illness.
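To make the effect-size metric concrete, here is a minimal Python sketch of the Hedges d calculation for a single trial comparing an exercise group with a no-exercise comparison group; the group means, standard deviations, and sample sizes are invented for illustration.

```python
import math

# Minimal sketch: Hedges d for one trial (Cohen's d with a small-sample correction),
# the per-study quantity that a meta-analysis like this one pools.

def hedges_d(mean_tx, sd_tx, n_tx, mean_ctl, sd_ctl, n_ctl):
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctl - 1) * sd_ctl**2)
                          / (n_tx + n_ctl - 2))
    cohen_d = (mean_ctl - mean_tx) / pooled_sd      # positive = less anxiety after exercise
    j = 1 - 3 / (4 * (n_tx + n_ctl) - 9)            # small-sample bias correction
    return j * cohen_d

# Hypothetical follow-up anxiety scores (lower = better) in exercise vs control groups.
print(round(hedges_d(mean_tx=9.8, sd_tx=4.1, n_tx=40,
                     mean_ctl=11.2, sd_ctl=4.4, n_ctl=38), 3))
```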

Journal ArticleDOI
TL;DR: The comparative safety of analgesics varies depending on the safety event studied, and opioid use exhibits an increased relative risk of many safety events compared with nsNSAIDs.
Abstract: Background The safety of alternative analgesics is unclear. We examined the comparative safety of nonselective NSAIDs (nsNSAIDs), selective cyclooxygenase 2 inhibitors (coxibs), and opioids. Methods Medicare beneficiaries from Pennsylvania and New Jersey who initiated therapy with an nsNSAID, a coxib, or an opioid from January 1, 1999, through December 31, 2005, were matched on propensity scores. We studied the risk of adverse events related to analgesics using incidence rates and adjusted hazard ratios (HRs) from Cox proportional hazards regression. Results The mean age of participants was 80.0 years, and almost 85% were female. After propensity score matching, the 3 analgesic cohorts were well balanced on baseline covariates. Compared with nsNSAIDs, coxibs (HR, 1.28; 95% confidence interval [CI], 1.01-1.62) and opioids (1.77; 1.39-2.24) exhibited elevated relative risk for cardiovascular events. Gastrointestinal tract bleeding risk was reduced for coxib users (HR, 0.60; 95% CI, 0.35-1.00) but was similar for opioid users. Use of coxibs and nsNSAIDs resulted in a similar risk for fracture; however, fracture risk was elevated with opioid use (HR, 4.47; 95% CI, 3.12-6.41). Use of opioids (HR, 1.68; 95% CI, 1.37-2.07) but not coxibs was associated with an increased risk for safety events requiring hospitalization compared with use of nsNSAIDs. In addition, use of opioids (HR, 1.87; 95% CI, 1.39-2.53) but not coxibs raised the risk of all-cause mortality compared with use of nsNSAIDs. Conclusions The comparative safety of analgesics varies depending on the safety event studied. Opioid use exhibits an increased relative risk of many safety events compared with nsNSAIDs.

Journal ArticleDOI
TL;DR: The results provide compelling evidence that the diabetes-depression association is bidirectional; participants with increasing severity of depressive symptoms showed a monotonically elevated risk of developing type 2 diabetes.
Abstract: Background Although it has been hypothesized that the diabetes-depression relation is bidirectional, few studies have addressed this hypothesis in a prospective setting. Methods A total of 65 381 women aged 50 to 75 years in 1996 were observed until 2006. Clinical depression was defined as having diagnosed depression or using antidepressants, and depressed mood was defined as having clinical depression or severe depressive symptoms, ie, a 5-item Mental Health Index (MHI-5) score of 52 or less. Self-reported type 2 diabetes mellitus was confirmed by means of a supplementary questionnaire validated by medical record review. Results During 10 years of follow-up (531 097 person-years), 2844 incident cases of type 2 diabetes mellitus were documented. Compared with referents (MHI-5 score of 86-100) who had the best depressive symptom scores, participants with increased severity of symptoms (MHI-5 scores of 76-85 or 53-75, or depressed mood) showed a monotonic elevated risk of developing type 2 diabetes ( P for trend = .002 in the multivariable-adjusted model). The relative risk for individuals with depressed mood was 1.17 (95% confidence interval [CI], 1.05-1.30) after adjustment for various covariates, and participants using antidepressants were at a particularly higher relative risk (1.25; 95% CI, 1.10-1.41). In a parallel analysis, 7415 cases of incident clinical depression were documented (474 722 person-years). Compared with nondiabetic subjects, those with diabetes had a relative risk (95% CI) of developing clinical depression after controlling for all covariates of 1.29 (1.18-1.40), and it was 1.25 (1.09-1.42), 1.24 (1.09-1.41), and 1.53 (1.26-1.85) in diabetic subjects without medications, with oral hypoglycemic agents, and with insulin therapy, respectively. These associations remained significant after adjustment for diabetes-related comorbidities. Conclusion Our results provide compelling evidence that the diabetes-depression association is bidirectional.

Journal ArticleDOI
TL;DR: The single screening question accurately identified drug use in this sample of primary care patients, supporting the usefulness of this brief screen in primary care.
Abstract: anillegaldrugorusedaprescriptionmedicationfornonmedical reasons?” A response of at least 1 time was considered positive for drug use. They were also asked the 10-item Drug Abuse Screening Test (DAST-10). The reference standard was the presence or absence of current (past year) drug use or a drug use disorder (abuse or dependence) as determined by a standardized diagnostic interview.Drugusewasalsodeterminedbyoralfluidtesting for common drugs of abuse. Results:Of394eligibleprimarycarepatients,286(73%) completedtheinterview.Thesinglescreeningquestionwas 100%sensitive(95%confidenceinterval[CI],90.6%-100%) and 73.5% specific (95% CI, 67.7%-78.6%) for the detection of a drug use disorder. It was less sensitive for the detection of self-reported current drug use (92.9%; 95% CI, 86.1%-96.5%) and drug use detected by oral fluid testing orself-report(81.8%;95%CI,72.5%-88.5%).TestcharacteristicsweresimilartothoseoftheDAST-10andwereaffectedverylittlebyparticipantdemographiccharacteristics. Conclusion: The single screening question accurately identified drug use in this sample of primary care patients, supporting the usefulness of this brief screen in primary care. Arch Intern Med. 2010;170(13):1155-1160

Journal ArticleDOI
TL;DR: The combined effect of poor health behaviors on mortality was substantial, indicating that modest, but sustained, improvements to diet and lifestyle could have significant public health benefits.
Abstract: Background: Physical activity, diet, smoking, and alcohol consumption have been shown to be related to mortality. We examined prospectively the individual and combined influence of these risk factors on total and cause-specific mortality. Methods: The prospective cohort study included 4886 individuals at least 18 years old from a United Kingdom-wide population in 1984 to 1985. A health behavior score was calculated, allocating 1 point for each poor behavior: smoking; fruits and vegetables consumed less than 3 times daily; less than 2 hours physical activity per week; and weekly consumption of more than 14 units of alcohol (in women) and more than 21 units (in men) (range of points, 0-4). We examined the relationship between health behaviors and mortality using Cox models and compared it with the mortality risk associated with aging. Results: During a mean follow-up period of 20 years, 1080 participants died, 431 from cardiovascular diseases, 318 from cancer, and 331 from other causes. Adjusted hazard ratios and 95% confidence intervals (CIs) for total mortality associated with 1, 2, 3, and 4 poor health behaviors compared with those with none were 1.85 (95% CI, 1.28-2.68), 2.23 (95% CI, 1.55-3.20), 2.76 (95% CI, 1.91-3.99), and 3.49 (95% CI, 2.31-5.26), respectively (P value for trend <.001). The effect of combined health behaviors was strongest for other deaths and weakest for cancer mortality. Those with 4 compared with those with no poor health behaviors had an all-cause mortality risk equivalent to being 12 years older. Conclusion: The combined effect of poor health behaviors on mortality was substantial, indicating that modest, but sustained, improvements to diet and lifestyle could have significant public health benefits.
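The scoring rule described in the Methods is simple enough to state in code; the following Python sketch assigns the 0-4 point score exactly as described above (field names are illustrative).

```python
# Minimal sketch of the 0-4 poor-health-behavior score: one point each for smoking,
# fruits and vegetables fewer than 3 times daily, less than 2 hours of physical
# activity per week, and alcohol above 14 (women) or 21 (men) units per week.

def health_behavior_score(smokes, fruit_veg_times_per_day, activity_hours_per_week,
                          alcohol_units_per_week, sex):
    score = 0
    score += bool(smokes)
    score += fruit_veg_times_per_day < 3
    score += activity_hours_per_week < 2
    limit = 14 if sex == "female" else 21
    score += alcohol_units_per_week > limit
    return score  # 0 (no poor behaviors) to 4

# Example: a male smoker with low activity but adequate diet and moderate drinking scores 2.
print(health_behavior_score(smokes=True, fruit_veg_times_per_day=4,
                            activity_hours_per_week=1, alcohol_units_per_week=10,
                            sex="male"))
```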

Journal ArticleDOI
TL;DR: Metformin use may decrease mortality among patients with diabetes when used as a means of secondary prevention, including subsets of patients in whom met formin use is not now recommended.
Abstract: Results: The mortality rates were 6.3% (95% confidence interval [CI], 5.2%-7.4%) with metformin and 9.8% (8.4%-11.2%) without metformin; the adjusted hazard ratio (HR) was 0.76 (0.65-0.89; P < .001). The association with lower mortality was consistent among subgroups, noticeably in patients with a history of congestive heart failure (HR, 0.69; 95% CI, 0.54-0.90; P = .006), patients older than 65 years (0.77; 0.62-0.95; P = .02), and patients with an estimated creatinine clearance of 30 to 60 mL/min/1.73 m2 (0.64; 95% CI, 0.48-0.86; P = .003) (to convert creatinine clearance to mL/s/m2, multiply by 0.0167). Conclusions: Metformin use may decrease mortality among patients with diabetes when used as a means of secondary prevention, including subsets of patients in whom metformin use is not now recommended. Metformin use should be tested prospectively in this population to confirm its effect on survival. Arch Intern Med. 2010;170(21):1892-1899

Journal ArticleDOI
TL;DR: Enhanced depression care for patients with ACS was associated with greater satisfaction, a greater reduction in depressive symptoms, and a promising improvement in prognosis.
Abstract: Patients with acute coronary syndrome (ACS) (myocardial infarction or unstable angina) who report even subsyndromal levels of depressive symptoms are at increased risk of ACS recurrence or mortality.1,2 This increased risk is observed over many years,3 is largely independent of other known risk factors for coronary heart disease (CHD),4 is strong,5 and has a dose-response association.6 The risk is particularly high for those whose depressive symptoms persist7 or are refractory to treatment.8,9 Although the association is not found in every study10 or with every ACS patient subgroup,11 systematic reviews,1,2,12 recent international data,13 and other accumulating research indicate that depression is a marker of increased risk of CHD events and mortality in this patient population. There have been calls for depression to be recognized as a risk marker14 and recommendations that patients with CHD be regularly screened for depression and be referred for treatment.15 However, we do not know whether patients with CHD and depressive symptoms, including many with subsyndromal symptoms, should be treated. Screening for a reliable CHD risk marker without clear evidence of how to successfully treat the risk can be problematic.16 In the case of depression, the suffering associated with the disorder is arguably sufficient justification for treatment. Given the strength of the observational evidence, however, there have been surprisingly few trials to determine whether depression can be successfully treated in patients with ACS and the risk of ACS recurrence or mortality mitigated. The first sufficiently powered trial (Enhancing Recovery in Coronary Heart Disease [ENRICHD]; conducted in 2481 patients) to test this question found a significant but modest reduction in depressive symptoms but no mortality difference between cognitive behavioral depression therapy and usual care.17 A second trial (Myocardial Infarction and Depression–Intervention Trial [MIND-IT]; conducted in 331 patients) also found significant improvements in depression but no difference in the cardiac event rate between antidepressant treatment and usual care.18 These results were disappointing because the Sertraline Antidepressant Heart Attack Randomized Trial (conducted in 369 patients), although powered only for safety, had shown a promising trend for 6-month sertraline hydrochloride use to reduce the risk of severe cardiovascular events compared with placebo.19 Other small trials20 and a post hoc, post-randomization responder analysis of the ENRICHD trial21 showed similar results. Given these few trials, we do not yet know whether reducing depressive symptoms improves medical prognosis in patients with ACS. The Coronary Psychosocial Evaluation Studies (COPES) intervention trial was designed to address several reasons why previous trials may not have led to greater reductions in depressive symptoms and improvements in medical prognosis. First, the COPES trial sought to better target at-risk patients by using a 3-month observation period after ACS to eliminate patients whose symptoms spontaneously remit or respond to usual care. This strategy identifies patients with persistently elevated depressive symptoms rather than those with a diagnosis of major depressive disorder only. Second, the COPES trial adopted an approach to depression care similar to that used for the Improving Mood–Promoting Access to Collaborative Treatment (IMPACT) trial,22 including stepped care and patient preference.
This approach, tailored to patients with ACS, is designed to increase the acceptance of and satisfaction with depression treatment in this population because treatment acceptance has been low in previous trials.23 We hypothesized that the COPES intervention would result in greater satisfaction with depression care and improved depressive symptoms. We also compared the rates of major adverse cardiac events (MACEs) and mortality of the depressed patients in the intervention and usual care groups with those of an observational cohort of persistently nondepressed but otherwise medically eligible patients.

Journal ArticleDOI
TL;DR: Toxic concentrations of selenium in a liquid dietary supplement resulted in a widespread outbreak; had the manufacturers been held to standards used in the pharmaceutical industry, it may have been prevented.
Abstract: Background Selenium is an element necessary for normal cellular function, but it can have toxic effects at high doses. We investigated an outbreak of acute selenium poisoning. Methods A case was defined as the onset of symptoms of selenium toxicity in a person within 2 weeks after ingesting a dietary supplement manufactured by “Company A,” purchased after January 1, 2008. We conducted case finding, administered initial and 90-day follow-up questionnaires to affected persons, and obtained laboratory data where available. Results The source of the outbreak was identified as a liquid dietary supplement that contained 200 times the labeled concentration of selenium. Of 201 cases identified in 10 states, 1 person was hospitalized. The median estimated dose of selenium consumed was 41 749 μg/d (recommended dietary allowance is 55 μg/d). Frequently reported symptoms included diarrhea (78%), fatigue (75%), hair loss (72%), joint pain (70%), nail discoloration or brittleness (61%), and nausea (58%). Symptoms persisting 90 days or longer included fingernail discoloration and loss (52%), fatigue (35%), and hair loss (29%). The mean initial serum selenium concentration of 8 patients was 751 μg/L (reference range, ≤125 μg/L). The mean initial urine selenium concentration of 7 patients was 166 μg/24 h (reference range, ≤55 μg/24 h). Conclusions Toxic concentrations of selenium in a liquid dietary supplement resulted in a widespread outbreak. Had the manufacturers been held to standards used in the pharmaceutical industry, it may have been prevented.

Journal ArticleDOI
TL;DR: The risk of MI was increased by cumulative exposure to all the studied PIs except saquinavir and particularly to amprenavir/fosamprenavir with or without ritonavir and lopinavir with ritonavir, whereas the association with abacavir cannot be considered causal.
Abstract: Background: The role of exposure to specific antiretroviral drugs on risk of myocardial infarction in human immunodeficiency virus (HIV)-infected patients is debated in the literature. Methods: To assess whether we confirmed the association between exposure to abacavir and risk of myocardial infarction (MI) and to estimate the impact of exposure to other nucleoside reverse transcriptase inhibitors (NRTIs), protease inhibitors (PIs), and non-NRTIs on risk of MI, we conducted a case-control study nested within the French Hospital Database on HIV. Cases (n=289) were patients who, between January 2000 and December 2006, had a prospectively recorded first definite or probable MI. Up to 5 controls (n=884), matched for age, sex, and clinical center, were selected at random with replacement among patients with no history of MI already enrolled in the database when MI was diagnosed in the corresponding case. Conditional logistic regression models were used to adjust for potential confounders. Results: Short-term/recent exposure to abacavir was associated with an increased risk of MI in the overall sample (odds ratio [OR], 2.01; 95% confidence interval [CI], 1.11-3.64) but not in the subset of matched cases and controls (81%) who did not use cocaine or intravenous drugs (1.27; 0.64-2.49). Cumulative exposure to all PIs except saquinavir was associated with an increased risk of MI, significant for amprenavir/fosamprenavir with or without ritonavir (OR, 1.53; 95% CI, 1.21-1.94 per year) and lopinavir with ritonavir (1.33; 1.09-1.61 per year). Exposure to all non-NRTIs was not associated with risk of MI. Conclusion: The risk of MI was increased by cumulative exposure to all the studied PIs except saquinavir and particularly to amprenavir/fosamprenavir with or without ritonavir and lopinavir with ritonavir, whereas the association with abacavir cannot be considered causal.

Journal ArticleDOI
TL;DR: The risk of recurrence is low if VTE is provoked by surgery, intermediate if provoked by a nonsurgical risk factor, and high if unprovoked, which affects whether patients with VTE should undergo short-term vs indefinite treatment.
Abstract: Background We aimed to determine the risk of recurrence for symptomatic venous thromboembolism (VTE) provoked by different transient risk factors. Data Sources MEDLINE, EMBASE, and Cochrane Collaboration Registry of Randomized Trials databases were searched. Study Selection Prospective cohort studies and randomized trials of patients with a first episode of symptomatic VTE provoked by a transient risk factor and treated for at least 3 months were identified. Data Extraction Number of patients and recurrent VTE during the 0- to 12-month and 0- to 24-month intervals after stopping therapy, study design, and provoking risk factor characteristics were extracted. Data Synthesis Annualized recurrence rates were calculated and pooled across studies. At 24 months, the rate of recurrence was 3.3% per patient-year (11 studies, 2268 patients) for all patients with a transient risk factor, 0.7% per patient-year (3 studies, 248 patients) in the subgroup with a surgical factor, and 4.2% per patient-year (3 studies, 509 patients) in the subgroup with a nonsurgical factor. In the same studies, the rate of recurrence after unprovoked VTE was 7.4% per patient-year. The rate ratio for a nonsurgical compared with a surgical factor was 3.0 and for unprovoked thrombosis compared with a nonsurgical factor was 1.8 at 24 months. Conclusions The risk of recurrence is low if VTE is provoked by surgery, intermediate if provoked by a nonsurgical risk factor, and high if unprovoked. These risks affect whether patients with VTE should undergo short-term vs indefinite treatment.
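The abstract reports recurrence as "% per patient-year." The minimal Python sketch below shows that calculation for two hypothetical subgroups; the event counts and follow-up times are invented and are not the pooled data from the review.

```python
# Minimal sketch of an annualized recurrence rate: recurrent VTE events divided by
# total patient-years of follow-up after stopping anticoagulant therapy.

def annualized_rate(events, patient_years):
    return 100.0 * events / patient_years   # percent per patient-year

# Hypothetical subgroup data (events, patient-years of follow-up off therapy).
surgical_factor = annualized_rate(events=4, patient_years=570)
nonsurgical_factor = annualized_rate(events=21, patient_years=500)
print(round(surgical_factor, 1), round(nonsurgical_factor, 1))
# The paper pools such study-level rates across studies and compares subgroups as rate ratios.
```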

Journal ArticleDOI
TL;DR: The results emphasize the importance of WC as a risk factor for mortality in older adults, regardless of BMI.
Abstract: Methods: We examined the association between WC and mortality among 48 500 men and 56 343 women, 50 years or older, in the Cancer Prevention Study II Nutrition Cohort. A total of 9315 men and 5332 women died between 1997 and the end of follow-up in 2006. Results: After adjustment for BMI and other risk factors, very high levels of WC were associated with an approximately 2-fold higher risk of mortality in men and women (among men, relative risk [RR]=2.02; 95% confidence interval [CI], 1.71-2.39 for WC ≥120 cm compared with <90 cm; among women, RR=2.36; 95% CI, 1.98-2.82 for WC ≥110 cm compared with <75 cm). The WC was positively associated with mortality within all categories of BMI. In men, a 10-cm increase in WC was associated with RRs of 1.16 (95% CI, 1.09-1.23), 1.18 (95% CI, 1.12-1.24), and 1.21 (95% CI, 1.13-1.30) within normal (18.5 to <25), overweight (25 to <30), and obese (≥30) BMI categories, respectively. In women, corresponding RRs were 1.25 (95% CI, 1.18-1.32), 1.15 (95% CI, 1.08-1.22), and 1.13 (95% CI, 1.06-1.20). Conclusion: These results emphasize the importance of WC as a risk factor for mortality in older adults, regardless of BMI.

Journal ArticleDOI
TL;DR: B-type natriuretic peptide-guided therapy reduces all-cause mortality in patients with chronic HF compared with usual clinical care, especially in patients younger than 75 years.
Abstract: Background The use of plasma levels of B-type natriuretic peptides (BNPs) to guide treatment of patients with chronic heart failure (HF) has been investigated in a number of randomized controlled trials (RCTs). However, the benefits of this treatment approach have been uncertain. We therefore performed a meta-analysis to examine the overall effect of BNP-guided drug therapy on cardiovascular outcomes in patients with chronic HF. Methods We identified RCTs by systematic search of manuscripts, abstracts, and databases. Eligible RCTs were those that enrolled more than 20 patients and involved comparison of BNP-guided drug therapy vs usual clinical care of the patient with chronic HF in an outpatient setting. Results Eight RCTs with a total of 1726 patients and with a mean duration of 16 months (range, 3-24 months) were included in the meta-analysis. Overall, there was a significantly lower risk of all-cause mortality (relative risk [RR], 0.76; 95% confidence interval [CI], 0.63-0.91; P = .003) in the BNP-guided therapy group compared with the control group. In the subgroup of patients younger than 75 years, all-cause mortality was also significantly lower in the BNP-guided group (RR, 0.52; 95% CI, 0.33-0.82; P = .005). However, there was no reduction in mortality with BNP-guided therapy in patients 75 years or older (RR, 0.94; 95% CI, 0.71-1.25; P = .70). The risk of all-cause hospitalization and survival free of any hospitalization was not significantly different between groups (RR, 0.82; 95% CI, 0.64-1.05; P = .12 and RR, 1.07; 95% CI, 0.85-1.34; P = .58, respectively). The additional percentage of patients achieving target doses of angiotensin-converting enzyme inhibitors and β-blockers during the course of these trials averaged 21% and 22% in the BNP group and 11.7% and 12.5% in the control group, respectively. Conclusions B-type natriuretic peptide–guided therapy reduces all-cause mortality in patients with chronic HF compared with usual clinical care, especially in patients younger than 75 years. A component of this survival benefit may be due to increased use of agents proven to decrease mortality in chronic HF. However, there does not seem to be a reduction in all-cause hospitalization or an increase in survival free of hospitalization using this approach.

Journal ArticleDOI
TL;DR: Findings provide strong support for a positive association between BMI and pancreatic cancer risk and suggest centralized fat distribution may increase pancreatic cancer risk, especially in women.
Abstract: Methods: Pooled data were analyzed from the National Cancer Institute Pancreatic Cancer Cohort Consortium (PanScan) to study the association between prediagnostic anthropometric measures and risk of pancreatic cancer. PanScan applied a nested case-control study design and included 2170 cases and 2209 control subjects. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated using unconditional logistic regression for cohort-specific quartiles of body mass index (BMI [calculated as weight in kilograms divided by height in meters squared]), weight, height, waist circumference, and waist to hip ratio as well as conventional BMI categories (underweight, <18.5; normal weight, 18.5-24.9; overweight, 25.0-29.9; obese, 30.0-34.9; and severely obese, ≥35.0). Models were adjusted for potential confounders. Results: In all of the participants, a positive association between increasing BMI and risk of pancreatic cancer was observed (adjusted OR for the highest vs lowest BMI quartile, 1.33; 95% CI, 1.12-1.58; Ptrend <.001). In men, the adjusted OR for pancreatic cancer for the highest vs lowest quartile of BMI was 1.33 (95% CI, 1.04-1.69; Ptrend = .03), and in women it was 1.34 (95% CI, 1.05-1.70; Ptrend = .01). Increased waist to hip ratio was associated with increased risk of pancreatic cancer in women (adjusted OR for the highest vs lowest quartile, 1.87; 95% CI, 1.31-2.69; Ptrend = .003) but less so in men. Conclusions: These findings provide strong support for a positive association between BMI and pancreatic cancer risk. In addition, centralized fat distribution may increase pancreatic cancer risk, especially in women.
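Since the abstract spells out both the BMI formula and the conventional categories it used, here is a minimal Python sketch of that definition; the example weight and height are arbitrary.

```python
# Minimal sketch of the BMI definition (weight in kilograms divided by height in
# meters squared) and the conventional categories quoted in the abstract.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal weight"
    if value < 30.0:
        return "overweight"
    if value < 35.0:
        return "obese"
    return "severely obese"

b = bmi(95.0, 1.70)                      # example person: 95 kg, 1.70 m
print(round(b, 1), bmi_category(b))      # -> 32.9 obese
```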