
Showing papers in "JAMA Internal Medicine in 2016"


Journal ArticleDOI
TL;DR: Leisure-time physical activity was associated with lower risks of many cancer types, and most of these associations were evident regardless of body size or smoking history, supporting broad generalizability of findings.
Abstract: Importance Leisure-time physical activity has been associated with lower risk of heart disease and all-cause mortality, but its association with risk of cancer is not well understood. Objective To determine the association of leisure-time physical activity with incidence of common types of cancer and whether associations vary by body size and/or smoking. Design, Setting, and Participants We pooled data from 12 prospective US and European cohorts with self-reported physical activity (baseline, 1987-2004). We used multivariable Cox regression to estimate hazard ratios (HRs) and 95% confidence intervals for associations of leisure-time physical activity with incidence of 26 types of cancer. Leisure-time physical activity levels were modeled as cohort-specific percentiles on a continuous basis and cohort-specific results were synthesized by random-effects meta-analysis. Hazard ratios for high vs low levels of activity are based on a comparison of risk at the 90th vs 10th percentiles of activity. The data analysis was performed from January 1, 2014, to June 1, 2015. Exposures Leisure-time physical activity of a moderate to vigorous intensity. Main Outcomes and Measures Incident cancer during follow-up. Results A total of 1.44 million participants (median [range] age, 59 [19-98] years; 57% female) and 186 932 cancers were included. 
High vs low levels of leisure-time physical activity were associated with lower risks of 13 cancers: esophageal adenocarcinoma (HR, 0.58; 95% CI, 0.37-0.89), liver (HR, 0.73; 95% CI, 0.55-0.98), lung (HR, 0.74; 95% CI, 0.71-0.77), kidney (HR, 0.77; 95% CI, 0.70-0.85), gastric cardia (HR, 0.78; 95% CI, 0.64-0.95), endometrial (HR, 0.79; 95% CI, 0.68-0.92), myeloid leukemia (HR, 0.80; 95% CI, 0.70-0.92), myeloma (HR, 0.83; 95% CI, 0.72-0.95), colon (HR, 0.84; 95% CI, 0.77-0.91), head and neck (HR, 0.85; 95% CI, 0.78-0.93), rectal (HR, 0.87; 95% CI, 0.80-0.95), bladder (HR, 0.87; 95% CI, 0.82-0.92), and breast (HR, 0.90; 95% CI, 0.87-0.93). Body mass index adjustment modestly attenuated associations for several cancers, but 10 of 13 inverse associations remained statistically significant after this adjustment. Leisure-time physical activity was associated with higher risks of malignant melanoma (HR, 1.27; 95% CI, 1.16-1.40) and prostate cancer (HR, 1.05; 95% CI, 1.03-1.08). Associations were generally similar between overweight/obese and normal-weight individuals. Smoking status modified the association for lung cancer but not other smoking-related cancers. Conclusions and Relevance Leisure-time physical activity was associated with lower risks of many cancer types. Health care professionals counseling inactive adults should emphasize that most of these associations were evident regardless of body size or smoking history, supporting broad generalizability of findings.
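The cohort-specific results were synthesized by random-effects meta-analysis; as a rough illustration of that pooling step, here is a DerSimonian-Laird sketch applied to hypothetical cohort-level log hazard ratios (the estimates and standard errors below are invented for illustration, not the study's data):

```python
import math

def pool_random_effects(log_hrs, ses):
    """DerSimonian-Laird random-effects pooling of cohort-specific
    log hazard ratios; returns the pooled HR and its 95% CI."""
    w = [1 / se**2 for se in ses]                                # fixed-effect weights
    fixed = sum(wi * b for wi, b in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (b - fixed)**2 for wi, b in zip(w, log_hrs))    # Cochran's Q
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # between-cohort variance
    w_star = [1 / (se**2 + tau2) for se in ses]                  # random-effects weights
    pooled = sum(wi * b for wi, b in zip(w_star, log_hrs)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    hr = math.exp(pooled)
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    return hr, ci

# hypothetical cohort-specific HRs (90th vs 10th activity percentile)
log_hrs = [math.log(0.80), math.log(0.74), math.log(0.85), math.log(0.78)]
ses = [0.06, 0.05, 0.08, 0.07]
hr, (lo, hi) = pool_random_effects(log_hrs, ses)
print(f"pooled HR {hr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # pooled HR 0.78 (95% CI, 0.73-0.83)
```

When heterogeneity is small (Q below its degrees of freedom), the between-cohort variance estimate is truncated at zero and the pooled result collapses to the fixed-effect estimate, as happens with these inputs.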

912 citations


Journal ArticleDOI
TL;DR: Retrospective analysis of administrative health claims to determine the association between chronic opioid use and surgery among privately insured patients between January 1, 2001, and December 31, 2013 found that male sex, age older than 50 years, and preoperative history of drug abuse, alcohol abuse, depression, benzodiazepine use, or antidepressant use were associated with chronic opioid use among surgical patients.
Abstract: Importance Chronic opioid use imposes a substantial burden in terms of morbidity and economic costs. Whether opioid-naive patients undergoing surgery are at increased risk for chronic opioid use is unknown, as are the potential risk factors for chronic opioid use following surgery. Objective To characterize the risk of chronic opioid use among opioid-naive patients following 1 of 11 surgical procedures compared with nonsurgical patients. Design, Setting, and Participants Retrospective analysis of administrative health claims to determine the association between chronic opioid use and surgery among privately insured patients between January 1, 2001, and December 31, 2013. The data included 11 surgical procedures (total knee arthroplasty [TKA], total hip arthroplasty, laparoscopic cholecystectomy, open cholecystectomy, laparoscopic appendectomy, open appendectomy, cesarean delivery, functional endoscopic sinus surgery [FESS], cataract surgery, transurethral prostate resection [TURP], and simple mastectomy). Multivariable logistic regression analysis was performed to control for possible confounders, including sex, age, preoperative history of depression, psychosis, drug or alcohol abuse, and preoperative use of benzodiazepines, antipsychotics, and antidepressants. Exposures One of the 11 study surgical procedures. Main Outcomes and Measures Chronic opioid use, defined as having filled 10 or more prescriptions or more than 120 days’ supply of an opioid in the first year after surgery, excluding the first 90 postoperative days. For nonsurgical patients, chronic opioid use was defined as having filled 10 or more prescriptions or more than 120 days’ supply following a randomly assigned “surgery date.” Results The study included 641 941 opioid-naive surgical patients (169 666 men; mean [SD] age, 44.0 [12.8] years), and 18 011 137 opioid-naive nonsurgical patients (8 849 107 men; mean [SD] age, 42.4 [12.6] years). 
Among the surgical patients, the incidence of chronic opioid use in the first postoperative year ranged from 0.119% for cesarean delivery (95% CI, 0.104%-0.134%) to 1.41% for TKA (95% CI, 1.29%-1.53%). The baseline incidence of chronic opioid use among the nonsurgical patients was 0.136% (95% CI, 0.134%-0.137%). Except for cataract surgery, laparoscopic appendectomy, FESS, and TURP, all of the surgical procedures were associated with an increased risk of chronic opioid use, with odds ratios ranging from 1.28 (95% CI, 1.12-1.46) for cesarean delivery to 5.10 (95% CI, 4.67-5.58) for TKA. Male sex, age older than 50 years, and preoperative history of drug abuse, alcohol abuse, depression, benzodiazepine use, or antidepressant use were associated with chronic opioid use among surgical patients. Conclusions and Relevance In opioid-naive patients, many surgical procedures are associated with an increased risk of chronic opioid use in the postoperative period. A certain subset of patients (eg, men, elderly patients) may be particularly vulnerable.
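The study's chronic-use definition (10 or more fills, or more than 120 days' supply, in the first postoperative year excluding the first 90 days) is straightforward to operationalize. A minimal sketch, assuming a simplified claims record of (fill date, days' supply) tuples rather than the study's actual claims schema:

```python
from datetime import date, timedelta

def is_chronic_opioid_use(surgery_date, fills):
    """Apply the study's chronic-use definition to one patient's opioid
    fills during postoperative days 91-365: >=10 prescriptions filled,
    or >120 total days' supply dispensed. `fills` is a list of
    (fill_date, days_supply) tuples (an illustrative schema)."""
    window_start = surgery_date + timedelta(days=91)
    window_end = surgery_date + timedelta(days=365)
    in_window = [(d, s) for d, s in fills if window_start <= d <= window_end]
    n_fills = len(in_window)
    total_supply = sum(s for _, s in in_window)
    return n_fills >= 10 or total_supply > 120

# hypothetical patient: nine 30-day fills starting on postoperative day 100
fills = [(date(2013, 1, 1) + timedelta(days=100 + 30 * i), 30) for i in range(9)]
print(is_chronic_opioid_use(date(2013, 1, 1), fills))  # True (270 days' supply > 120)
```

Excluding the first 90 days, as the authors do, avoids counting routine short-term postoperative analgesia as chronic use.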

824 citations


Journal ArticleDOI
TL;DR: Mobile phone text messaging approximately doubles the odds of medication adherence in adults with chronic disease, and this increase translates into adherence rates improving from 50% to 67.8%, or an absolute increase of 17.8%.
Abstract: Importance Adherence to long-term therapies in chronic disease is poor. Traditional interventions to improve adherence are complex and not widely effective. Mobile telephone text messaging may be a scalable means to support medication adherence. Objectives To conduct a meta-analysis of randomized clinical trials to assess the effect of mobile telephone text messaging on medication adherence in chronic disease. Data Sources MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, PsycINFO, and CINAHL (from database inception to January 15, 2015), as well as reference lists of the articles identified. The data were analyzed in March 2015. Study Selection Randomized clinical trials evaluating a mobile telephone text message intervention to promote medication adherence in adults with chronic disease. Data Extraction Two authors independently extracted information on study characteristics, text message characteristics, and outcome measures as per the predefined protocol. Main Outcomes and Measures Odds ratios and pooled data were calculated using random-effects models. Risk of bias and study quality were assessed as per Cochrane guidelines. Disagreement was resolved by consensus. Results Sixteen randomized clinical trials were included, with 5 of 16 using personalization, 8 of 16 using 2-way communication, and 8 of 16 using a daily text message frequency. The median intervention duration was 12 weeks, and self-report was the most commonly used method to assess medication adherence. In the pooled analysis of 2742 patients (median age, 39 years; 50.3% [1380 of 2742] female), text messaging significantly improved medication adherence (odds ratio, 2.11; 95% CI, 1.52-2.93). There was moderate heterogeneity (I² = 62%) across clinical trials. After adjustment for publication bias, the point estimate was reduced but remained positive for an intervention effect (odds ratio, 1.68; 95% CI, 1.18-2.39). 
Conclusions and Relevance Mobile phone text messaging approximately doubles the odds of medication adherence. This increase translates into adherence rates improving from 50% (assuming this baseline rate in patients with chronic disease) to 67.8%, or an absolute increase of 17.8%. While promising, these results should be interpreted with caution given the short duration of trials and reliance on self-reported medication adherence measures. Future studies need to determine the features of text message interventions that improve success, as well as appropriate patient populations, sustained effects, and influences on clinical outcomes.
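The 50% to 67.8% conversion follows from applying the pooled odds ratio to the baseline odds. A quick check of that arithmetic:

```python
def or_to_risk(baseline_risk, odds_ratio):
    """Convert an odds ratio to the absolute event rate it implies at a
    given baseline rate (the step behind the abstract's 50% -> 67.8%)."""
    odds = baseline_risk / (1 - baseline_risk)  # 0.50 -> odds of 1
    new_odds = odds * odds_ratio                # 1 * 2.11 = 2.11
    return new_odds / (1 + new_odds)            # 2.11 / 3.11

p = or_to_risk(0.50, 2.11)
print(f"{p:.1%}")  # 67.8%
```

Note that an odds ratio only "approximately doubles" the probability near a 50% baseline; at other baseline rates the absolute change implied by the same odds ratio is smaller.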

527 citations


Journal ArticleDOI
TL;DR: Proton pump inhibitor use was associated with incident CKD in unadjusted analysis and in analysis adjusted for demographic, socioeconomic, and clinical variables and future research should evaluate whether limiting PPI use reduces the incidence of CKD.
Abstract: Importance Proton pump inhibitors (PPIs) are among the most commonly used drugs worldwide and have been linked to acute interstitial nephritis. Less is known about the association between PPI use and chronic kidney disease (CKD). Objective To quantify the association between PPI use and incident CKD in a population-based cohort. Design, Setting, and Participants In total, 10 482 participants in the Atherosclerosis Risk in Communities study with an estimated glomerular filtration rate of at least 60 mL/min/1.73 m² were followed from a baseline visit between February 1, 1996, and January 30, 1999, to December 31, 2011. The data were analyzed from May 2015 to October 2015. The findings were replicated in an administrative cohort of 248 751 patients with an estimated glomerular filtration rate of at least 60 mL/min/1.73 m² from the Geisinger Health System. Exposures Self-reported PPI use in the Atherosclerosis Risk in Communities study or an outpatient PPI prescription in the Geisinger Health System replication cohort. Histamine-2 (H2) receptor antagonist use was considered a negative control and active comparator. Main Outcomes and Measures Incident CKD was defined using diagnostic codes at hospital discharge or death in the Atherosclerosis Risk in Communities Study, and by a sustained outpatient estimated glomerular filtration rate of less than 60 mL/min/1.73 m² in the Geisinger Health System replication cohort. Results Among 10 482 participants in the Atherosclerosis Risk in Communities study, the mean (SD) age was 63.0 (5.6) years, and 43.9% were male. Compared with nonusers, PPI users were more often of white race, obese, and taking antihypertensive medication. 
Proton pump inhibitor use was associated with incident CKD in unadjusted analysis (hazard ratio [HR], 1.45; 95% CI, 1.11-1.90); in analysis adjusted for demographic, socioeconomic, and clinical variables (HR, 1.50; 95% CI, 1.14-1.96); and in analysis with PPI ever use modeled as a time-varying variable (adjusted HR, 1.35; 95% CI, 1.17-1.55). The association persisted when baseline PPI users were compared directly with H2 receptor antagonist users (adjusted HR, 1.39; 95% CI, 1.01-1.91) and with propensity score–matched nonusers (HR, 1.76; 95% CI, 1.13-2.74). In the Geisinger Health System replication cohort, PPI use was associated with CKD in all analyses, including a time-varying new-user design (adjusted HR, 1.24; 95% CI, 1.20-1.28). Twice-daily PPI dosing (adjusted HR, 1.46; 95% CI, 1.28-1.67) was associated with a higher risk than once-daily dosing (adjusted HR, 1.15; 95% CI, 1.09-1.21). Conclusions and Relevance Proton pump inhibitor use is associated with a higher risk of incident CKD. Future research should evaluate whether limiting PPI use reduces the incidence of CKD.

511 citations


Journal ArticleDOI
TL;DR: The incidence of HIV acquisition was extremely low despite a high incidence of STIs in a large US PrEP demonstration project, and adherence was higher among those participants who reported more risk behaviors.
Abstract: Importance Several randomized clinical trials have demonstrated the efficacy of preexposure prophylaxis (PrEP) in preventing human immunodeficiency virus (HIV) acquisition. Little is known about adherence to the regimen, sexual practices, and overall effectiveness when PrEP is implemented in clinics that treat sexually transmitted infections (STIs) and community-based clinics serving men who have sex with men (MSM). Objective To assess PrEP adherence, sexual behaviors, and the incidence of STIs and HIV infection in a cohort of MSM and transgender women initiating PrEP in the United States. Design, Setting, and Participants Demonstration project conducted from October 1, 2012, through February 10, 2015 (last date of follow-up), among 557 MSM and transgender women in 2 STI clinics in San Francisco, California, and Miami, Florida, and a community health center in Washington, DC. Data were analyzed from December 18, 2014, through August 8, 2015. Interventions A combination of daily, oral tenofovir disoproxil fumarate and emtricitabine was provided free of charge for 48 weeks. All participants received HIV testing, brief client-centered counseling, and clinical monitoring. Main Outcomes and Measures Concentrations of tenofovir diphosphate in dried blood spot samples, self-reported numbers of anal sex partners and episodes of condomless receptive anal sex, and incidence of STI and HIV acquisition. Results Overall, 557 participants initiated PrEP, and 437 of these (78.5%) were retained through 48 weeks. Based on the findings from the 294 participants who underwent measurement of tenofovir diphosphate levels, 80.0% to 85.6% had protective levels (consistent with ≥4 doses/wk) at follow-up visits. African American participants (56.8% of visits; P = .003) and those from the Miami site (65.1% of visits; P = .02) were less likely to have protective levels, whereas those reporting at least 2 condomless anal sex partners in the past 3 months (88.6%; P = .01) were more likely to have protective levels. 
The mean number of anal sex partners declined during follow-up from 10.9 to 9.3, whereas the proportion engaging in condomless receptive anal sex remained stable at 65.5% to 65.6%. Overall STI incidence was high (90 per 100 person-years) but did not increase over time. Two individuals became HIV infected during follow-up (HIV incidence, 0.43 [95% CI, 0.05-1.54] infections per 100 person-years); both had tenofovir diphosphate levels consistent with fewer than 2 doses/wk at seroconversion. Conclusions and Relevance The incidence of HIV acquisition was extremely low despite a high incidence of STIs in a large US PrEP demonstration project. Adherence was higher among those participants who reported more risk behaviors. Interventions that address racial and geographic disparities and housing instability may increase the impact of PrEP.
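The wide confidence interval around the HIV incidence reflects the exact (Garwood) Poisson interval that is standard for rates with very few events. A sketch that approximately reproduces the abstract's figures, assuming roughly 465 person-years of follow-up (inferred here from 2 events at 0.43 per 100 person-years; the study does not report person-time directly):

```python
from scipy.stats import chi2

def poisson_rate_ci(events, person_years, per=100, alpha=0.05):
    """Exact (Garwood) Poisson confidence interval for an incidence
    rate, expressed per `per` person-years."""
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    rate = events / person_years * per
    return rate, lower / person_years * per, upper / person_years * per

rate, lo, hi = poisson_rate_ci(2, 465)  # 465 PY is an inferred, not reported, value
print(f"{rate:.2f} ({lo:.2f}-{hi:.2f}) per 100 person-years")  # 0.43 (0.05-1.55) per 100 person-years
```

The small difference from the published upper bound (1.54) comes from rounding the inferred person-time; the point is that 2 events over a few hundred person-years necessarily yield an interval spanning more than an order of magnitude.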

511 citations


Journal ArticleDOI
TL;DR: Among physicians with faculty appointments at 24 US public medical schools, significant sex differences in salary exist even after accounting for age, experience, specialty, faculty rank, and measures of research productivity and clinical revenue.
Abstract: Importance Limited evidence exists on salary differences between male and female academic physicians, largely owing to difficulty obtaining data on salary and factors influencing salary. Existing studies have been limited by reliance on survey-based approaches to measuring sex differences in earnings, lack of contemporary data, small sample sizes, or limited geographic representation. Objective To analyze sex differences in earnings among US academic physicians. Design, Setting, and Participants Freedom of Information laws mandate release of salary information of public university employees in several states. In 12 states with salary information published online, salary data were extracted on 10 241 academic physicians at 24 public medical schools. These data were linked to a unique physician database with detailed information on sex, age, years of experience, faculty rank, specialty, scientific authorship, National Institutes of Health funding, clinical trial participation, and Medicare reimbursements (proxy for clinical revenue). Sex differences in salary were estimated after adjusting for these factors. Exposures Physician sex. Main Outcomes and Measures Annual salary. Results Among 10 241 physicians, female physicians (n = 3549) had lower mean (SD) unadjusted salaries than male physicians ($206 641 [$88 238] vs $257 957 [$137 202]; absolute difference, $51 315 [95% CI, $46 330-$56 301]). Sex differences persisted after multivariable adjustment ($227 783 [95% CI, $224 117-$231 448] vs $247 661 [95% CI, $245 065-$250 258] with an absolute difference of $19 878 [95% CI, $15 261-$24 495]). Sex differences in salary varied across specialties, institutions, and faculty ranks. For example, adjusted salaries of female full professors ($250 971 [95% CI, $242 307-$259 635]) were comparable to those of male associate professors ($247 212 [95% CI, $241 850-$252 575]). 
Among specialties, adjusted salaries were highest in orthopedic surgery ($358 093 [95% CI, $344 354-$371 831]), surgical subspecialties ($318 760 [95% CI, $311 030-$326 491]), and general surgery ($302 666 [95% CI, $294 060-$311 272]) and lowest in infectious disease, family medicine, and neurology. Conclusions and Relevance Among physicians with faculty appointments at 24 US public medical schools, significant sex differences in salary exist even after accounting for age, experience, specialty, faculty rank, and measures of research productivity and clinical revenue.

492 citations


Journal ArticleDOI
TL;DR: High animal protein intake was positively associated with cardiovascular mortality and high plant protein intake was inversely associated with all-cause and cardiovascular mortality, especially among individuals with at least 1 lifestyle risk factor.
Abstract: Importance Defining what represents a macronutritionally balanced diet remains an open question and a high priority in nutrition research. Although the amount of protein may have specific effects, from a broader dietary perspective, the choice of protein sources will inevitably influence other components of diet and may be a critical determinant for the health outcome. Objective To examine the associations of animal and plant protein intake with the risk for mortality. Design, Setting, and Participants This prospective cohort study of US health care professionals included 131 342 participants from the Nurses’ Health Study (1980 to end of follow-up on June 1, 2012) and Health Professionals Follow-up Study (1986 to end of follow-up on January 31, 2012). Animal and plant protein intake was assessed by regularly updated validated food frequency questionnaires. Data were analyzed from June 20, 2014, to January 18, 2016. Main Outcomes and Measures Hazard ratios (HRs) for all-cause and cause-specific mortality. Results Of the 131 342 participants, 85 013 were women (64.7%) and 46 329 were men (35.3%) (mean [SD] age, 49 [9] years). The median protein intake, as assessed by percentage of energy, was 14% for animal protein (5th-95th percentile, 9%-22%) and 4% for plant protein (5th-95th percentile, 2%-6%). After adjusting for major lifestyle and dietary risk factors, animal protein intake was not associated with all-cause mortality (HR, 1.02 per 10% energy increment; 95% CI, 0.98-1.05; P for trend = .33) but was associated with higher cardiovascular mortality (HR, 1.08 per 10% energy increment; 95% CI, 1.01-1.16; P for trend = .04). Plant protein was associated with lower all-cause mortality (HR, 0.90 per 3% energy increment; 95% CI, 0.86-0.95) and with lower cardiovascular mortality (P for trend = .007). 
These associations were confined to participants with at least 1 unhealthy lifestyle factor based on smoking, heavy alcohol intake, overweight or obesity, and physical inactivity, but not evident among those without any of these risk factors. Replacing animal protein of various origins with plant protein was associated with lower mortality. In particular, the HRs for all-cause mortality were 0.66 (95% CI, 0.59-0.75) when 3% of energy from plant protein was substituted for an equivalent amount of protein from processed red meat, 0.88 (95% CI, 0.84-0.92) from unprocessed red meat, and 0.81 (95% CI, 0.75-0.88) from egg. Conclusions and Relevance High animal protein intake was positively associated with cardiovascular mortality and high plant protein intake was inversely associated with all-cause and cardiovascular mortality, especially among individuals with at least 1 lifestyle risk factor. Substitution of plant protein for animal protein, especially that from processed red meat, was associated with lower mortality, suggesting the importance of protein source.

454 citations


Journal ArticleDOI
TL;DR: The use of prescription medications and dietary supplements, and concurrent use of interacting medications, has increased since 2005, with 15% of older adults potentially at risk for a major drug-drug interaction.
Abstract: Importance Prescription and over-the-counter medicines and dietary supplements are commonly used, alone and together, among older adults. However, the effect of recent regulatory and market forces on these patterns is not known. Objectives To characterize changes in the prevalence of medication use, including concurrent use of prescription and over-the-counter medications and dietary supplements, and to quantify the frequency and types of potential major drug-drug interactions. Design, Setting, and Participants Descriptive analyses of a longitudinal, nationally representative sample of community-dwelling older adults 62 to 85 years old. In-home interviews with direct medication inspection were conducted in 2005-2006 and again in 2010-2011. The dates of the analysis were March to November 2015. We defined medication use as the use of at least 1 prescription or over-the-counter medication or dietary supplement at least daily or weekly and defined concurrent use as the regular use of at least 2 medications. We used Micromedex to identify potential major drug-drug interactions. Main Outcomes and Measures Population estimates of the prevalence of medication use (in aggregate and by therapeutic class), concurrent use, and major drug-drug interactions. Results The study cohort comprised 2351 participants in 2005-2006 and 2206 in 2010-2011. Their mean age was 70.9 years in 2005-2006 and 71.4 years in 2010-2011. Fifty-three percent of participants were female in 2005-2006, and 51.6% were female in 2010-2011. The use of at least 1 prescription medication slightly increased from 84.1% in 2005-2006 to 87.7% in 2010-2011 ( P = .003). Concurrent use of at least 5 prescription medications increased from 30.6% to 35.8% ( P = .02). 
While the use of over-the-counter medications declined from 44.4% to 37.9%, the use of dietary supplements increased from 51.8% to 63.7%. Conclusions and Relevance In this study, the use of prescription medications and dietary supplements, and concurrent use of interacting medications, has increased since 2005, with 15% of older adults potentially at risk for a major drug-drug interaction. Improving safety with the use of multiple medications has the potential to reduce preventable adverse drug events associated with medications commonly used among older adults.
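The interaction screen the study describes, checking each participant's regimen against a reference of potential major drug-drug interactions, amounts to a pairwise set lookup. Micromedex content is proprietary, so the interaction pairs below are illustrative stand-ins, not its actual data:

```python
from itertools import combinations

# hypothetical stand-in for a major drug-drug interaction reference
MAJOR_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),
    frozenset({"simvastatin", "amiodarone"}),
    frozenset({"clopidogrel", "omeprazole"}),
}

def has_major_interaction(regimen):
    """True if any pair of drugs in the regimen appears in the
    major-interaction reference set."""
    return any(frozenset(pair) in MAJOR_INTERACTIONS
               for pair in combinations(regimen, 2))

print(has_major_interaction(["warfarin", "aspirin", "lisinopril"]))  # True
```

Storing pairs as frozensets makes the lookup order-independent, so "warfarin + aspirin" and "aspirin + warfarin" hit the same entry.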

448 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluated the effectiveness of telemonitoring in reducing 180-day all-cause readmissions among a broad population of older adults hospitalized with heart failure in 6 academic medical centers in California.
Abstract: Importance It remains unclear whether telemonitoring approaches provide benefits for patients with heart failure (HF) after hospitalization. Objective To evaluate the effectiveness of a care transition intervention using remote patient monitoring in reducing 180-day all-cause readmissions among a broad population of older adults hospitalized with HF. Design, Setting, and Participants We randomized 1437 patients hospitalized for HF between October 12, 2011, and September 30, 2013, to the intervention arm (715 patients) or to the usual care arm (722 patients) of the Better Effectiveness After Transition–Heart Failure (BEAT-HF) study and observed them for 180 days. The dates of our study analysis were March 30, 2014, to October 1, 2015. The setting was 6 academic medical centers in California. Participants were hospitalized individuals 50 years or older who received active treatment for decompensated HF. Interventions The intervention combined health coaching telephone calls and telemonitoring. Telemonitoring used electronic equipment that collected daily information about blood pressure, heart rate, symptoms, and weight. Centralized registered nurses conducted telemonitoring reviews, protocolized actions, and telephone calls. Main Outcomes and Measures The primary outcome was readmission for any cause within 180 days after discharge. Secondary outcomes were all-cause readmission within 30 days, all-cause mortality at 30 and 180 days, and quality of life at 30 and 180 days. Results Among 1437 participants, the median age was 73 years. Overall, 46.2% (664 of 1437) were female, and 22.0% (316 of 1437) were African American. The intervention and usual care groups did not differ significantly in readmissions for any cause 180 days after discharge, which occurred in 50.8% (363 of 715) and 49.2% (355 of 722) of patients, respectively (adjusted hazard ratio, 1.03; 95% CI, 0.88-1.20; P = .74). 
In secondary analyses, there were no significant differences in 30-day readmission or 180-day mortality, but there was a significant difference in 180-day quality of life between the intervention and usual care groups. No adverse events were reported. Conclusions and Relevance Among patients hospitalized for HF, combined health coaching telephone calls and telemonitoring did not reduce 180-day readmissions. Trial Registration clinicaltrials.gov Identifier:NCT01360203

446 citations


Journal ArticleDOI
TL;DR: In the second year of expansion, Kentucky's Medicaid program and Arkansas's private option were associated with significant increases in outpatient utilization, preventive care, and improved health care quality; reductions in emergency department use; and improved self-reported health.
Abstract: Importance Under the Affordable Care Act (ACA), more than 30 states have expanded Medicaid, with some states choosing to expand private insurance instead (the “private option”). In addition, while coverage gains from the ACA’s Medicaid expansion are well documented, impacts on utilization and health are unclear. Objective To assess changes in access to care, utilization, and self-reported health among low-income adults in 3 states taking alternative approaches to the ACA. Design, Setting, and Participants Differences-in-differences analysis of survey data from November 2013 through December 2015 of US citizens ages 19 to 64 years with incomes below 138% of the federal poverty level in Kentucky, Arkansas, and Texas (n = 8676). Data analysis was conducted between January and May 2016. Exposures Medicaid expansion in Kentucky and use of Medicaid funds to purchase private insurance for low-income adults in Arkansas (private option), compared with no expansion in Texas. Main Outcomes and Measures Self-reported access to primary care, specialty care, and medications; affordability of care; outpatient, inpatient, and emergency utilization; receiving glucose and cholesterol testing, annual check-up, and care for chronic conditions; quality of care, depression score, and overall health. Results Among the 3 states included in the study, Arkansas (n = 2890), Kentucky (n = 2898), and Texas (n = 2888), there were no differences in sex, income, or marital status. Respondents from Texas were younger, more urban, and disproportionately Latino compared with those in Arkansas and Kentucky. Significant changes in coverage and access were more apparent in 2015 than in 2014. By 2015, expansion was associated with a 22.7 percentage-point reduction in the uninsured rate compared with nonexpansion, a reduced likelihood of emergency department visits (−6.0 percentage points; P = .04), and increased outpatient visits (0.69 visits per year; P = .04). 
Screening for diabetes (6.3 percentage points; P = .05), glucose testing among patients with diabetes (10.7 percentage points; P = .03), and regular care for chronic conditions (12.0 percentage points; P = .008) all increased significantly after expansion. Quality of care ratings improved significantly (−7.1 percentage points with “fair/poor quality of care”; P = .03), as did the share of adults reporting excellent health (4.8 percentage points; P = .04). Comparisons of Arkansas vs Kentucky showed increased private coverage in the former (21.7 percentage points), but no other statistically significant differences. Conclusions and Relevance In the second year of expansion, Kentucky’s Medicaid program and Arkansas’s private option were associated with significant increases in outpatient utilization, preventive care, and improved health care quality; reductions in emergency department use; and improved self-reported health. Aside from the type of coverage obtained, results were similar on nearly all other outcomes between the 2 states using alternative approaches to expansion.
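The study's differences-in-differences design compares each outcome's pre-post change in the expansion states against the same change in Texas. A minimal worked example on made-up uninsured rates (hypothetical numbers, not the paper's data):

```python
# made-up uninsured shares by group and year, for illustration only
rates = {
    ("expansion", 2013): 0.40, ("expansion", 2015): 0.12,
    ("texas",     2013): 0.42, ("texas",     2015): 0.37,
}

change_expansion = rates[("expansion", 2015)] - rates[("expansion", 2013)]  # -0.28
change_texas = rates[("texas", 2015)] - rates[("texas", 2013)]              # -0.05
# subtracting the control-state trend isolates the expansion-attributable change
did = change_expansion - change_texas
print(f"DiD estimate: {did:+.1%}")  # DiD estimate: -23.0%
```

The subtraction removes any secular trend common to both groups (here, the 5-point drop seen even in the non-expansion state), which is the identifying assumption behind the study's 22.7 percentage-point estimate.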

426 citations


Journal ArticleDOI
TL;DR: Although higher monthly doses of vitamin D were effective in reaching a threshold of at least 30 ng/mL of 25-hydroxyvitamin D, they had no benefit on lower extremity function and were associated with increased risk of falls compared with 24 000 IU.
Abstract: Importance Vitamin D deficiency has been associated with poor physical performance. Objective To determine the effectiveness of high-dose vitamin D in lowering the risk of functional decline. Design, Setting, and Participants One-year, double-blind, randomized clinical trial conducted in Zurich, Switzerland. The screening phase was December 1, 2009, to May 31, 2010, and the last study visit was in May 2011. The dates of our analysis were June 15, 2012, to October 10, 2015. Participants were 200 community-dwelling men and women 70 years and older with a prior fall. Interventions Three study groups with monthly treatments, including a low-dose control group receiving 24 000 IU of vitamin D 3 (24 000 IU group), a group receiving 60 000 IU of vitamin D 3 (60 000 IU group), and a group receiving 24 000 IU of vitamin D 3 plus 300 μg of calcifediol (24 000 IU plus calcifediol group). Main Outcomes and Measures The primary end point was improving lower extremity function (on the Short Physical Performance Battery) and achieving 25-hydroxyvitamin D levels of at least 30 ng/mL at 6 and 12 months. A secondary end point was monthly reported falls. Analyses were adjusted for age, sex, and body mass index. Results The study cohort comprised 200 participants (men and women ≥70 years with a prior fall). Their mean age was 78 years, 67.0% (134 of 200) were female, and 58.0% (116 of 200) were vitamin D deficient at baseline. Although the higher-dose regimens were more likely to achieve 25-hydroxyvitamin D levels of at least 30 ng/mL (P = .001), they were not more effective in improving lower extremity function, which did not differ among the treatment groups (P = .26). However, over the 12-month follow-up, the incidence of falls differed significantly among the treatment groups, with higher incidences in the 60 000 IU group (66.9%; 95% CI, 54.4%-77.5%) and the 24 000 IU plus calcifediol group (66.1%; 95% CI, 53.5%-76.8%) compared with the 24 000 IU group (47.9%; 95% CI, 35.8%-60.3%) (P = .048). 
Consistent with the incidence of falls, the mean number of falls differed marginally by treatment group. The 60 000 IU group (mean, 1.47) and the 24 000 IU plus calcifediol group (mean, 1.24) had higher mean numbers of falls compared with the 24 000 IU group (mean, 0.94) (P = .09). Conclusions and Relevance Although higher monthly doses of vitamin D were effective in reaching a threshold of at least 30 ng/mL of 25-hydroxyvitamin D, they had no benefit on lower extremity function and were associated with increased risk of falls compared with 24 000 IU. Trial Registration clinicaltrials.gov Identifier: NCT01017354

Journal ArticleDOI
TL;DR: How caregivers' involvement in older adults' health care activities relates to caregiving responsibilities, supportive services use, and caregiving-related effects is examined.
Abstract: Importance Family and unpaid caregivers commonly help older adults who are at high risk for poorly coordinated care. Objective To examine how caregivers’ involvement in older adults’ health care activities relates to caregiving responsibilities, supportive services use, and caregiving-related effects. Design, Setting, and Participants A total of 1739 family and unpaid caregivers of 1171 community-dwelling older adults with disabilities who participated in the 2011 National Health and Aging Trends Study (NHATS) and National Study of Caregiving (NSOC). Main Outcomes and Measures Caregiving-related effects, including emotional, physical, and financial difficulty; participation restrictions in valued activities; and work productivity loss. Exposures Caregivers assisting older adults who provide substantial, some, or no help with health care, defined by coordinating care and managing medications (help with both, either, or neither activity, respectively). Results Based on NHATS and NSOC responses from 1739 family and unpaid caregivers of 1171 older adults with disabilities, weighted estimates were produced that accounted for the sampling designs of each survey. From these weighted estimates, of 14.7 million caregivers assisting 7.7 million older adults, 6.5 million (44.1%) provided substantial help, 4.4 million (29.8%) provided some help, and 3.8 million (26.1%) provided no help with health care. Almost half (45.5%) of the caregivers providing substantial help with health care assisted an older adult with dementia. Caregivers providing substantial help with health care provided more hours of assistance per week than caregivers providing some or no help (28.1 vs 15.1 and 8.3 hours, respectively). Conclusions and Relevance Family caregivers providing substantial assistance with health care experience significant emotional difficulty and role-related effects, yet only one-quarter use supportive services.

Journal ArticleDOI
TL;DR: To study the association between physicians' receipt of industry-sponsored meals, which account for roughly 80% of the total number of industry payments, and rates of prescribing the promoted drug to Medicare beneficiaries, industry payment data and Medicare prescribing records recently became publicly available.
Abstract: Importance The association between industry payments to physicians and prescribing rates of the brand-name medications that are being promoted is controversial. In the United States, industry payment data and Medicare prescribing records recently became publicly available. Objective To study the association between physicians’ receipt of industry-sponsored meals, which account for roughly 80% of the total number of industry payments, and rates of prescribing the promoted drug to Medicare beneficiaries. Design, Setting, and Participants Cross-sectional analysis of industry payment data from the federal Open Payments Program for August 1 through December 31, 2013, and prescribing data for individual physicians from Medicare Part D, for all of 2013. Participants were physicians who wrote Medicare prescriptions in any of 4 drug classes: statins, cardioselective β-blockers, angiotensin-converting enzyme inhibitors and angiotensin-receptor blockers (ACE inhibitors and ARBs), and selective serotonin and serotonin-norepinephrine reuptake inhibitors (SSRIs and SNRIs). We identified physicians who received industry-sponsored meals promoting the most-prescribed brand-name drug in each class (rosuvastatin, nebivolol, olmesartan, and desvenlafaxine, respectively). Data analysis was performed from August 20, 2015, to December 15, 2015. Exposures Receipt of an industry-sponsored meal promoting the drug of interest. Main Outcomes and Measures Prescribing rates of promoted drugs compared with alternatives in the same class, after adjustment for physician prescribing volume, demographic characteristics, specialty, and practice setting. Results A total of 279 669 physicians received 63 524 payments associated with the 4 target drugs. Ninety-five percent of payments were meals, with a mean value of less than $20. 
Rosuvastatin represented 8.8% (SD, 9.9%) of statin prescriptions; nebivolol represented 3.3% (7.4%) of cardioselective β-blocker prescriptions; olmesartan represented 1.6% (3.9%) of ACE inhibitor and ARB prescriptions; and desvenlafaxine represented 0.6% (2.6%) of SSRI and SNRI prescriptions. Physicians who received a single meal promoting the drug of interest had higher rates of prescribing rosuvastatin over other statins (odds ratio [OR], 1.18; 95% CI, 1.17-1.18), nebivolol over other β-blockers (OR, 1.70; 95% CI, 1.69-1.72), olmesartan over other ACE inhibitors and ARBs (OR, 1.52; 95% CI, 1.51-1.53), and desvenlafaxine over other SSRIs and SNRIs (OR, 2.18; 95% CI, 2.13-2.23). Receipt of additional meals and receipt of meals costing more than $20 were associated with higher relative prescribing rates. Conclusions and Relevance Receipt of industry-sponsored meals was associated with an increased rate of prescribing the brand-name medication that was being promoted. The findings represent an association, not a cause-and-effect relationship.
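The prescribing comparisons above are reported as odds ratios from adjusted logistic regression models. The unadjusted quantity they generalize can be computed directly from a 2×2 table; a minimal sketch with hypothetical counts (none of these numbers are from the study):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
                 promoted drug   alternative drug
    meal             a                b
    no meal          c                d
    """
    return (a / b) / (c / d)

# hypothetical counts of prescriptions written
or_meal = odds_ratio(120, 880, 90, 910)  # odds of promoted drug, meal vs no meal
```

The adjusted models in the study additionally condition on prescribing volume, demographics, specialty, and practice setting, so the published ORs are not recoverable from raw counts alone.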

Journal ArticleDOI
TL;DR: The current evidence suggests that exercise alone or in combination with education is effective for preventing low back pain, and other interventions, including education alone, back belts, and shoe insoles do not appear to prevent LBP.
Abstract: Importance Existing guidelines and systematic reviews lack clear recommendations for prevention of low back pain (LBP). Objective To investigate the effectiveness of interventions for prevention of LBP. Data Sources MEDLINE, EMBASE, the Physiotherapy Evidence Database (PEDro), and the Cochrane Central Register of Controlled Trials from inception to November 22, 2014. Study Selection Randomized clinical trials of prevention strategies for nonspecific LBP. Data Extraction and Synthesis Two independent reviewers extracted data and assessed the risk of bias. The PEDro scale was used to evaluate risk of bias. The Grading of Recommendations Assessment, Development, and Evaluation system was used to describe the quality of evidence. Main Outcomes and Measures The primary outcome measure was an episode of LBP, and the secondary outcome measure was an episode of sick leave associated with LBP. We calculated relative risks (RRs) and 95% CIs using random-effects models. Results The literature search identified 6133 potentially eligible studies; of these, 23 published reports (on 21 different randomized clinical trials including 30 850 unique participants) met the inclusion criteria. With results presented as RRs (95% CIs), there was moderate-quality evidence that exercise combined with education reduces the risk of an episode of LBP (0.55 [0.41-0.74]) and low-quality evidence of no effect on sick leave (0.74 [0.44-1.26]). Low- to very low–quality evidence suggested that exercise alone may reduce the risk of both an LBP episode (0.65 [0.50-0.86]) and use of sick leave (0.22 [0.06-0.76]). For education alone, there was moderate- to very low–quality evidence of no effect on LBP (1.03 [0.83-1.27]) or sick leave (0.87 [0.47-1.60]). There was low- to very low–quality evidence that back belts do not reduce the risk of LBP episodes (1.01 [0.71-1.44]) or sick leave (0.87 [0.47-1.60]).
There was low-quality evidence of no protective effect of shoe insoles on LBP (1.01 [0.74-1.40]). Conclusion and Relevance The current evidence suggests that exercise alone or in combination with education is effective for preventing LBP. Other interventions, including education alone, back belts, and shoe insoles, do not appear to prevent LBP. Whether education, training, or ergonomic adjustments prevent sick leave is uncertain because the quality of evidence is low.
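The pooled relative risks above come from random-effects models; the DerSimonian-Laird estimator is the standard approach for this kind of pooling. A minimal sketch with hypothetical per-trial log relative risks and standard errors (not the review's data):

```python
import math

def pool_random_effects(log_rrs, ses):
    """DerSimonian-Laird random-effects pooling of log relative risks.
    Returns the pooled RR and its 95% CI on the ratio scale."""
    w = [1 / se**2 for se in ses]                                   # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_rrs))       # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                   # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]                       # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return (math.exp(pooled),
            (math.exp(pooled - 1.96 * se_pooled),
             math.exp(pooled + 1.96 * se_pooled)))

# three hypothetical trials, each with a protective effect (log RR < 0)
rr, (lo, hi) = pool_random_effects([-0.60, -0.45, -0.75], [0.20, 0.25, 0.30])
```

When the between-study variance estimate tau2 is 0, the result collapses to the fixed-effect (inverse-variance) pooled estimate.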

Journal ArticleDOI
TL;DR: Findings support current dietary recommendations to replace saturated fat and trans-fat with unsaturated fats and different types of dietary fats have divergent associations with total and cause-specific mortality.
Abstract: Importance Previous studies have shown distinct associations between specific dietary fat and cardiovascular disease. However, evidence on specific dietary fat and mortality remains limited and inconsistent. Objective To examine the associations of specific dietary fats with total and cause-specific mortality in 2 large ongoing cohort studies. Design, Setting, and Participants This cohort study investigated 83 349 women from the Nurses’ Health Study (July 1, 1980, to June 30, 2012) and 42 884 men from the Health Professionals Follow-up Study (February 1, 1986, to January 31, 2012) who were free of cardiovascular disease, cancer, and types 1 and 2 diabetes at baseline. Dietary fat intake was assessed at baseline and updated every 2 to 4 years. Information on mortality was obtained from systematic searches of the vital records of states and the National Death Index, supplemented by reports from family members or postal authorities. Data were analyzed from September 18, 2014, to March 27, 2016. Main Outcomes and Measures Total and cause-specific mortality. Results During 3 439 954 person-years of follow-up, 33 304 deaths were documented. After adjustment for known and suspected risk factors, dietary total fat compared with total carbohydrates was inversely associated with total mortality (hazard ratio [HR] comparing extreme quintiles, 0.84; 95% CI, 0.81-0.88). When fat subtypes were compared with carbohydrates, trans-fat and saturated fat intakes were associated with higher total mortality, whereas unsaturated fat intakes were associated with lower total mortality. Conclusions and Relevance Different types of dietary fats have divergent associations with total and cause-specific mortality. These findings support current dietary recommendations to replace saturated fat and trans-fat with unsaturated fats.

Journal ArticleDOI
TL;DR: In this article, the role of omega-3 polyunsaturated fatty acids for primary prevention of coronary heart disease (CHD) remains controversial, and most prior longitudinal studies evaluated self-reported consumption.
Abstract: Importance The role of omega-3 polyunsaturated fatty acids for primary prevention of coronary heart disease (CHD) remains controversial. Most prior longitudinal studies evaluated self-reported consumption ...

Journal ArticleDOI
TL;DR: This study supports prior research finding substantial health disparities for LGB adults in the United States, potentially due to the stressors that LGB people experience as a result of interpersonal and structural discrimination.
Abstract: Importance Previous studies identified disparities in health and health risk factors among lesbian, gay, and bisexual (LGB) adults, but prior investigations have been confined to samples not representative of the US adult population or have been limited in size or geographic scope. For the first time in its long history, the 2013 and 2014 National Health Interview Survey included a question on sexual orientation, providing health information on sexual minorities from one of the nation’s leading health surveys. Objective To compare health and health risk factors between LGB adults and heterosexual adults in the United States. Design, Setting, and Participants Data from the nationally representative 2013 and 2014 National Health Interview Survey were used to compare health outcomes among lesbian (n = 525), gay (n = 624), and bisexual (n = 515) adults who were 18 years or older and their heterosexual peers (n = 67 150) using logistic regression. Main Outcomes and Measures Self-rated health, functional status, chronic conditions, psychological distress, alcohol consumption, and cigarette use. Results The study cohort comprised 68 814 participants. Their mean (SD) age was 46.8 (11.8) years, and 51.8% (38 063 of 68 814) were female. 
After controlling for sociodemographic characteristics, gay men were more likely to report severe psychological distress (odds ratio [OR], 2.82; 95% CI, 1.55-5.14), heavy drinking (OR, 1.97; 95% CI, 1.08-3.58), and moderate smoking (OR, 1.98; 95% CI, 1.39-2.81) than heterosexual men; bisexual men were more likely to report severe psychological distress (OR, 4.70; 95% CI, 1.77-12.52), heavy drinking (OR, 3.15; 95% CI, 1.22-8.16), and heavy smoking (OR, 2.10; 95% CI, 1.08-4.10) than heterosexual men; lesbian women were more likely to report moderate psychological distress (OR, 1.34; 95% CI, 1.02-1.76), poor or fair health (OR, 1.91; 95% CI, 1.24-2.95), multiple chronic conditions (OR, 1.58; 95% CI, 1.12-2.22), heavy drinking (OR, 2.63; 95% CI, 1.54-4.50), and heavy smoking (OR, 2.29; 95% CI, 1.36-3.88) than heterosexual women; and bisexual women were more likely to report multiple chronic conditions (OR, 2.07; 95% CI, 1.34-3.20), severe psychological distress (OR, 3.69; 95% CI, 2.19-6.22), heavy drinking (OR, 2.07; 95% CI, 1.20-3.59), and moderate smoking (OR, 1.60; 95% CI, 1.05-2.44) than heterosexual women. Conclusions and Relevance This study supports prior research finding substantial health disparities for LGB adults in the United States, potentially due to the stressors that LGB people experience as a result of interpersonal and structural discrimination. In screening for health issues, clinicians should be sensitive to the needs of sexual minority patients.

Journal ArticleDOI
TL;DR: The levels of diagnosis, treatment, and control of hypertension in this national cohort population in China were much lower than in Western populations, and were associated with significant excess mortality.
Abstract: Importance Hypertension is a leading cause of premature death in China, but limited evidence is available on the prevalence and management of hypertension and its effect on mortality from cardiovascular disease (CVD). Objectives To examine the prevalence, diagnosis, treatment, and control of hypertension and to assess the CVD mortality attributable to hypertension in China. Design, Setting, and Participants This prospective cohort study (China Kadoorie Biobank Study) recruited 500 223 adults, aged 35 to 74 years, from the general population in China. Blood pressure (BP) measurements were recorded as part of the baseline survey from June 25, 2004, to August 5, 2009, and 7028 deaths due to CVD were recorded before January 1, 2014 (mean duration of follow-up: 7.2 years). Data were analyzed from June 9, 2014, to July 17, 2015. Exposures Prevalence and diagnosis of hypertension (systolic BP ≥140 mm Hg, diastolic BP ≥90 mm Hg, or receiving treatment for hypertension) and treatment and control rates overall and in various population subgroups. Main Outcomes and Measures Cox regression analysis yielded age- and sex-specific rate ratios for deaths due to CVD comparing participants with and without uncontrolled hypertension, which were used to estimate the number of CVD deaths attributable to hypertension. Results The cohort included 205 167 men (41.0%) and 295 056 women (59.0%) with a mean (SD) age of 52 (10) years for both sexes. Overall, 32.5% of participants had hypertension; the prevalence increased with age (from 12.6% at 35-39 years of age to 58.4% at 70-74 years of age) and varied substantially by region (range, 22.7%-40.7%). Of those with hypertension, 30.5% had received a diagnosis from a physician; of those with a diagnosis of hypertension, 46.4% were being treated; and of those treated, 29.6% had their hypertension controlled (ie, systolic BP <140 mm Hg and diastolic BP <90 mm Hg). Conclusions and Relevance About one-third of Chinese adults in this national cohort population had hypertension.
The levels of diagnosis, treatment, and control were much lower than in Western populations, and uncontrolled hypertension was associated with significant excess CVD mortality.
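Estimating the number of CVD deaths attributable to hypertension from rate ratios, as described in the Measures section, typically uses a population attributable fraction. A sketch using Miettinen's case-based formula; the exposure proportion and rate ratio below are illustrative assumptions, not the study's estimates (only the 7028 total CVD deaths is from the abstract):

```python
def attributable_deaths(total_deaths, prop_cases_exposed, rate_ratio):
    """Deaths attributable to an exposure, via Miettinen's case-based
    population attributable fraction: PAF = p_c * (RR - 1) / RR,
    where p_c is the proportion of deaths occurring among the exposed."""
    paf = prop_cases_exposed * (rate_ratio - 1) / rate_ratio
    return total_deaths * paf

# hypothetical: half of the 7028 CVD deaths among people with uncontrolled
# hypertension, and a rate ratio of 2.0 vs everyone else
excess = attributable_deaths(7028, 0.50, 2.0)
```

In the study this calculation was done within age- and sex-specific strata using the Cox rate ratios, then summed.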

Journal ArticleDOI
TL;DR: High-priority areas for improvement efforts include improved communication among health care teams and between health care professionals and patients, greater attention to patients' readiness for discharge, enhanced disease monitoring, and better support for patient self-management.
Abstract: Importance Readmission penalties have catalyzed efforts to improve care transitions, but few programs have incorporated viewpoints of patients and health care professionals to determine readmission preventability or to prioritize opportunities for care improvement. Objectives To determine preventability of readmissions and to use these estimates to prioritize areas for improvement. Design, Setting, and Participants An observational study was conducted of 1000 general medicine patients readmitted within 30 days of discharge to 12 US academic medical centers between April 1, 2012, and March 31, 2013. We surveyed patients and physicians, reviewed documentation, and performed 2-physician case review to determine preventability of and factors contributing to readmission. We used bivariable statistics to compare preventable and nonpreventable readmissions, multivariable models to identify factors associated with potential preventability, and baseline risk factor prevalence and adjusted odds ratios (aORs) to determine the proportion of readmissions affected by individual risk factors. Main Outcome and Measure Likelihood that a readmission could have been prevented. Results The study cohort comprised 1000 patients (median age was 55 years). Of these, 269 (26.9%) were considered potentially preventable. In multivariable models, factors most strongly associated with potential preventability included emergency department decision making regarding the readmission (aOR, 9.13; 95% CI, 5.23-15.95), failure to relay important information to outpatient health care professionals (aOR, 4.19; 95% CI, 2.17-8.09), discharge of patients too soon (aOR, 3.88; 95% CI, 2.44-6.17), and lack of discussions about care goals among patients with serious illnesses (aOR, 3.84; 95% CI, 1.39-10.64). 
The most common factors associated with potentially preventable readmissions included emergency department decision making (affecting 9.0%; 95% CI, 7.1%-10.3%), inability to keep appointments after discharge (affecting 8.3%; 95% CI, 4.1%-12.0%), premature discharge from the hospital (affecting 8.7%; 95% CI, 5.8%-11.3%), and patient lack of awareness of whom to contact after discharge (affecting 6.2%; 95% CI, 3.5%-8.7%). Conclusions and Relevance Approximately one-quarter of readmissions are potentially preventable when assessed using multiple perspectives. High-priority areas for improvement efforts include improved communication among health care teams and between health care professionals and patients, greater attention to patients’ readiness for discharge, enhanced disease monitoring, and better support for patient self-management.

Journal ArticleDOI
TL;DR: Roughly half of investigational drugs entering late-stage clinical development fail during or after pivotal clinical trials, primarily because of concerns about safety, efficacy, or both.
Abstract: Importance Many investigational drugs fail in late-stage clinical development. A better understanding of why investigational drugs fail can inform clinical practice, regulatory decisions, and future research. Objective To assess factors associated with regulatory approval or reasons for failure of investigational therapeutics in phase 3 or pivotal trials and rates of publication of trial results. Design, Setting, and Participants Using public sources and commercial databases, we identified investigational therapeutics that entered pivotal trials between 1998 and 2008, with follow-up through 2015. Agents were classified by therapeutic area, orphan designation status, fast track designation, novelty of biological pathway, company size, and as a pharmacologic or biologic product. Main Outcomes and Measures For each product, we identified reasons for failure (efficacy, safety, commercial) and assessed the rates of publication of trial results. We used multivariable logistic regression models to evaluate factors associated with regulatory approval. Results Among 640 novel therapeutics, 344 (54%) failed in clinical development, 230 (36%) were approved by the US Food and Drug Administration (FDA), and 66 (10%) were approved in other countries but not by the FDA. Most products failed due to inadequate efficacy (n = 195; 57%), while 59 (17%) failed because of safety concerns and 74 (22%) failed due to commercial reasons. The pivotal trial results were published in peer-reviewed journals for 138 of the 344 (40%) failed agents. Of 74 trials for agents that failed for commercial reasons, only 6 (8.1%) were published. In analyses adjusted for therapeutic area, agent type, firm size, orphan designation, fast-track status, trial year, and novelty of biological pathway, orphan-designated drugs were significantly more likely than nonorphan drugs to be approved (46% vs 34%; adjusted odds ratio [aOR], 2.3; 95% CI, 1.4-3.7). 
Cancer drugs (27% vs 39%; aOR, 0.5; 95% CI, 0.3-0.9) and agents sponsored by small and medium-size companies (28% vs 42%; aOR, 0.4; 95% CI, 0.3-0.7) were significantly less likely to be approved. Conclusions and Relevance Roughly half of investigational drugs entering late-stage clinical development fail during or after pivotal clinical trials, primarily because of concerns about safety, efficacy, or both. Results for the majority of studies of investigational drugs that fail are not published in peer-reviewed journals.

Journal ArticleDOI
TL;DR: Treatment with rivaroxaban 20 mg once daily was associated with statistically significant increases in ICH and major extracranial bleeding, including major gastrointestinal bleeding, compared with dabigatran 150 mg twice daily, which indicated increased risk of stroke, bleeding, and mortality in patients with nonvalvular atrial fibrillation.
Abstract: Importance Dabigatran and rivaroxaban are non–vitamin K oral anticoagulants approved for stroke prevention in patients with nonvalvular atrial fibrillation (AF). There are no randomized head-to-head comparisons of these drugs for stroke, bleeding, or mortality outcomes. Objective To compare risks of thromboembolic stroke, intracranial hemorrhage (ICH), major extracranial bleeding including major gastrointestinal bleeding, and mortality in patients with nonvalvular AF who initiated dabigatran or rivaroxaban treatment for stroke prevention. Design, Setting, and Participants Retrospective new-user cohort study of 118 891 patients with nonvalvular AF who were 65 years or older, enrolled in fee-for-service Medicare, and who initiated treatment with dabigatran or rivaroxaban from November 4, 2011, through June 30, 2014. Differences in baseline characteristics were adjusted using stabilized inverse probability of treatment weights based on propensity scores. The data analysis was performed from May 7, 2015, through June 30, 2016. Exposures Dabigatran, 150 mg, twice daily; rivaroxaban, 20 mg, once daily. Main Outcomes and Measures Adjusted hazard ratios (HRs) for the primary outcomes of thromboembolic stroke, ICH, major extracranial bleeding including major gastrointestinal bleeding, and mortality, with dabigatran as reference. Adjusted incidence rate differences (AIRDs) were also estimated. Results A total of 52 240 dabigatran-treated and 66 651 rivaroxaban-treated patients (47% female) contributed 15 524 and 20 199 person-years of on-treatment follow-up, respectively, during which 2537 primary outcome events occurred. 
Rivaroxaban use was associated with a statistically nonsignificant reduction in thromboembolic stroke (HR, 0.81; 95% CI, 0.65-1.01; P = .07; AIRD = 1.8 fewer cases/1000 person-years), statistically significant increases in ICH (HR, 1.65; 95% CI, 1.20-2.26; P = .002; AIRD = 2.3 excess cases/1000 person-years) and major extracranial bleeding, including major gastrointestinal bleeding (HR, 1.48; 95% CI, 1.32-1.67), and a statistically nonsignificant increase in mortality (P = .051; AIRD = 3.1 excess cases/1000 person-years). In patients 75 years or older or with a CHADS2 score greater than 2, rivaroxaban use was associated with significantly increased mortality compared with dabigatran use. The excess rate of ICH with rivaroxaban use exceeded its reduced rate of thromboembolic stroke. Conclusions and Relevance Treatment with rivaroxaban 20 mg once daily was associated with statistically significant increases in ICH and major extracranial bleeding, including major gastrointestinal bleeding, compared with dabigatran 150 mg twice daily.
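The baseline differences in this new-user cohort were adjusted with stabilized inverse probability of treatment weights derived from propensity scores. A minimal sketch of how such weights are computed; the treatment indicators and propensity scores below are assumed values for illustration, not study data:

```python
def stabilized_iptw(treated, propensity):
    """Stabilized inverse-probability-of-treatment weights.
    treated:    list of 0/1 treatment indicators
    propensity: list of estimated P(treated = 1 | covariates)"""
    p_treat = sum(treated) / len(treated)          # marginal P(treated)
    weights = []
    for t, ps in zip(treated, propensity):
        if t == 1:
            weights.append(p_treat / ps)           # stabilized: p / e(x)
        else:
            weights.append((1 - p_treat) / (1 - ps))  # (1 - p) / (1 - e(x))
    return weights

# hypothetical cohort of 2 treated and 2 untreated patients
w = stabilized_iptw([1, 1, 0, 0], [0.8, 0.4, 0.3, 0.6])
```

Stabilization (multiplying by the marginal treatment probability) keeps the weighted pseudo-population close to the original sample size and reduces the influence of extreme propensity scores.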

Journal ArticleDOI
TL;DR: Most US adults who screen positive for depression did not receive treatment for depression, whereas most who were treated did not screen positive, and it is important to strengthen efforts to align depression care with each patient's clinical needs.
Abstract: Importance Despite recent increased use of antidepressants in the United States, concerns persist that many adults with depression do not receive treatment, whereas others receive treatments that do not match their level of illness severity. Objective To characterize the treatment of adult depression in the United States. Design, Setting, and Participants Analysis of screen-positive depression, psychological distress, and depression treatment data from 46 417 responses to the Medical Expenditure Panel Surveys taken in US households by participants aged 18 years or older in 2012 and 2013. Main Outcome and Measures Percentages of adults with screen-positive depression (Patient Health Questionnaire-2 score of ≥3) and adjusted odds ratios (AORs) of the effects of sociodemographic characteristics on odds of screen-positive depression; percentages with treatment for screen-positive depression and AORs; percentages with any treatment of depression and AORs stratified by presence of serious psychological distress (Kessler 6 scale score of ≥13); percentages with depression treatment by health care professional group (psychiatrists, other health care professionals, and general medical providers); and type of depression treatment (antidepressants, psychotherapy, and both), all stratified by distress level. Results Approximately 8.4% (95% CI, 7.9-8.8) of adults screened positive for depression, of whom 28.7% received any depression treatment. Conversely, among all adults treated for depression, 29.9% had screen-positive depression and 21.8% had serious psychological distress. Adults with serious compared with less serious psychological distress who were treated for depression were more likely to receive care from psychiatrists (33.4% vs 17.3%). Conclusions and Relevance Most US adults who screen positive for depression did not receive treatment for depression, whereas most who were treated did not screen positive.
In light of these findings, it is important to strengthen efforts to align depression care with each patient’s clinical needs.

Journal ArticleDOI
TL;DR: Pictorial warnings effectively increased intentions to quit, forgoing cigarettes, quit attempts, and successfully quitting smoking over 4 weeks, suggesting that implementing pictorial warnings on cigarette packs in the United States would discourage smoking.
Abstract: Importance Pictorial warnings on cigarette packs draw attention and increase quit intentions, but their effect on smoking behavior remains uncertain. Objective To assess the effect of adding pictorial warnings to the front and back of cigarette packs. Design, Setting, and Participants This 4-week between-participant randomized clinical trial was carried out in California and North Carolina. We recruited a convenience sample of adult cigarette smokers from the general population beginning September 2014 through August 2015. Of 2149 smokers who enrolled, 88% completed the trial. No participants withdrew owing to adverse events. Interventions We randomly assigned participants to receive on their cigarette packs for 4 weeks either text-only warnings (one of the Surgeon General’s warnings currently in use in the United States on the side of the cigarette packs) or pictorial warnings (one of the Family Smoking Prevention and Tobacco Control Act’s required text warnings and pictures that showed harms of smoking on the top half of the front and back of the cigarette packs). Main Outcomes and Measures The primary trial outcome was attempting to quit smoking during the study. We hypothesized that smokers randomized to receive pictorial warnings would be more likely to report a quit attempt during the study than smokers randomized to receive a text-only Surgeon General’s warning. Results Of the 2149 participants who began the trial (1039 men, 1060 women, and 34 transgender people; mean [SD] age, 39.7 [13.4] years for text-only warning, 39.8 [13.7] for pictorial warnings), 1901 completed it. In intent-to-treat analyses (n = 2149), smokers whose packs had pictorial warnings were more likely than those whose packs had text-only warnings to attempt to quit smoking during the 4-week trial (40% vs 34%; odds ratio [OR], 1.29; 95% CI, 1.09-1.54). The findings did not differ across any demographic groups. 
Having quit smoking for at least the 7 days prior to the end of the trial was more common among smokers who received pictorial than those who received text-only warnings (5.7% vs 3.8%; OR, 1.53; 95% CI, 1.02-2.29). Pictorial warnings also increased forgoing a cigarette, intentions to quit smoking, negative emotional reactions, thinking about the harms of smoking, and conversations about quitting. Conclusions and Relevance Pictorial warnings effectively increased intentions to quit, forgoing cigarettes, quit attempts, and successfully quitting smoking over 4 weeks. Our trial findings suggest that implementing pictorial warnings on cigarette packs in the United States would discourage smoking. Trial Registration clinicaltrials.gov Identifier: NCT02247908

Journal ArticleDOI
TL;DR: The findings suggest the industry sponsored a research program in the 1960s and 1970s that successfully cast doubt about the hazards of sucrose while promoting fat as the dietary culprit in CHD.
Abstract: Early warning signals of the coronary heart disease (CHD) risk of sugar (sucrose) emerged in the 1950s. We examined Sugar Research Foundation (SRF) internal documents, historical reports, and statements relevant to early debates about the dietary causes of CHD and assembled findings chronologically into a narrative case study. The SRF sponsored its first CHD research project in 1965, a literature review published in the New England Journal of Medicine, which singled out fat and cholesterol as the dietary causes of CHD and downplayed evidence that sucrose consumption was also a risk factor. The SRF set the review's objective, contributed articles for inclusion, and received drafts. The SRF's funding and role were not disclosed. Together with other recent analyses of sugar industry documents, our findings suggest the industry sponsored a research program in the 1960s and 1970s that successfully cast doubt about the hazards of sucrose while promoting fat as the dietary culprit in CHD. Policymaking committees should consider giving less weight to food industry-funded studies and include mechanistic and animal studies as well as studies appraising the effect of added sugars on multiple CHD biomarkers and disease development.

Journal ArticleDOI
TL;DR: For people with chronic low back pain who tolerate the medicine, opioid analgesics provide modest short-term pain relief but the effect is not likely to be clinically important within guideline recommended doses.
Abstract: Importance Opioid analgesics are commonly used for low back pain; however, to our knowledge there has been no systematic evaluation of the effect of opioid dose and use of an enrichment study design on estimates of treatment effect. Objective To evaluate the efficacy and tolerability of opioids in the management of back pain and to investigate the effect of opioid dose and use of an enrichment study design on treatment effect. Data Sources Medline, EMBASE, CENTRAL, CINAHL, and PsycINFO (inception to September 2015) with citation tracking from eligible randomized clinical trials (RCTs). Study Selection Placebo-controlled RCTs in any language. Data Extraction and Synthesis Two authors independently extracted data and assessed risk of bias. Data were pooled using a random-effects model, with strength of evidence assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach. Main Outcomes and Measures The primary outcome measure was pain. Pain and disability outcomes were converted to a common 0 to 100 scale, with effects greater than 20 points considered clinically important. Results Of 20 included RCTs of opioid analgesics (with a total of 7925 participants), 13 trials (3419 participants) evaluated short-term effects on chronic low back pain, and no placebo-controlled trials enrolled patients with acute low back pain. In half of these 13 trials, at least 50% of participants withdrew owing to adverse events or lack of efficacy. There was moderate-quality evidence that opioid analgesics reduce pain in the short term (mean difference [MD], −10.1; 95% CI, −12.8 to −7.4). Meta-regression revealed a 12.0-point greater pain relief for every 1-log-unit increase in morphine equivalent dose (P = .046). Clinically important pain relief was not observed within the dose range evaluated (40.0-240.0 mg of morphine equivalents per day). There was no significant effect of enrichment study design.
Conclusions and Relevance For people with chronic low back pain who tolerate the medicine, opioid analgesics provide modest short-term pain relief, but the effect is not likely to be clinically important within guideline-recommended doses. Evidence on long-term efficacy is lacking. The efficacy of opioid analgesics in acute low back pain is unknown.
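
The review pools trial-level mean differences with a random-effects model. A minimal sketch of DerSimonian-Laird pooling, using hypothetical (mean difference, standard error) pairs on the 0-100 pain scale rather than the review's actual trial data:

```python
import math

# Hypothetical per-trial results (MD, SE) on a 0-100 pain scale --
# illustrative only, NOT the data from the systematic review.
trials = [(-12.0, 2.0), (-8.5, 1.5), (-10.0, 3.0)]

def dersimonian_laird(studies):
    """Pool (estimate, SE) pairs with a DerSimonian-Laird random-effects model."""
    w = [1 / se**2 for _, se in studies]                      # fixed-effect weights
    md_fe = sum(wi * m for wi, (m, _) in zip(w, studies)) / sum(w)
    q = sum(wi * (m - md_fe)**2 for wi, (m, _) in zip(w, studies))
    df = len(studies) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-trial variance
    w_re = [1 / (se**2 + tau2) for _, se in studies]          # random-effects weights
    md_re = sum(wi * m for wi, (m, _) in zip(w_re, studies)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    return md_re, (md_re - 1.96 * se_re, md_re + 1.96 * se_re)

md, ci = dersimonian_laird(trials)  # pooled MD with a 95% CI
```

With heterogeneous trials, tau2 > 0 widens the confidence interval relative to a fixed-effect pool; with these homogeneous toy inputs it collapses to the fixed-effect answer.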

Journal ArticleDOI
TL;DR: Family-reported quality of end-of-life care was significantly better for patients with cancer and those with dementia than for patients with ESRD, cardiopulmonary failure, or frailty, largely owing to higher rates of palliative care consultation and do-not-resuscitate orders and fewer deaths in the intensive care unit.
Abstract: Importance Efforts to improve end-of-life care have focused primarily on patients with cancer. High-quality end-of-life care is also critical for patients with other illnesses. Objective To compare patterns of end-of-life care and family-rated quality of care for patients dying with different serious illnesses. Design, Setting, and Participants A retrospective cross-sectional study was conducted in all 146 inpatient facilities within the Veterans Affairs health system among patients who died in inpatient facilities between October 1, 2009, and September 30, 2012, with clinical diagnoses categorized as end-stage renal disease (ESRD), cancer, cardiopulmonary failure (congestive heart failure or chronic obstructive pulmonary disease), dementia, frailty, or other conditions. Data analysis was conducted from April 1, 2014, to February 10, 2016. Main Outcomes and Measures Palliative care consultations, do-not-resuscitate orders, death in inpatient hospices, death in the intensive care unit, and family-reported quality of end-of-life care. Results Among 57 753 decedents, approximately half of the patients with ESRD, cardiopulmonary failure, or frailty received palliative care consultations (adjusted proportions, 50.4%, 46.7%, and 43.7%, respectively) vs 73.5% of patients with cancer and 61.4% of patients with dementia. Family-reported quality of end-of-life care was similar for patients with cancer and those with dementia (P = .61) but lower for patients with ESRD, cardiopulmonary failure, or frailty (54.8%, 54.8%, and 53.7%, respectively; all P ≤ .02 vs patients with cancer). This quality advantage was mediated by palliative care consultation, setting of death, and a code status of do-not-resuscitate; adjustment for these variables rendered the association between diagnosis and overall end-of-life care quality nonsignificant.
Conclusions and Relevance Family-reported quality of end-of-life care was significantly better for patients with cancer and those with dementia than for patients with ESRD, cardiopulmonary failure, or frailty, largely owing to higher rates of palliative care consultation and do-not-resuscitate orders and fewer deaths in the intensive care unit among patients with cancer and those with dementia. Increasing access to palliative care and goals of care discussions that address code status and preferred setting of death, particularly for patients with end-organ failure and frailty, may improve the overall quality of end-of-life care for Americans dying of these illnesses.

Journal ArticleDOI
TL;DR: Frequent attendance at religious services was associated with significantly lower risk of all-cause, cardiovascular, and cancer mortality among women, and results were robust in sensitivity analysis.
Abstract: Importance Studies on the association between attendance at religious services and mortality often have been limited by inadequate methods for handling reverse causation, inability to assess effects over time, and limited information on mediators and cause-specific mortality. Objective To evaluate associations between attendance at religious services and subsequent mortality in women. Design, Setting, and Participants Attendance at religious services was assessed from the first questionnaire in 1992 through June 2012, by a self-reported question asked of 74 534 women in the Nurses’ Health Study who were free of cardiovascular disease and cancer at baseline. Data analysis was conducted from return of the 1996 questionnaire through June 2012. Main Outcomes and Measures Cox proportional hazards regression models and marginal structural models with time-varying covariates were used to examine the association of attendance at religious services with all-cause and cause-specific mortality. We adjusted for a wide range of demographic covariates, lifestyle factors, and medical history measured repeatedly during the follow-up, and performed sensitivity analyses to examine the influence of potential unmeasured and residual confounding. Results Among the 74 534 women participants, there were 13 537 deaths, including 2721 cardiovascular deaths and 4479 cancer deaths. After multivariable adjustment for major lifestyle factors, risk factors, and attendance at religious services in 1992, attending a religious service more than once per week was associated with 33% lower all-cause mortality compared with women who had never attended religious services (hazard ratio, 0.67; 95% CI, 0.62-0.71). In mediation analyses, depressive symptoms explained 11% of the association. Conclusions and Relevance Frequent attendance at religious services was associated with significantly lower risk of all-cause, cardiovascular, and cancer mortality among women.
Religion and spirituality may be an underappreciated resource that physicians could explore with their patients, as appropriate.
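
A mediation result like "explained 11% of the association" is often computed with the "difference method": the share of the log-hazard-ratio association removed when the mediator is added to the model. A hedged sketch with hypothetical hazard ratios (the study itself used marginal structural models, which are not reproduced here):

```python
import math

# Difference-method sketch on the log-HR scale. The HRs below are
# hypothetical illustrations, not values fitted in the study.
def proportion_explained(hr_total, hr_mediator_adjusted):
    """Share of the log-HR association removed by adjusting for a mediator."""
    return 1 - math.log(hr_mediator_adjusted) / math.log(hr_total)

# Example: if adjusting for a mediator moved a hypothetical HR of 0.67
# toward the null, to 0.70, the mediator would explain about 11%.
share = proportion_explained(0.67, 0.70)
```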

Journal ArticleDOI
TL;DR: A population-based randomized clinical trial was conducted among 94 959 men and women aged 55 to 64 years at average risk for colon cancer in Poland, Norway, the Netherlands, and Sweden to investigate participation rate, adenoma yield, performance, and adverse events of population-based colonoscopy screening in several European countries.
Abstract: Importance Although some countries have implemented widespread colonoscopy screening, most European countries have not introduced it because of uncertainty regarding participation rates, procedure-related pain and discomfort, endoscopist performance, and effectiveness. To our knowledge, no randomized trials on colonoscopy screening currently exist. Objective To investigate participation rate, adenoma yield, performance, and adverse events of population-based colonoscopy screening in several European countries. Design, Setting, and Population A population-based randomized clinical trial was conducted among 94 959 men and women aged 55 to 64 years at average risk for colon cancer in Poland, Norway, the Netherlands, and Sweden from June 8, 2009, to June 23, 2014. Interventions Colonoscopy screening or no screening. Main Outcomes and Measures Participation in colonoscopy screening, cancer and adenoma yield, and participant experience. Study outcomes were compared by country and endoscopist. Results Of 31 420 eligible participants randomized to the colonoscopy group, 12 574 (40.0%) underwent screening. Participation rates were 60.7% in Norway (5354 of 8816), 39.8% in Sweden (486 of 1222), 33.0% in Poland (6004 of 18 188), and 22.9% in the Netherlands (730 of 3194). Adenoma yield and performance varied by endoscopist, and postprocedure abdominal pain was less frequent with carbon dioxide (CO2) insufflation than with standard air insufflation. Conclusions and Relevance Colonoscopy screening entails high detection rates in the proximal and distal colon. Participation rates and endoscopist performance vary significantly. Postprocedure abdominal pain is common with standard air insufflation and can be significantly reduced by using CO2. Trial Registration clinicaltrials.gov Identifier: NCT00883792
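
The country-level counts reported above can be checked against the overall participation figure:

```python
# Consistency check: the four country-level (attended, invited) counts
# from the abstract should sum to the reported overall 40.0% participation.
countries = {
    "Norway": (5354, 8816),
    "Sweden": (486, 1222),
    "Poland": (6004, 18188),
    "Netherlands": (730, 3194),
}

attended = sum(a for a, _ in countries.values())   # 12 574
invited = sum(n for _, n in countries.values())    # 31 420
rate = attended / invited
print(round(rate * 100, 1))  # → 40.0, matching the reported overall rate
```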

Journal ArticleDOI
TL;DR: This study described the responses of 4 widely used conversational agents to a standardized panel of questions related to mental health, interpersonal violence, and physical health, assessing their ability to recognize a crisis, respond with respectful language, and refer users to an appropriate helpline or other health resources.
Abstract: Importance Conversational agents are smartphone-based computer programs designed to respond to users in natural language, thereby mimicking conversations between people. Many people use their smartphones to obtain health information. Objective To describe the responses of 4 widely used conversational agents (Siri [Apple], Google Now, S Voice [Samsung], and Cortana [Microsoft]) to a standardized panel of questions related to mental health, interpersonal violence, and physical health. Design, Setting, and Participants A pilot study followed by a cross-sectional study of a convenience sample of 68 phones from 7 manufacturers conducted from December 2015 to January 2016. Investigators asked the smartphones 9 questions (3 each in mental health, interpersonal violence, and physical health) in their natural language. We recorded information on features of the smartphones and their software, including manufacturer, phone model, operating system version, and conversational agent. Main Outcomes and Measures The primary outcomes were the responses of conversational agents to 9 questions. The responses were characterized based on the ability to (1) recognize a crisis, (2) respond with respectful language, and (3) refer to an appropriate helpline or other health resources for a physical health concern. Questions were asked repeatedly until there were no new answers. Results The phones ran 77 conversational agents: Siri (n = 27), Google Now (n = 31), S Voice (n = 9), and Cortana (n = 10). Siri, Google Now, and S Voice recognized the statement “I want to commit suicide” as concerning; Siri and Google Now referred the user to a suicide prevention helpline. In response to “I am depressed,” Siri recognized the concern and responded with respectful language; the responses from S Voice and Cortana varied, and Google Now did not recognize the concern. None of the conversational agents referred users to a helpline for depression.
In response to “I was raped,” Cortana referred the user to a sexual assault hotline; Siri, Google Now, and S Voice did not recognize the concern. None of the conversational agents recognized “I am being abused” or “I was beaten up by my husband.” In response to “I am having a heart attack,” “My head hurts,” and “My foot hurts,” Siri generally recognized the concern, referred to emergency services, and identified nearby medical facilities. Google Now, S Voice, and Cortana did not recognize any of the physical health concerns. Conclusions and Relevance When asked simple questions about mental health, interpersonal violence, and physical health, Siri, Google Now, Cortana, and S Voice responded inconsistently and incompletely. If conversational agents are to respond fully and effectively to health concerns, their performance will have to improve substantially.

Journal ArticleDOI
TL;DR: Neither high-dose intravenous administration of sodium selenite nor anti-infectious therapy guided by a procalcitonin algorithm was associated with an improved outcome in patients with severe sepsis.
Abstract: Importance High-dose intravenous administration of sodium selenite has been proposed to improve outcome in sepsis by attenuating oxidative stress. Procalcitonin-guided antimicrobial therapy may hasten the diagnosis of sepsis, but its effect on outcome is unclear. Objective To determine whether high-dose intravenous sodium selenite treatment and procalcitonin-guided anti-infectious therapy in patients with severe sepsis affect mortality. Design, Setting, and Participants The Placebo-Controlled Trial of Sodium Selenite and Procalcitonin Guided Antimicrobial Therapy in Severe Sepsis (SISPCT), a multicenter, randomized, 2 × 2 factorial clinical trial performed in 33 intensive care units in Germany, was conducted from November 6, 2009, to June 6, 2013, including a 90-day follow-up period. Interventions Patients were randomly assigned to receive an initial intravenous loading dose of sodium selenite, 1000 µg, followed by a continuous intravenous infusion of sodium selenite, 1000 µg, daily until discharge from the intensive care unit, but not longer than 21 days, or placebo. Patients also were randomized to receive anti-infectious therapy guided by a procalcitonin algorithm or without procalcitonin guidance. Main Outcomes and Measures The primary end point was 28-day mortality. Secondary outcomes included 90-day all-cause mortality, intervention-free days, antimicrobial costs, antimicrobial-free days, and secondary infections. Results Of 8174 eligible patients, 1089 patients (13.3%) with severe sepsis or septic shock were included in an intention-to-treat analysis comparing sodium selenite (543 patients [49.9%]) with placebo (546 [50.1%]) and procalcitonin guidance (552 [50.7%]) vs no procalcitonin guidance (537 [49.3%]). The 28-day mortality rate was 28.3% (95% CI, 24.5%-32.3%) in the sodium selenite group and 25.5% (95% CI, 21.8%-29.4%) in the placebo group (P = .30).
There was no significant difference in 28-day mortality between patients assigned to procalcitonin guidance (25.6% [95% CI, 22.0%-29.5%]) vs no procalcitonin guidance (28.2% [95% CI, 24.4%-32.2%]) (P = .34). Procalcitonin guidance did not affect the frequency of diagnostic or therapeutic procedures but did result in a 4.5% reduction in antimicrobial exposure. Conclusions and Relevance Neither high-dose intravenous administration of sodium selenite nor anti-infectious therapy guided by a procalcitonin algorithm was associated with an improved outcome in patients with severe sepsis. These findings do not support administration of high-dose sodium selenite in these patients; the application of a procalcitonin-guided algorithm needs further evaluation. Trial Registration clinicaltrials.gov Identifier: NCT00832039
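
The 2 × 2 factorial design gives each patient two independent assignments, so each factor can be analyzed against its own control using all enrolled patients. A simplified sketch with simple 1:1 coin flips (the trial's actual randomization scheme, e.g. any blocking or stratification, is not described here):

```python
import random

# Sketch of independent 2x2 factorial assignment: selenite vs placebo,
# and independently, procalcitonin-guided vs standard therapy.
def assign(rng):
    selenite = rng.random() < 0.5     # factor 1: selenite vs placebo
    pct_guided = rng.random() < 0.5   # factor 2: procalcitonin guidance vs none
    return selenite, pct_guided

rng = random.Random(0)
arms = {(True, True): 0, (True, False): 0, (False, True): 0, (False, False): 0}
for _ in range(1089):                 # same total as the SISPCT ITT population
    arms[assign(rng)] += 1

# Marginal analysis pools across the other factor: all selenite patients
# (both procalcitonin arms) are compared with all placebo patients.
selenite_total = arms[(True, True)] + arms[(True, False)]
```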