
Showing papers in "JAMA in 2017"


Journal ArticleDOI
12 Dec 2017-JAMA
TL;DR: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints.
Abstract: Importance Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists’ diagnoses in a diagnostic setting. Design, Setting, and Participants Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). Conclusions and Relevance In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
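
As a concrete illustration of the slide-level evaluation described above, here is a minimal sketch of a receiver operating characteristic analysis over per-slide scores. The scores and labels are synthetic placeholders (only the 49/80 split of the test set comes from the abstract), not CAMELYON16 data.

```python
# Sketch: slide-level ROC analysis of the kind used to score entries.
# Labels and scores below are invented for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
labels = np.r_[np.ones(49), np.zeros(80)]        # 1 = metastasis present
scores = np.r_[rng.uniform(0.4, 1.0, 49),        # hypothetical per-slide
               rng.uniform(0.0, 0.6, 80)]        # algorithm confidences

auc = roc_auc_score(labels, scores)              # area under the ROC curve
fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"slide-level AUC = {auc:.3f}")
```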

2,116 citations


Journal ArticleDOI
20 Jun 2017-JAMA
TL;DR: In a large prospective cohort recruited in 1997-2011, age-specific risks of breast, ovarian, and contralateral breast cancer were estimated for BRCA1 and BRCA2 mutation carriers, and risk modification by family cancer history and mutation location was evaluated.
Abstract: Importance: The clinical management of BRCA1 and BRCA2 mutation carriers requires accurate, prospective cancer risk estimates. Objectives: To estimate age-specific risks of breast, ovarian, and contralateral breast cancer for mutation carriers and to evaluate risk modification by family cancer history and mutation location. Design, Setting, and Participants: Prospective cohort study of 6036 BRCA1 and 3820 BRCA2 female carriers (5046 unaffected and 4810 with breast or ovarian cancer or both at baseline) recruited in 1997-2011 through the International BRCA1/2 Carrier Cohort Study, the Breast Cancer Family Registry and the Kathleen Cuningham Foundation Consortium for Research into Familial Breast Cancer, with ascertainment through family clinics (94%) and population-based studies (6%). The majority were from large national studies in the United Kingdom (EMBRACE), the Netherlands (HEBON), and France (GENEPSO). Follow-up ended December 2013; median follow-up was 5 years. Exposures: BRCA1/2 mutations, family cancer history, and mutation location. Main Outcomes and Measures: Annual incidences, standardized incidence ratios, and cumulative risks of breast, ovarian, and contralateral breast cancer. Results: Among 3886 women (median age, 38 years; interquartile range [IQR], 30-46 years) eligible for the breast cancer analysis, 5066 women (median age, 38 years; IQR, 31-47 years) eligible for the ovarian cancer analysis, and 2213 women (median age, 47 years; IQR, 40-55 years) eligible for the contralateral breast cancer analysis, 426 were diagnosed with breast cancer, 109 with ovarian cancer, and 245 with contralateral breast cancer during follow-up. The cumulative breast cancer risk to age 80 years was 72% (95% CI, 65%-79%) for BRCA1 and 69% (95% CI, 61%-77%) for BRCA2 carriers. Breast cancer incidences increased rapidly in early adulthood until ages 30 to 40 years for BRCA1 and until ages 40 to 50 years for BRCA2 carriers, then remained at a similar, constant incidence (20-30 per 1000 person-years) until age 80 years. The cumulative ovarian cancer risk to age 80 years was 44% (95% CI, 36%-53%) for BRCA1 and 17% (95% CI, 11%-25%) for BRCA2 carriers. For contralateral breast cancer, the cumulative risk 20 years after breast cancer diagnosis was 40% (95% CI, 35%-45%) for BRCA1 and 26% (95% CI, 20%-33%) for BRCA2 carriers (hazard ratio [HR] for comparing BRCA2 vs BRCA1, 0.62; 95% CI, 0.47-0.82; P=.001 for difference). Breast cancer risk increased with increasing number of first- and second-degree relatives diagnosed as having breast cancer for both BRCA1 (HR for ≥2 vs 0 affected relatives, 1.99; 95% CI, 1.41-2.82; P<.001 for trend) and BRCA2 carriers (HR, 1.91; 95% CI, 1.08-3.37; P=.02 for trend). Breast cancer risk was higher if mutations were located outside vs within the regions bounded by positions c.2282-c.4071 in BRCA1 (HR, 1.46; 95% CI, 1.11-1.93; P=.007) and c.2831-c.6401 in BRCA2 (HR, 1.93; 95% CI, 1.36-2.74; P<.001). Conclusions and Relevance: These findings provide estimates of cancer risk based on BRCA1 and BRCA2 mutation carrier status using prospective data collection and demonstrate the potential importance of family history and mutation location in risk assessment.
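
To see how a constant incidence of 20-30 per 1000 person-years compounds into lifetime risks of the magnitude reported above, the sketch below uses a simple constant-hazard exponential model. This is a back-of-the-envelope illustration, not the study's actual survival estimator.

```python
# Simplified sketch (not the study's estimator): cumulative risk from a
# constant hazard, risk = 1 - exp(-rate * years).
import math

def cumulative_risk(rate_per_1000_py: float, years: float) -> float:
    """Cumulative risk under a constant hazard, ignoring competing risks."""
    hazard = rate_per_1000_py / 1000.0
    return 1.0 - math.exp(-hazard * years)

for rate in (20, 30):
    # e.g. risk accumulated over 40 years (roughly age 40 to age 80)
    print(f"{rate}/1000 py over 40 y -> {cumulative_risk(rate, 40):.0%}")
# prints ~55% and ~70%, the same order of magnitude as the reported
# cumulative risks to age 80
```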

1,733 citations


Journal ArticleDOI
Mohammad H. Forouzanfar1, Patrick Liu1, Gregory A. Roth1, Marie Ng1, Stan Biryukov1, Laurie B. Marczak1, Lily Alexander1, Kara Estep1, Kalkidan Hassen Abate2, Tomi Akinyemiju3, Raghib Ali4, Nelson Alvis-Guzman5, Peter Azzopardi, Amitava Banerjee6, Till Bärnighausen7, Till Bärnighausen8, Arindam Basu9, Tolesa Bekele10, Derrick A Bennett4, Sibhatu Biadgilign, Ferrán Catalá-López11, Ferrán Catalá-López12, Valery L. Feigin13, João C. Fernandes14, Florian Fischer15, Alemseged Aregay Gebru16, Philimon Gona17, Rajeev Gupta, Graeme J. Hankey18, Graeme J. Hankey19, Jost B. Jonas20, Suzanne E. Judd3, Young-Ho Khang21, Ardeshir Khosravi, Yun Jin Kim22, Ruth W Kimokoti23, Yoshihiro Kokubo, Dhaval Kolte24, Alan D. Lopez25, Paulo A. Lotufo26, Reza Malekzadeh, Yohannes Adama Melaku16, Yohannes Adama Melaku27, George A. Mensah28, Awoke Misganaw1, Ali H. Mokdad1, Andrew E. Moran29, Haseeb Nawaz30, Bruce Neal, Frida Namnyak Ngalesoni31, Takayoshi Ohkubo32, Farshad Pourmalek33, Anwar Rafay, Rajesh Kumar Rai, David Rojas-Rueda, Uchechukwu K.A. Sampson28, Itamar S. Santos26, Monika Sawhney34, Aletta E. Schutte35, Sadaf G. Sepanlou, Girma Temam Shifa36, Girma Temam Shifa37, Ivy Shiue38, Ivy Shiue39, Bemnet Amare Tedla40, Amanda G. Thrift41, Marcello Tonelli42, Thomas Truelsen43, Nikolaos Tsilimparis, Kingsley N. Ukwaja, Olalekan A. Uthman44, Tommi Vasankari, Narayanaswamy Venketasubramanian, Vasiliy Victorovich Vlassov45, Theo Vos1, Ronny Westerman, Lijing L. Yan46, Yuichiro Yano47, Naohiro Yonemoto, Maysaa El Sayed Zaki, Christopher J L Murray1 
10 Jan 2017-JAMA
TL;DR: In international surveys, although there is uncertainty in some estimates, the rate of elevated SBP (≥110-115 and ≥140 mm Hg) increased substantially between 1990 and 2015, and DALYs and deaths associated with elevated SBP also increased.
Abstract: Importance Elevated systolic blood pressure (SBP) is a leading global health risk. Quantifying the levels of SBP is important to guide prevention policies and interventions. Objective To estimate the association between SBP of at least 110 to 115 mm Hg and SBP of 140 mm Hg or higher and the burden of different causes of death and disability by age and sex for 195 countries and territories, 1990-2015. Design A comparative risk assessment of health loss related to SBP. Estimated distribution of SBP was based on 844 studies from 154 countries (published 1980-2015) of 8.69 million participants. Spatiotemporal Gaussian process regression was used to generate estimates of mean SBP and adjusted variance for each age, sex, country, and year. Diseases with sufficient evidence for a causal relationship with high SBP (eg, ischemic heart disease, ischemic stroke, and hemorrhagic stroke) were included in the primary analysis. Main Outcomes and Measures Mean SBP level, cause-specific deaths, and health burden related to SBP (≥110-115 mm Hg and also ≥140 mm Hg) by age, sex, country, and year. Results Between 1990 and 2015, the rate of SBP of at least 110 to 115 mm Hg increased from 73 119 (95% uncertainty interval [UI], 67 949-78 241) to 81 373 (95% UI, 76 814-85 770) per 100 000, and SBP of 140 mm Hg or higher increased from 17 307 (95% UI, 17 117-17 492) to 20 526 (95% UI, 20 283-20 746) per 100 000. The estimated annual death rate per 100 000 associated with SBP of at least 110 to 115 mm Hg increased from 135.6 (95% UI, 122.4-148.1) to 145.2 (95% UI, 130.3-159.9) and the rate for SBP of 140 mm Hg or higher increased from 97.9 (95% UI, 87.5-108.1) to 106.3 (95% UI, 94.6-118.1). Loss of disability-adjusted life-years (DALYs) associated with SBP of at least 110 to 115 mm Hg increased from 148 million (95% UI, 134-162 million) to 211 million (95% UI, 193-231 million), and for SBP of 140 mm Hg or higher, the loss increased from 95.9 million (95% UI, 87.0-104.9 million) to 143.0 million (95% UI, 130.2-157.0 million). The largest numbers of SBP-related deaths were caused by ischemic heart disease (4.9 million [95% UI, 4.0-5.7 million]; 54.5%), hemorrhagic stroke (2.0 million [95% UI, 1.6-2.3 million]; 58.3%), and ischemic stroke (1.5 million [95% UI, 1.2-1.8 million]; 50.0%). In 2015, China, India, Russia, Indonesia, and the United States accounted for more than half of the global DALYs related to SBP of at least 110 to 115 mm Hg. Conclusions and Relevance In international surveys, although there is uncertainty in some estimates, the rate of elevated SBP (≥110-115 and ≥140 mm Hg) increased substantially between 1990 and 2015, and DALYs and deaths associated with elevated SBP also increased. Projections based on this sample suggest that in 2015, an estimated 3.5 billion adults had SBP of at least 110 to 115 mm Hg and 874 million adults had SBP of 140 mm Hg or higher.
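
For readers unfamiliar with the Gaussian process regression named in the Design, the toy sketch below smooths an invented single-country SBP time series with scikit-learn. The study's spatiotemporal model is far richer (pooling over age, sex, country, and year), so treat this purely as an illustration of the basic technique under made-up data.

```python
# Illustrative only: GP-smooth a noisy, invented mean-SBP time series.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

years = np.arange(1990, 2016).reshape(-1, 1)
mean_sbp = (124 + 0.08 * (years.ravel() - 1990)
            + np.random.default_rng(1).normal(0, 0.5, years.size))

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.25)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(years, mean_sbp)

est, sd = gpr.predict(years, return_std=True)    # posterior mean + uncertainty
print(f"2015 estimate: {est[-1]:.1f} mm Hg (±{1.96 * sd[-1]:.1f})")
```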

1,494 citations


Journal ArticleDOI
11 Jul 2017-JAMA
TL;DR: This study assesses overall survival associated with electronic patient-reported symptom monitoring vs usual care during routine cancer treatment.
Abstract: This study assesses overall survival associated with electronic patient-reported symptom monitoring vs usual care during routine cancer treatment.

1,378 citations


Journal ArticleDOI
19 Dec 2017-JAMA
TL;DR: In the final analysis of this randomized clinical trial of patients with glioblastoma who had received standard radiochemotherapy, the addition of TTFields to maintenance temozolomide chemotherapy vs maintenance temozolomide alone resulted in statistically significant improvement in progression-free survival and overall survival.
Abstract: Importance Tumor-treating fields (TTFields) is an antimitotic treatment modality that interferes with glioblastoma cell division and organelle assembly by delivering low-intensity alternating electric fields to the tumor. Objective To investigate whether TTFields improves progression-free and overall survival of patients with glioblastoma, a fatal disease that commonly recurs at the initial tumor site or in the central nervous system. Design, Setting, and Participants In this randomized, open-label trial, 695 patients with glioblastoma whose tumor was resected or biopsied and had completed concomitant radiochemotherapy (median time from diagnosis to randomization, 3.8 months) were enrolled at 83 centers (July 2009-2014) and followed up through December 2016. A preliminary report from this trial was published in 2015; this report describes the final analysis. Interventions Patients were randomized 2:1 to TTFields plus maintenance temozolomide chemotherapy (n = 466) or temozolomide alone (n = 229). The TTFields, consisting of low-intensity, 200 kHz frequency, alternating electric fields, was delivered (≥ 18 hours/d) via 4 transducer arrays on the shaved scalp and connected to a portable device. Temozolomide was administered to both groups (150-200 mg/m2) for 5 days per 28-day cycle (6-12 cycles). Main Outcomes and Measures Progression-free survival (tested at α = .046). The secondary end point was overall survival (tested hierarchically at α = .048). Analyses were performed for the intent-to-treat population. Adverse events were compared by group. Results Of the 695 randomized patients (median age, 56 years; IQR, 48-63; 473 men [68%]), 637 (92%) completed the trial. Median progression-free survival from randomization was 6.7 months in the TTFields-temozolomide group and 4.0 months in the temozolomide-alone group (HR, 0.63; 95% CI, 0.52-0.76; P < .001). Conclusions and Relevance In the final analysis of this randomized clinical trial of patients with glioblastoma who had received standard radiochemotherapy, the addition of TTFields to maintenance temozolomide chemotherapy vs maintenance temozolomide alone resulted in statistically significant improvement in progression-free survival and overall survival. These results are consistent with the previous interim analysis. Trial Registration clinicaltrials.gov Identifier:NCT00916409

1,368 citations


Journal ArticleDOI
12 Dec 2017-JAMA
TL;DR: In this evaluation of retinal images from multiethnic cohorts of patients with diabetes, the DLS had high sensitivity and specificity for identifying diabetic retinopathy and related eye diseases.
Abstract: Importance A deep learning system (DLS) is a machine learning technology with potential for screening diabetic retinopathy and related eye diseases. Objective To evaluate the performance of a DLS in detecting referable diabetic retinopathy, vision-threatening diabetic retinopathy, possible glaucoma, and age-related macular degeneration (AMD) in community and clinic-based multiethnic populations with diabetes. Design, Setting, and Participants Diagnostic performance of a DLS for diabetic retinopathy and related eye diseases was evaluated using 494 661 retinal images. A DLS was trained for detecting diabetic retinopathy (using 76 370 images), possible glaucoma (125 189 images), and AMD (72 610 images), and performance of DLS was evaluated for detecting diabetic retinopathy (using 112 648 images), possible glaucoma (71 896 images), and AMD (35 948 images). Training of the DLS was completed in May 2016, and validation of the DLS was completed in May 2017 for detection of referable diabetic retinopathy (moderate nonproliferative diabetic retinopathy or worse) and vision-threatening diabetic retinopathy (severe nonproliferative diabetic retinopathy or worse) using a primary validation data set in the Singapore National Diabetic Retinopathy Screening Program and 10 multiethnic cohorts with diabetes. Exposures Use of a deep learning system. Main Outcomes and Measures Area under the receiver operating characteristic curve (AUC) and sensitivity and specificity of the DLS with professional graders (retinal specialists, general ophthalmologists, trained graders, or optometrists) as the reference standard. Results In the primary validation dataset (n = 14 880 patients; 71 896 images; mean [SD] age, 60.2 [2.2] years; 54.6% men), the prevalence of referable diabetic retinopathy was 3.0%; vision-threatening diabetic retinopathy, 0.6%; possible glaucoma, 0.1%; and AMD, 2.5%. The AUC of the DLS for referable diabetic retinopathy was 0.936 (95% CI, 0.925-0.943), sensitivity was 90.5% (95% CI, 87.3%-93.0%), and specificity was 91.6% (95% CI, 91.0%-92.2%). For vision-threatening diabetic retinopathy, AUC was 0.958 (95% CI, 0.956-0.961), sensitivity was 100% (95% CI, 94.1%-100.0%), and specificity was 91.1% (95% CI, 90.7%-91.4%). For possible glaucoma, AUC was 0.942 (95% CI, 0.929-0.954), sensitivity was 96.4% (95% CI, 81.7%-99.9%), and specificity was 87.2% (95% CI, 86.8%-87.5%). For AMD, AUC was 0.931 (95% CI, 0.928-0.935), sensitivity was 93.2% (95% CI, 91.1%-99.8%), and specificity was 88.7% (95% CI, 88.3%-89.0%). For referable diabetic retinopathy in the 10 additional datasets, AUC range was 0.889 to 0.983 (n = 40 752 images). Conclusions and Relevance In this evaluation of retinal images from multiethnic cohorts of patients with diabetes, the DLS had high sensitivity and specificity for identifying diabetic retinopathy and related eye diseases. Further research is necessary to evaluate the applicability of the DLS in health care settings and the utility of the DLS to improve vision outcomes.
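
The sensitivity and specificity figures above are read off a single operating threshold on the classifier's output. The sketch below shows that computation on synthetic stand-in data (the ~3% prevalence is the only number borrowed from the abstract; everything else is invented).

```python
# Sketch: sensitivity/specificity at a fixed threshold against a
# reference-standard label (here, synthetic grader labels).
import numpy as np

def sens_spec(scores, labels, threshold):
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(7)
labels = (rng.uniform(size=1000) < 0.03).astype(int)   # ~3% prevalence
scores = np.where(labels == 1,
                  rng.beta(8, 2, 1000),                # diseased score higher
                  rng.beta(2, 8, 1000))
sens, spec = sens_spec(scores, labels, threshold=0.5)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```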

1,309 citations


Journal ArticleDOI
04 Apr 2017-JAMA
TL;DR: Among patients in the United States diagnosed with thyroid cancer from 1974-2013, the overall incidence of thyroid cancer increased 3% annually, with increases in the incidence rate and thyroid cancer mortality rate for advanced-stage papillary thyroid cancer.
Abstract: Importance Thyroid cancer incidence has increased substantially in the United States over the last 4 decades, driven largely by increases in papillary thyroid cancer. It is unclear whether the increasing incidence of papillary thyroid cancer has been related to thyroid cancer mortality trends. Objective To compare trends in thyroid cancer incidence and mortality by tumor characteristics at diagnosis. Design, Setting, and Participants Trends in thyroid cancer incidence and incidence-based mortality rates were evaluated using data from the Surveillance, Epidemiology, and End Results-9 (SEER-9) cancer registry program, and annual percent change in rates was calculated using log-linear regression. Exposure Tumor characteristics. Main Outcomes and Measures Annual percent changes in age-adjusted thyroid cancer incidence and incidence-based mortality rates by histologic type and SEER stage for cases diagnosed during 1974-2013. Results Among 77 276 patients (mean [SD] age at diagnosis, 48 [16] years; 58 213 [75%] women) diagnosed with thyroid cancer from 1974-2013, papillary thyroid cancer was the most common histologic type (64 625 cases), and 2371 deaths from thyroid cancer occurred during 1994-2013. Thyroid cancer incidence increased, on average, 3.6% per year (95% CI, 3.2%-3.9%) during 1974-2013 (from 4.56 per 100 000 person-years in 1974-1977 to 14.42 per 100 000 person-years in 2010-2013), primarily related to increases in papillary thyroid cancer (annual percent change, 4.4% [95% CI, 4.0%-4.7%]). Papillary thyroid cancer incidence increased for all SEER stages at diagnosis (4.6% per year for localized, 4.3% per year for regional, 2.4% per year for distant, 1.8% per year for unknown). During 1994-2013, incidence-based mortality increased 1.1% per year (95% CI, 0.6%-1.6%) (from 0.40 per 100 000 person-years in 1994-1997 to 0.46 per 100 000 person-years in 2010-2013) overall and 2.9% per year (95% CI, 1.1%-4.7%) for SEER distant stage papillary thyroid cancer. Conclusions and Relevance Among patients in the United States diagnosed with thyroid cancer from 1974-2013, the overall incidence of thyroid cancer increased 3% annually, with increases in the incidence rate and thyroid cancer mortality rate for advanced-stage papillary thyroid cancer. These findings are consistent with a true increase in the occurrence of thyroid cancer in the United States.
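
The "annual percent change" figures above come from log-linear regression: fit log(rate) = a + b·year and report APC = 100·(e^b − 1). A minimal sketch, with invented rate points that only loosely track the reported 4.56 to 14.42 per 100 000 trend:

```python
# Sketch of the annual-percent-change calculation via log-linear regression.
import numpy as np

years = np.array([1975, 1985, 1995, 2005, 2012], dtype=float)
rates = np.array([4.6, 6.5, 8.5, 11.5, 14.0])   # per 100 000 (invented points)

slope, intercept = np.polyfit(years, np.log(rates), 1)
apc = 100 * (np.exp(slope) - 1)                  # APC = 100 * (e^b - 1)
print(f"APC ≈ {apc:.1f}% per year")              # ~3% with these numbers
```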

1,296 citations


Journal ArticleDOI
27 Jun 2017-JAMA
TL;DR: To estimate the recent prevalence of diabetes and prediabetes and to investigate their ethnic variation in the Chinese adult population, a nationally representative cross-sectional survey was conducted in 2013 in mainland China.
Abstract: Importance Previous studies have shown increasing prevalence of diabetes in China, which now has the world’s largest diabetes epidemic. Objectives To estimate the recent prevalence and to investigate the ethnic variation of diabetes and prediabetes in the Chinese adult population. Design, Setting, and Participants A nationally representative cross-sectional survey in 2013 in mainland China, which consisted of 170 287 participants. Exposures Fasting plasma glucose and hemoglobin A1c levels were measured for all participants. A 2-hour oral glucose tolerance test was conducted for all participants without diagnosed diabetes. Main Outcomes and Measures Primary outcomes were total diabetes and prediabetes defined according to the 2010 American Diabetes Association criteria. Awareness and treatment were also evaluated. Hemoglobin A1c concentration of less than 7.0% among treated diabetes patients was considered adequate glycemic control. Minority ethnic groups in China with at least 1000 participants (Tibetan, Zhuang, Manchu, Uyghur, and Muslim) were compared with Han participants. Results Among the Chinese adult population, the estimated standardized prevalence of total diagnosed and undiagnosed diabetes was 10.9% (95% CI, 10.4%-11.5%); that of diagnosed diabetes, 4.0% (95% CI, 3.6%-4.3%); and that of prediabetes, 35.7% (95% CI, 34.1%-37.4%). Among persons with diabetes, 36.5% (95% CI, 34.3%-38.6%) were aware of their diagnosis and 32.2% (95% CI, 30.1%-34.2%) were treated; 49.2% (95% CI, 46.9%-51.5%) of patients treated had adequate glycemic control. Tibetan and Muslim Chinese had significantly lower crude prevalence of diabetes than Han participants (14.7% [95% CI, 14.6%-14.9%] for Han, 4.3% [95% CI, 3.5%-5.0%] for Tibetan, and 10.6% [95% CI, 9.3%-11.9%] for Muslim). Conclusions and Relevance Among adults in China, the estimated overall prevalence of diabetes was 10.9%, and that for prediabetes was 35.7%. Differences from previous estimates for 2010 may be due to an alternate method of measuring hemoglobin A1c.
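
The "standardized prevalence" reported above is the result of direct age standardization: weighting age-band prevalences by a reference population's age structure. A minimal sketch with hypothetical bands, weights, and prevalences:

```python
# Sketch: direct age standardization (all numbers hypothetical).
age_bands = ["18-39", "40-59", "60+"]
crude_prev = [0.05, 0.12, 0.22]     # prevalence observed within each band
std_weights = [0.50, 0.30, 0.20]    # reference-population age structure

# weighted average of band prevalences
standardized = sum(p * w for p, w in zip(crude_prev, std_weights))
print(f"standardized prevalence = {standardized:.1%}")
```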

1,261 citations


Journal ArticleDOI
03 Oct 2017-JAMA
TL;DR: In clinical data from 409 hospitals, sepsis was present in 6% of adult hospitalizations, and in contrast to claims-based analyses, neither the incidence of sepsis nor the combined outcome of death or discharge to hospice changed significantly between 2009 and 2014.
Abstract: Importance Estimates from claims-based analyses suggest that the incidence of sepsis is increasing and mortality rates from sepsis are decreasing. However, estimates from claims data may lack clinical fidelity and can be affected by changing diagnosis and coding practices over time. Objective To estimate the US national incidence of sepsis and trends using detailed clinical data from the electronic health record (EHR) systems of diverse hospitals. Design, Setting, and Population Retrospective cohort study of adult patients admitted to 409 academic, community, and federal hospitals from 2009-2014. Exposures Sepsis was identified using clinical indicators of presumed infection and concurrent acute organ dysfunction, adapting Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) criteria for objective and consistent EHR-based surveillance. Main Outcomes and Measures Sepsis incidence, outcomes, and trends from 2009-2014 were calculated using regression models and compared with claims-based estimates using International Classification of Diseases, Ninth Revision, Clinical Modification codes for severe sepsis or septic shock. Case-finding criteria were validated against Sepsis-3 criteria using medical record reviews. Results A total of 173 690 sepsis cases (mean age, 66.5 [SD, 15.5] y; 77 660 [42.4%] women) were identified using clinical criteria among 2 901 019 adults admitted to study hospitals in 2014 (6.0% incidence). Of these, 26 061 (15.0%) died in the hospital and 10 731 (6.2%) were discharged to hospice. From 2009-2014, sepsis incidence using clinical criteria was stable (+0.6% relative change/y [95% CI, −2.3% to 3.5%], P = .67) whereas incidence per claims increased (+10.3%/y [95% CI, 7.2% to 13.3%]). In-hospital mortality using clinical criteria declined (P = .004), but there was no significant change in the combined outcome of death or discharge to hospice (−1.3%/y [95% CI, −3.2% to 0.6%], P = .19). In contrast, mortality using claims declined significantly (−7.0%/y [95% CI, −8.8% to −5.2%]). Conclusions and Relevance In clinical data from 409 hospitals, sepsis was present in 6% of adult hospitalizations, and in contrast to claims-based analyses, neither the incidence of sepsis nor the combined outcome of death or discharge to hospice changed significantly between 2009 and 2014. The findings also suggest that EHR-based clinical data provide more objective estimates than claims-based data for sepsis surveillance.
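
The EHR surveillance idea above (presumed infection plus concurrent acute organ dysfunction) can be expressed as a simple rule. This is a rough sketch, not the study's exact operational definition; the field names and thresholds are hypothetical stand-ins.

```python
# Sketch of Sepsis-3-style EHR surveillance: flag when presumed infection
# coincides with an acute rise of >= 2 SOFA points. All fields hypothetical.
from dataclasses import dataclass

@dataclass
class Encounter:
    blood_culture_drawn: bool
    antibiotic_days: int
    baseline_sofa: int
    peak_sofa: int

def flag_sepsis(e: Encounter) -> bool:
    presumed_infection = e.blood_culture_drawn and e.antibiotic_days >= 4
    organ_dysfunction = (e.peak_sofa - e.baseline_sofa) >= 2
    return presumed_infection and organ_dysfunction

print(flag_sepsis(Encounter(True, 5, 1, 4)))   # True
```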

1,105 citations


Journal ArticleDOI
12 Sep 2017-JAMA
TL;DR: Among patients with sentinel lymph node metastases, 10-year overall survival with sentinel lymph node dissection alone was noninferior to overall survival with axillary lymph node dissection; these findings do not support routine use of axillary lymph node dissection in this patient population.
Abstract: Importance The results of the American College of Surgeons Oncology Group Z0011 (ACOSOG Z0011) trial were first reported in 2005 with a median follow-up of 6.3 years. Longer follow-up was necessary because the majority of the patients had estrogen receptor–positive tumors that may recur later in the disease course (the ACOSOG is now part of the Alliance for Clinical Trials in Oncology). Objective To determine whether the 10-year overall survival of patients with sentinel lymph node metastases treated with breast-conserving therapy and sentinel lymph node dissection (SLND) alone without axillary lymph node dissection (ALND) is noninferior to that of women treated with axillary dissection. Design, Setting, and Participants The ACOSOG Z0011 phase 3 randomized clinical trial enrolled patients from May 1999 to December 2004 at 115 sites (both academic and community medical centers). The last date of follow-up was September 29, 2015, in the ACOSOG Z0011 (Alliance) trial. Eligible patients were women with clinical T1 or T2 invasive breast cancer, no palpable axillary adenopathy, and 1 or 2 sentinel lymph nodes containing metastases. Interventions All patients had planned lumpectomy, planned tangential whole-breast irradiation, and adjuvant systemic therapy. Third-field radiation was prohibited. Main Outcomes and Measures The primary outcome was overall survival with a noninferiority hazard ratio (HR) margin of 1.3. The secondary outcome was disease-free survival. Results Among 891 women who were randomized (median age, 55 years), 856 (96%) completed the trial (446 in the SLND alone group and 445 in the ALND group). At a median follow-up of 9.3 years (interquartile range, 6.93-10.34 years), the 10-year overall survival was 86.3% in the SLND alone group and 83.6% in the ALND group (HR, 0.85 [1-sided 95% CI, 0-1.16]; noninferiority P = .02). The 10-year disease-free survival was 80.2% in the SLND alone group and 78.2% in the ALND group (HR, 0.85 [95% CI, 0.62-1.17]; P = .32). Between year 5 and year 10, 1 regional recurrence was seen in the SLND alone group vs none in the ALND group. Ten-year regional recurrence did not differ significantly between the 2 groups. Conclusions and Relevance Among women with T1 or T2 invasive primary breast cancer, no palpable axillary adenopathy, and 1 or 2 sentinel lymph nodes containing metastases, 10-year overall survival for patients treated with sentinel lymph node dissection alone was noninferior to overall survival for those treated with axillary lymph node dissection. These findings do not support routine use of axillary lymph node dissection in this patient population based on 10-year outcomes. Trial Registration clinicaltrials.gov Identifier:NCT00003855
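
The noninferiority logic above reduces to one comparison: the upper limit of the 1-sided 95% CI for the HR (SLND vs ALND) must fall below the prespecified margin of 1.3. A one-line sketch using the numbers from the abstract:

```python
# Noninferiority check: reported upper CI bound vs prespecified HR margin.
hr_upper_bound = 1.16   # 1-sided 95% CI upper limit from the abstract
margin = 1.3            # prespecified noninferiority margin

noninferior = hr_upper_bound < margin
print(f"upper bound {hr_upper_bound} < margin {margin}: "
      f"noninferior = {noninferior}")
```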

1,021 citations


Journal ArticleDOI
06 Jun 2017-JAMA
TL;DR: In a meta-analysis of more than 1 million pregnant women, gestational weight gain above or below guideline recommendations, compared with weight gain within recommended levels, was associated with higher risk of adverse maternal and infant outcomes.
Abstract: Importance Body mass index (BMI) and gestational weight gain are increasing globally. In 2009, the Institute of Medicine (IOM) provided specific recommendations regarding the ideal gestational weight gain. However, the association between gestational weight gain consistent with the IOM guidelines and pregnancy outcomes is unclear. Objective To perform a systematic review, meta-analysis, and metaregression to evaluate associations between gestational weight gain above or below the IOM guidelines (gain of 12.5-18 kg for underweight women [BMI <18.5]; 11.5-16 kg for normal-weight women [BMI 18.5-24.9]; 7-11.5 kg for overweight women [BMI 25-29.9]; and 5-9 kg for obese women [BMI ≥30]) and maternal and infant outcomes. Data Sources and Study Selection Search of EMBASE, Evidence-Based Medicine Reviews, MEDLINE, and MEDLINE In-Process between January 1, 1999, and February 7, 2017, for observational studies stratified by prepregnancy BMI category and total gestational weight gain. Data Extraction and Synthesis Data were extracted by 2 independent reviewers. Odds ratios (ORs) and absolute risk differences (ARDs) per live birth were calculated using a random-effects model based on a subset of studies with available data. Main Outcomes and Measures Primary outcomes were small for gestational age (SGA), preterm birth, and large for gestational age (LGA). Secondary outcomes were macrosomia, cesarean delivery, and gestational diabetes mellitus. Results Of 5354 identified studies, 23 (n = 1 309 136 women) met inclusion criteria. Gestational weight gain was below or above guidelines in 23% and 47% of pregnancies, respectively. Gestational weight gain below the recommendations was associated with higher risk of SGA (OR, 1.53 [95% CI, 1.44-1.64]; ARD, 5% [95% CI, 4%-6%]) and preterm birth (OR, 1.70 [1.32-2.20]; ARD, 5% [3%-8%]) and lower risk of LGA (OR, 0.59 [0.55-0.64]; ARD, −2% [−10% to −6%]) and macrosomia (OR, 0.60 [0.52-0.68]; ARD, −2% [−3% to −1%]); cesarean delivery showed no significant difference (OR, 0.98 [0.96-1.02]; ARD, 0% [−2% to 1%]). Gestational weight gain above the recommendations was associated with lower risk of SGA (OR, 0.66 [0.63-0.69]; ARD, −3%; [−4% to −2%]) and preterm birth (OR, 0.77 [0.69-0.86]; ARD, −2% [−2% to −1%]) and higher risk of LGA (OR, 1.85 [1.76-1.95]; ARD, 4% [2%-5%]), macrosomia (OR, 1.95 [1.79-2.11]; ARD, 6% [4%-9%]), and cesarean delivery (OR, 1.30 [1.25-1.35]; ARD, 4% [3%-6%]). Gestational diabetes mellitus could not be evaluated because of the nature of available data. Conclusions and Relevance In this systematic review and meta-analysis of more than 1 million pregnant women, 47% had gestational weight gain greater than IOM recommendations and 23% had gestational weight gain less than IOM recommendations. Gestational weight gain greater than or less than guideline recommendations, compared with weight gain within recommended levels, was associated with higher risk of adverse maternal and infant outcomes.
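
The pooled ORs above come from a random-effects model. The sketch below shows the standard DerSimonian-Laird pooling technique on invented per-study log-ORs and standard errors; it is a generic illustration, not a reproduction of this paper's analysis.

```python
# Generic DerSimonian-Laird random-effects pooling (invented inputs).
import numpy as np

log_or = np.log(np.array([1.4, 1.6, 1.5, 1.7]))   # hypothetical study ORs
se = np.array([0.10, 0.15, 0.08, 0.20])           # their standard errors

w = 1 / se**2                                     # fixed-effect weights
fe_mean = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - fe_mean)**2)             # heterogeneity statistic
df = len(log_or) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (se**2 + tau2)                       # random-effects weights
pooled = np.sum(w_star * log_or) / np.sum(w_star)
se_pooled = np.sqrt(1 / np.sum(w_star))
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI, {np.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{np.exp(pooled + 1.96 * se_pooled):.2f})")
```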

Journal ArticleDOI
27 Jun 2017-JAMA
TL;DR: Advances in the diagnosis and treatment of prostate cancer have improved the ability to stratify patients by risk and allowed clinicians to recommend therapy based on cancer prognosis and patient preference.
Abstract: Importance Prostate cancer is the most common cancer diagnosis made in men with more than 160 000 new cases each year in the United States. Although it often has an indolent course, prostate cancer remains the third-leading cause of cancer death in men. Observations When prostate cancer is suspected, tissue biopsy remains the standard of care for diagnosis. However, the identification and characterization of the disease have become increasingly precise through improved risk stratification and advances in magnetic resonance and functional imaging, as well as from the emergence of biomarkers. Multiple management options now exist for men diagnosed with prostate cancer. Active surveillance (the serial monitoring for disease progression with the intent to cure) appears to be safe and has become the preferred approach for men with less-aggressive prostate cancer, particularly those with a prostate-specific antigen level of less than 10 ng/mL and Gleason score 3 + 3 tumors. Surgery and radiation continue to be curative treatments for localized disease but have adverse effects such as urinary symptoms and sexual dysfunction that can negatively affect quality of life. For metastatic disease, chemotherapy as initial treatment now appears to extend survival compared with androgen deprivation therapy alone. New vaccines, hormonal therapeutics, and bone-targeting agents have demonstrated efficacy in men with metastatic prostate cancer resistant to traditional hormonal therapy. Conclusions and Relevance Advances in the diagnosis and treatment of prostate cancer have improved the ability to stratify patients by risk and allowed clinicians to recommend therapy based on cancer prognosis and patient preference. Initial treatment with chemotherapy can improve survival compared with androgen deprivation therapy. Abiraterone, enzalutamide, and other agents can improve outcomes in men with metastatic prostate cancer resistant to traditional hormonal therapy.

Journal ArticleDOI
10 Oct 2017-JAMA
TL;DR: This Users’ Guide will help clinicians understand the available metrics for assessing discrimination, calibration, and the relative performance of different prediction models, so that they can make optimal use of existing prediction models.
Abstract: Accurate information regarding prognosis is fundamental to optimal clinical care. The best approach to assess patient prognosis relies on prediction models that simultaneously consider a number of prognostic factors and provide an estimate of patients' absolute risk of an event. Such prediction models should be characterized by adequately discriminating between patients who will have an event and those who will not and by adequate calibration ensuring accurate prediction of absolute risk. This Users' Guide will help clinicians understand the available metrics for assessing discrimination, calibration, and the relative performance of different prediction models. This article complements existing Users' Guides that address the development and validation of prediction models. Together, these guides will help clinicians to make optimal use of existing prediction models.
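
The two properties this guide centers on, discrimination and calibration, each have a standard numeric summary. The sketch below computes a C statistic (discrimination) and a calibration slope (calibration) on synthetic predictions; a slope near 1 indicates well-calibrated risks.

```python
# Sketch: C statistic (discrimination) and calibration slope (calibration)
# for a risk-prediction model, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
p_hat = rng.uniform(0.01, 0.6, 2000)               # model-predicted risks
y = (rng.uniform(size=2000) < p_hat).astype(int)   # outcomes drawn from them

c_stat = roc_auc_score(y, p_hat)                   # discrimination

logit = np.log(p_hat / (1 - p_hat)).reshape(-1, 1)
slope = LogisticRegression().fit(logit, y).coef_[0, 0]   # ~1 = well calibrated
print(f"C statistic {c_stat:.2f}, calibration slope {slope:.2f}")
```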

Journal ArticleDOI
18 Apr 2017-JAMA
TL;DR: Results suggest that patients with nodding syndrome may benefit from immunomodulatory therapies, and that antibodies generated to fight the parasitic infection could inadvertently attack brain cells expressing leiomodin-1, causing an autoimmune reaction that gives rise to nodding syndrome.
Abstract: Potential Cause of Nodding Syndrome Identified

Nodding syndrome, a rare form of epilepsy that has affected thousands of children between the ages of 5 and 15 years in East Africa, may be caused by an inappropriate immune reaction to the parasitic worm, Onchocerca volvulus, that causes onchocerciasis (river blindness), according to a report published in Science Translational Medicine. First documented in Tanzania in the 1960s, nodding syndrome has remained an untreatable disease characterized by seizures, neurological deterioration, and a high rate of death. Although the cause of nodding syndrome has been elusive, an increase in the condition in areas where the parasite O volvulus is endemic suggests that infection with the worm plays a role in disease pathogenesis. Using a protein chip method, investigators from the United States and Uganda analyzed serum specimens and cerebrospinal fluid (CSF) from children with nodding syndrome and from unaffected villagers (control group) in Uganda and South Sudan. They detected higher levels of autoantibodies to leiomodin-1—a protein normally found in neurons—in the serum samples and CSF of patients with nodding syndrome than in samples from the unaffected control subjects. They also showed that leiomodin-1 autoantibodies were toxic to human neurons grown in culture and that leiomodin-1 is expressed in neurons in regions of the mouse brain that correspond to those affected in humans with nodding syndrome. Several O volvulus proteins are similar to leiomodin-1, and leiomodin-1 autoantibodies isolated from patients with nodding syndrome cross-reacted with these homologous parasite proteins. These findings suggest that antibodies generated to fight the parasitic infection could inadvertently attack brain cells expressing leiomodin-1, causing an autoimmune reaction that gives rise to nodding syndrome. Although additional investigation is needed to further unravel the pathophysiology of the disease, these results suggest that patients with nodding syndrome may benefit from immunomodulatory therapies.

Depression Is the Leading Cause of Disability Around the World

The number of people living with depression globally is estimated to be 322 million—4.4% of the world’s population—according to a new report, “Depression and Other Common Mental Disorders: Global Health Estimates,” released by the World Health Organization. The report also includes data on anxiety disorders, which affect more than 260 million people—3.6% of the global population. The prevalence of these common mental disorders is increasing, particularly in low- and middle-income countries, with many people experiencing both depression and anxiety disorders simultaneously. According to the report, the number of people in the world living with depression increased by 18.4% between 2005 and 2015, and depressive disorders were the single largest contributor to nonfatal health loss globally in 2015. Although depression can affect anyone at any point in their lives, it is 1.5 times more common in women than men. Almost half of the people living with depression reside in the South-East Asia and Western Pacific regions. Poverty, unemployment, life events, and illness increase the risk of depression. The prevalence of anxiety disorders, which include generalized anxiety disorder, panic disorder, posttraumatic stress disorder, and obsessive-compulsive disorder, increased by 14.9% globally between 2005 and 2015 and was particularly high in the Americas. Anxiety disorders are now the sixth largest cause of disability and, as with depressive disorders, more women than men are affected.

Journal ArticleDOI
17 Jan 2017-JAMA
TL;DR: Among adults with suspected infection admitted to an ICU, an increase in SOFA score of 2 or more had greater prognostic accuracy for in-hospital mortality than SIRS criteria or the qSOFA score, suggesting that SIRS criteria and qSOFA may have limited utility for predicting mortality in an ICU setting.
Abstract: Importance The Sepsis-3 Criteria emphasized the value of a change of 2 or more points in the Sequential [Sepsis-related] Organ Failure Assessment (SOFA) score, introduced quick SOFA (qSOFA), and removed the systemic inflammatory response syndrome (SIRS) criteria from the sepsis definition. Objective Externally validate and assess the discriminatory capacities of an increase in SOFA score by 2 or more points, 2 or more SIRS criteria, or a qSOFA score of 2 or more points for outcomes among patients who are critically ill with suspected infection. Design, Setting, and Participants Retrospective cohort analysis of 184 875 patients with an infection-related primary admission diagnosis in 182 Australian and New Zealand intensive care units (ICUs) from 2000 through 2015. Exposures SOFA, qSOFA, and SIRS criteria applied to data collected within 24 hours of ICU admission. Main Outcomes and Measures The primary outcome was in-hospital mortality. In-hospital mortality or ICU length of stay (LOS) of 3 days or more was a composite secondary outcome. Discrimination was assessed using the area under the receiver operating characteristic curve (AUROC). Adjusted analyses were performed using a model of baseline risk determined using variables independent of the scoring systems. Results Among 184 875 patients (mean age, 62.9 years [SD, 17.4]; women, 82 540 [44.6%]; most common diagnosis, bacterial pneumonia, 32 634 [17.7%]), a total of 34 578 patients (18.7%) died in the hospital, and 102 976 patients (55.7%) died or experienced an ICU LOS of 3 days or more. SOFA score increased by 2 or more points in 90.1%; 86.7% manifested 2 or more SIRS criteria, and 54.4% had a qSOFA score of 2 or more points. SOFA demonstrated significantly greater discrimination for in-hospital mortality (crude AUROC, 0.753 [99% CI, 0.750-0.757]) than SIRS criteria (crude AUROC, 0.589 [99% CI, 0.585-0.593]) or qSOFA (crude AUROC, 0.607 [99% CI, 0.603-0.611]). Incremental improvements were 0.164 (99% CI, 0.159-0.169) for SOFA vs SIRS criteria and 0.146 (99% CI, 0.142-0.151) for SOFA vs qSOFA (P < .001). Conclusions and Relevance Among adults with suspected infection admitted to an ICU, an increase in SOFA score of 2 or more had greater prognostic accuracy for in-hospital mortality than SIRS criteria or the qSOFA score. These findings suggest that SIRS criteria and qSOFA may have limited utility for predicting mortality in an ICU setting.
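
The qSOFA score evaluated here is simple enough to state in code: one point each for respiratory rate of 22/min or more, systolic blood pressure of 100 mm Hg or less, and altered mentation (Glasgow Coma Scale score below 15), with a score of 2 or more as the studied flag.

```python
# qSOFA per the Sepsis-3 definition: one point per criterion met.
def qsofa(resp_rate: float, sbp: float, gcs: int) -> int:
    return (resp_rate >= 22) + (sbp <= 100) + (gcs < 15)

print(qsofa(resp_rate=24, sbp=95, gcs=15))   # 2 -> meets the >=2 threshold
```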

Journal ArticleDOI
24 Jan 2017-JAMA
TL;DR: Among adults with type 1 diabetes who used multiple daily insulin injections, the use of CGM compared with usual care resulted in a greater decrease in HbA1c level during 24 weeks.
Abstract: Importance Previous clinical trials showing the benefit of continuous glucose monitoring (CGM) in the management of type 1 diabetes predominantly have included adults using insulin pumps, even though the majority of adults with type 1 diabetes administer insulin by injection. Objective To determine the effectiveness of CGM in adults with type 1 diabetes treated with insulin injections. Design, Setting, and Participants Randomized clinical trial conducted between October 2014 and May 2016 at 24 endocrinology practices in the United States that included 158 adults with type 1 diabetes who were using multiple daily insulin injections and had hemoglobin A1c (HbA1c) levels of 7.5% to 9.9%. Interventions Random assignment 2:1 to CGM (n = 105) or usual care (control group; n = 53). Main Outcomes and Measures Primary outcome measure was the difference in change in central-laboratory–measured HbA1c level from baseline to 24 weeks. There were 18 secondary or exploratory end points, of which 15 are reported in this article, including duration of hypoglycemia at less than 70 mg/dL, measured with CGM for 7 days at 12 and 24 weeks. Results Among the 158 randomized participants (mean age, 48 years [SD, 13]; 44% women; mean baseline HbA1c level, 8.6% [SD, 0.6%]; and median diabetes duration, 19 years [interquartile range, 10-31 years]), 155 (98%) completed the study. In the CGM group, 93% used CGM 6 d/wk or more in month 6. Mean HbA1c reduction from baseline was 1.1% at 12 weeks and 1.0% at 24 weeks in the CGM group and 0.5% and 0.4%, respectively, in the control group (repeated-measures model P < .001). Mean difference in change in HbA1c level from baseline was –0.6% (95% CI, –0.8% to –0.3%; P < .001), and duration of hypoglycemia at less than 70 mg/dL was lower in the CGM group (P = .002). Severe hypoglycemia events occurred in 2 participants in each group. Conclusions and Relevance Among adults with type 1 diabetes who used multiple daily insulin injections, the use of CGM compared with usual care resulted in a greater decrease in HbA1c level during 24 weeks. Further research is needed to assess longer-term effectiveness, as well as clinical outcomes and adverse effects. Trial Registration clinicaltrials.gov Identifier:NCT02282397
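
The "duration of hypoglycemia at less than 70 mg/dL" end point above is derived directly from the CGM trace. A minimal sketch, assuming 5-minute sampling and a synthetic one-day glucose series:

```python
# Sketch: time below 70 mg/dL from a CGM trace sampled every 5 minutes.
import numpy as np

rng = np.random.default_rng(11)
glucose = rng.normal(140, 40, 288)        # one synthetic day of 5-min readings

minutes_low = 5 * np.sum(glucose < 70)    # 5 minutes per reading below threshold
print(f"time <70 mg/dL: {minutes_low} min/day")
```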

Journal ArticleDOI
24 Oct 2017-JAMA
TL;DR: These findings suggest that robotic-assisted laparoscopic surgery, when performed by surgeons with varying experience with robotic surgery, does not confer an advantage in rectal cancer resection.
Abstract: Importance Robotic rectal cancer surgery is gaining popularity, but limited data are available regarding safety and efficacy. Objective To compare robotic-assisted vs conventional laparoscopic surgery for risk of conversion to open laparotomy among patients undergoing resection for rectal cancer. Design, Setting, and Participants Randomized clinical trial comparing robotic-assisted vs conventional laparoscopic surgery among 471 patients with rectal adenocarcinoma suitable for curative resection conducted at 29 sites across 10 countries, including 40 surgeons. Recruitment of patients was from January 7, 2011, to September 30, 2014, follow-up was conducted at 30 days and 6 months, and final follow-up was on June 16, 2015. Interventions Patients were randomized to robotic-assisted (n = 237) or conventional (n = 234) laparoscopic rectal cancer resection, performed by either high (upper rectum) or low (total rectum) anterior resection or abdominoperineal resection (rectum and perineum). Main Outcomes and Measures The primary outcome was conversion to open laparotomy. Secondary end points included intraoperative and postoperative complications, circumferential resection margin positivity (CRM+) and other pathological outcomes, quality of life (36-Item Short Form Survey and 20-item Multidimensional Fatigue Inventory), bladder and sexual dysfunction (International Prostate Symptom Score, International Index of Erectile Function, and Female Sexual Function Index), and oncological outcomes. Results Among 471 randomized patients (mean [SD] age, 64.9 [11.0] years; 320 [67.9%] men), 466 (98.9%) completed the study. The overall rate of conversion to open laparotomy was 10.1%: 19 of 236 patients (8.1%) in the robotic-assisted laparoscopic group and 28 of 230 patients (12.2%) in the conventional laparoscopic group (unadjusted risk difference = 4.1% [95% CI, −1.4% to 9.6%]; adjusted odds ratio = 0.61 [95% CI, 0.31 to 1.21];P = .16). The overall CRM+ rate was 5.7%; CRM+ occurred in 14 (6.3%) of 224 patients in the conventional laparoscopic group and 12 (5.1%) of 235 patients in the robotic-assisted laparoscopic group (unadjusted risk difference = 1.1% [95% CI, −3.1% to 5.4%]; adjusted odds ratio = 0.78 [95% CI, 0.35 to 1.76];P = .56). Of the other 8 reported prespecified secondary end points, including intraoperative complications, postoperative complications, plane of surgery, 30-day mortality, bladder dysfunction, and sexual dysfunction, none showed a statistically significant difference between groups. Conclusions and Relevance Among patients with rectal adenocarcinoma suitable for curative resection, robotic-assisted laparoscopic surgery, as compared with conventional laparoscopic surgery, did not significantly reduce the risk of conversion to open laparotomy. These findings suggest that robotic-assisted laparoscopic surgery, when performed by surgeons with varying experience with robotic surgery, does not confer an advantage in rectal cancer resection. Trial Registration isrctn.org Identifier:ISRCTN80500123

Journal ArticleDOI
25 Jul 2017-JAMA
TL;DR: In a convenience sample of deceased football players who donated their brains for research, a high proportion had neuropathological evidence of CTE, suggesting that CTE may be related to prior participation in football.
Abstract: Importance Players of American football may be at increased risk of long-term neurological conditions, particularly chronic traumatic encephalopathy (CTE). Objective To determine the neuropathological and clinical features of deceased football players with CTE. Design, Setting, and Participants Case series of 202 football players whose brains were donated for research. Neuropathological evaluations and retrospective telephone clinical assessments (including head trauma history) with informants were performed blinded. Online questionnaires ascertained athletic and military history. Exposures Participation in American football at any level of play. Main Outcomes and Measures Neuropathological diagnoses of neurodegenerative diseases, including CTE, based on defined diagnostic criteria; CTE neuropathological severity (stages I to IV or dichotomized into mild [stages I and II] and severe [stages III and IV]); informant-reported athletic history and, for players who died in 2014 or later, clinical presentation, including behavior, mood, and cognitive symptoms and dementia. Results Among 202 deceased former football players (median age at death, 66 years [interquartile range, 47-76 years]), CTE was neuropathologically diagnosed in 177 players (87%; median age at death, 67 years [interquartile range, 52-77 years]; mean years of football participation, 15.1 [SD, 5.2]), including 0 of 2 pre–high school, 3 of 14 high school (21%), 48 of 53 college (91%), 9 of 14 semiprofessional (64%), 7 of 8 Canadian Football League (88%), and 110 of 111 National Football League (99%) players. Neuropathological severity of CTE was distributed across the highest level of play, with all 3 former high school players having mild pathology and the majority of former college (27 [56%]), semiprofessional (5 [56%]), and professional (101 [86%]) players having severe pathology. Among 27 participants with mild CTE pathology, 26 (96%) had behavioral or mood symptoms or both, 23 (85%) had cognitive symptoms, and 9 (33%) had signs of dementia. Among 84 participants with severe CTE pathology, 75 (89%) had behavioral or mood symptoms or both, 80 (95%) had cognitive symptoms, and 71 (85%) had signs of dementia. Conclusions and Relevance In a convenience sample of deceased football players who donated their brains for research, a high proportion had neuropathological evidence of CTE, suggesting that CTE may be related to prior participation in football.

Journal ArticleDOI
07 Mar 2017-JAMA
TL;DR: A comparative risk assessment model was used to estimate associations of intake of 10 specific dietary factors with mortality due to heart disease, stroke, and type 2 diabetes (cardiometabolic mortality) among US adults.
Abstract: Importance In the United States, national associations of individual dietary factors with specific cardiometabolic diseases are not well established. Objective To estimate associations of intake of 10 specific dietary factors with mortality due to heart disease, stroke, and type 2 diabetes (cardiometabolic mortality) among US adults. Design, Setting, and Participants A comparative risk assessment model incorporated data and corresponding uncertainty on population demographics and dietary habits from National Health and Nutrition Examination Surveys (1999-2002: n = 8104; 2009-2012: n = 8516); estimated associations of diet and disease from meta-analyses of prospective studies and clinical trials with validity analyses to assess potential bias; and estimated disease-specific national mortality from the National Center for Health Statistics. Exposures Consumption of 10 foods/nutrients associated with cardiometabolic diseases: fruits, vegetables, nuts/seeds, whole grains, unprocessed red meats, processed meats, sugar-sweetened beverages (SSBs), polyunsaturated fats, seafood omega-3 fats, and sodium. Main Outcomes and Measures Estimated absolute and percentage mortality due to heart disease, stroke, and type 2 diabetes in 2012. Disease-specific and demographic-specific (age, sex, race, and education) mortality and trends between 2002 and 2012 were also evaluated. Results In 2012, 702 308 cardiometabolic deaths occurred in US adults, including 506 100 from heart disease (371 266 coronary heart disease, 35 019 hypertensive heart disease, and 99 815 other cardiovascular disease), 128 294 from stroke (16 125 ischemic, 32 591 hemorrhagic, and 79 578 other), and 67 914 from type 2 diabetes. Of these, an estimated 318 656 (95% uncertainty interval [UI], 306 064-329 755; 45.4%) cardiometabolic deaths per year were associated with suboptimal intakes—48.6% (95% UI, 46.2%-50.9%) of cardiometabolic deaths in men and 41.8% (95% UI, 39.3%-44.2%) in women; 64.2% (95% UI, 60.6%-67.9%) at younger ages (25-34 years) and 35.7% (95% UI, 33.1%-38.1%) at older ages (≥75 years); 53.1% (95% UI, 51.6%-54.8%) among blacks, 50.0% (95% UI, 48.2%-51.8%) among Hispanics, and 42.8% (95% UI, 40.9%-44.5%) among whites; and 46.8% (95% UI, 44.9%-48.7%) among lower-, 45.7% (95% UI, 44.2%-47.4%) among medium-, and 39.1% (95% UI, 37.2%-41.2%) among higher-educated individuals. The largest numbers of estimated diet-related cardiometabolic deaths were related to high sodium (66 508 deaths in 2012; 9.5% of all cardiometabolic deaths), low nuts/seeds (59 374; 8.5%), high processed meats (57 766; 8.2%), low seafood omega-3 fats (54 626; 7.8%), low vegetables (53 410; 7.6%), low fruits (52 547; 7.5%), and high SSBs (51 694; 7.4%). Between 2002 and 2012, population-adjusted US cardiometabolic deaths per year decreased by 26.5%. The greatest decline was associated with insufficient polyunsaturated fats (−20.8% relative change [95% UI, −18.5% to −22.8%]), nuts/seeds (−18.0% [95% UI, −14.6% to −21.0%]), and excess SSBs (−14.5% [95% UI, −12.0% to −16.9%]). The greatest increase was associated with unprocessed red meats (+14.4% [95% UI, 9.1%-19.5%]). Conclusions and Relevance Dietary factors were estimated to be associated with a substantial proportion of deaths from heart disease, stroke, and type 2 diabetes. These results should help identify priorities, guide public health planning, and inform strategies to alter dietary habits and improve health.
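
At the core of a comparative risk assessment like this one is the population attributable fraction (PAF): the share of deaths attributable to an exposure given its prevalence and relative risk. The sketch below shows the standard formula on invented inputs; the prevalence, relative risk, and death count are hypothetical, not the study's estimates.

```python
# Sketch: population attributable fraction, PAF = P(RR-1) / (P(RR-1) + 1).
def paf(prevalence: float, relative_risk: float) -> float:
    excess = prevalence * (relative_risk - 1)
    return excess / (excess + 1)

deaths_from_cause = 500_000                          # hypothetical
fraction = paf(prevalence=0.30, relative_risk=1.5)   # hypothetical exposure
print(f"PAF = {fraction:.1%}; "
      f"attributable deaths ≈ {fraction * deaths_from_cause:,.0f}")
```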

Journal ArticleDOI
10 Oct 2017-JAMA
TL;DR: In patients with moderate to severe ARDS, a strategy with lung recruitment and titrated PEEP compared with low PEEP increased 28-day all-cause mortality, and these findings do not support the routine use of lung recruitment maneuver and PEEP titration in these patients.
Abstract: Importance The effects of recruitment maneuvers and positive end-expiratory pressure (PEEP) titration on clinical outcomes in patients with acute respiratory distress syndrome (ARDS) remain uncertain. Objective To determine if lung recruitment associated with PEEP titration according to the best respiratory-system compliance decreases 28-day mortality of patients with moderate to severe ARDS compared with a conventional low-PEEP strategy. Design, Setting, and Participants Multicenter, randomized trial conducted at 120 intensive care units (ICUs) from 9 countries from November 17, 2011, through April 25, 2017, enrolling adults with moderate to severe ARDS. Interventions An experimental strategy with a lung recruitment maneuver and PEEP titration according to the best respiratory–system compliance (n = 501; experimental group) or a control strategy of low PEEP (n = 509). All patients received volume-assist control mode until weaning. Main Outcomes and Measures The primary outcome was all-cause mortality until 28 days. Secondary outcomes were length of ICU and hospital stay; ventilator-free days through day 28; pneumothorax requiring drainage within 7 days; barotrauma within 7 days; and ICU, in-hospital, and 6-month mortality. Results A total of 1010 patients (37.5% female; mean [SD] age, 50.9 [17.4] years) were enrolled and followed up. At 28 days, 277 of 501 patients (55.3%) in the experimental group and 251 of 509 patients (49.3%) in the control group had died (hazard ratio [HR], 1.20; 95% CI, 1.01 to 1.42; P = .041). Compared with the control group, the experimental group strategy increased 6-month mortality (65.3% vs 59.9%; HR, 1.18; 95% CI, 1.01 to 1.38; P = .04), decreased the number of mean ventilator-free days (5.3 vs 6.4; difference, −1.1; 95% CI, −2.1 to −0.1; P = .03), increased the risk of pneumothorax requiring drainage (3.2% vs 1.2%; difference, 2.0%; 95% CI, 0.0% to 4.0%; P = .03), and the risk of barotrauma (5.6% vs 1.6%; difference, 4.0%; 95% CI, 1.5% to 6.5%; P = .001). There were no significant differences in the length of ICU stay, length of hospital stay, ICU mortality, and in-hospital mortality. Conclusions and Relevance In patients with moderate to severe ARDS, a strategy with lung recruitment and titrated PEEP compared with low PEEP increased 28-day all-cause mortality. These findings do not support the routine use of lung recruitment maneuver and PEEP titration in these patients. Trial Registration clinicaltrials.gov Identifier:NCT01374022
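
The titration rule tested in the experimental arm can be sketched in code: during a decremental PEEP trial, compute static respiratory-system compliance (tidal volume divided by driving pressure) at each step and keep the PEEP that maximizes it. The pressures and volumes below are illustrative numbers, not trial data.

```python
# Sketch: pick PEEP by best static compliance, C = VT / (Pplat - PEEP).
def compliance(tidal_volume_ml: float, plateau_cm_h2o: float,
               peep_cm_h2o: float) -> float:
    return tidal_volume_ml / (plateau_cm_h2o - peep_cm_h2o)

# decremental PEEP trial: PEEP (cm H2O) -> measured plateau pressure
trial = {23: 38.0, 20: 33.5, 17: 31.5, 14: 30.0}
best_peep = max(trial, key=lambda peep: compliance(420, trial[peep], peep))
print(f"selected PEEP = {best_peep} cm H2O")   # 20 with these numbers
```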


Journal ArticleDOI
20 Jun 2017-JAMA
TL;DR: The USPSTF concluded with moderate certainty that screening for obesity in children and adolescents 6 years and older is of moderate net benefit; clinicians should offer or refer affected children and adolescents to comprehensive, intensive behavioral interventions to promote improvements in weight status.
Abstract: Importance Based on year 2000 Centers for Disease Control and Prevention growth charts, approximately 17% of children and adolescents aged 2 to 19 years in the United States have obesity, and almost 32% of children and adolescents are overweight or have obesity. Obesity in children and adolescents is associated with morbidity such as mental health and psychological issues, asthma, obstructive sleep apnea, orthopedic problems, and adverse cardiovascular and metabolic outcomes (eg, high blood pressure, abnormal lipid levels, and insulin resistance). Children and adolescents may also experience teasing and bullying behaviors based on their weight. Obesity in childhood and adolescence may continue into adulthood and lead to adverse cardiovascular outcomes or other obesity-related morbidity, such as type 2 diabetes. Subpopulation Considerations Although the overall rate of child and adolescent obesity has stabilized over the last decade after increasing steadily for 3 decades, obesity rates continue to increase in certain populations, such as African American girls and Hispanic boys. These racial/ethnic differences in obesity prevalence are likely a result of both genetic and nongenetic factors (eg, socioeconomic status, intake of sugar-sweetened beverages and fast food, and having a television in the bedroom). Objective To update the 2010 US Preventive Services Task Force (USPSTF) recommendation on screening for obesity in children 6 years and older. Evidence Review The USPSTF reviewed the evidence on screening for obesity in children and adolescents and the benefits and harms of weight management interventions. Findings Comprehensive, intensive behavioral interventions (≥26 contact hours) in children and adolescents 6 years and older who have obesity can result in improvements in weight status for up to 12 months; there is inadequate evidence regarding the effectiveness of less intensive interventions. The harms of behavioral interventions can be bounded as small to none, and the harms of screening are minimal. Therefore, the USPSTF concluded with moderate certainty that screening for obesity in children and adolescents 6 years and older is of moderate net benefit. Conclusions and Recommendation The USPSTF recommends that clinicians screen for obesity in children and adolescents 6 years and older and offer or refer them to comprehensive, intensive behavioral interventions to promote improvements in weight status. (B recommendation)
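The screening rule underlying the recommendation is a BMI-percentile cutoff: obesity is an age- and sex-specific BMI at or above the 95th percentile on the CDC year-2000 growth charts (overweight, at or above the 85th). A minimal sketch follows; the percentile cutoffs passed in are hypothetical stand-ins for the real CDC lookup tables.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def classify(bmi_value: float, p95: float, p85: float) -> str:
    """Classify weight status against age/sex-specific CDC percentile cutoffs."""
    if bmi_value >= p95:
        return "obesity"
    if bmi_value >= p85:
        return "overweight"
    return "not overweight"

# Hypothetical cutoffs for illustration only (not real CDC chart values).
print(classify(bmi(45.0, 1.40), p95=22.2, p85=19.4))  # -> "obesity"
```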

Journal ArticleDOI
20 Jun 2017-JAMA
TL;DR: Among patients with KRAS wt untreated advanced or metastatic colorectal cancer, there was no significant difference in overall survival between the addition of cetuximab vs bevacizumab to chemotherapy as initial biologic treatment.
Abstract: Importance Combining biologic monoclonal antibodies with chemotherapeutic cytotoxic drugs provides clinical benefit to patients with advanced or metastatic colorectal cancer, but the optimal choice of the initial biologic therapy in previously untreated patients is unknown. Objective To determine if the addition of cetuximab vs bevacizumab to the combination of leucovorin, fluorouracil, and oxaliplatin (mFOLFOX6) regimen or the combination of leucovorin, fluorouracil, and irinotecan (FOLFIRI) regimen is superior as first-line therapy in advanced or metastatic KRAS wild-type (wt) colorectal cancer. Design, Setting, and Participants Patients (≥18 years) enrolled at community and academic centers throughout the National Clinical Trials Network in the United States and Canada (November 2005-March 2012) with previously untreated advanced or metastatic colorectal cancer whose tumors were KRAS wt chose either the mFOLFOX6 regimen or the FOLFIRI regimen as chemotherapy and were randomized to receive either cetuximab (n = 578) or bevacizumab (n = 559). The last date of follow-up was December 15, 2015. Interventions Cetuximab vs bevacizumab combined with either the mFOLFOX6 or FOLFIRI chemotherapy regimen chosen by the treating physician and patient. Main Outcomes and Measures The primary end point was overall survival. Secondary objectives included progression-free survival and overall response rate (site-reported confirmed or unconfirmed complete or partial response). Results Among 1137 patients (median age, 59 years; 440 [39%] women), 1074 (94%) met eligibility criteria. As of December 15, 2015, median follow-up for the 263 surviving patients was 47.4 months (range, 0-110.7 months), and 82% of patients (938 of 1137) experienced disease progression. The median overall survival was 30.0 months in the cetuximab-chemotherapy group and 29.0 months in the bevacizumab-chemotherapy group, with a stratified hazard ratio (HR) of 0.88 (95% CI, 0.77-1.01; P = .08). The median progression-free survival was 10.5 months in the cetuximab-chemotherapy group and 10.6 months in the bevacizumab-chemotherapy group, with a stratified HR of 0.95 (95% CI, 0.84-1.08; P = .45). Response rates were not significantly different: 59.6% vs 55.2% for cetuximab and bevacizumab, respectively (difference, 4.4%; 95% CI, −1.0% to 9.0%; P = .13). Conclusions and Relevance Among patients with KRAS wt untreated advanced or metastatic colorectal cancer, there was no significant difference in overall survival between the addition of cetuximab vs bevacizumab to chemotherapy as initial biologic treatment. Trial Registration clinicaltrials.gov identifier:NCT00265850
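The reported response-rate comparison can be approximately reproduced with a two-proportion z test, assuming all randomized patients are the denominators (the trial's evaluable population may differ slightly):

```python
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int):
    """Difference in proportions with a Wald 95% CI and a two-sided P value."""
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = d / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return d, d - 1.96 * se, d + 1.96 * se, p_value

d, lo, hi, p = two_prop_z(0.596, 578, 0.552, 559)
print(f"difference = {d:.1%} (95% CI, {lo:.1%} to {hi:.1%}); P = {p:.2f}")
# ~4.4% (-1.3% to 10.1%); P ~ .13, consistent with the reported P value
```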

Journal ArticleDOI
07 Feb 2017-JAMA
TL;DR: Patients with pituitary adenomas should be identified at an early stage so that effective treatment can be implemented; measurement of a late-night salivary cortisol level is the best screening test.
Abstract: Importance Pituitary adenomas may hypersecrete hormones or cause mass effects. Therefore, early diagnosis and treatment are important. Observations Prevalence of pituitary adenomas ranges from 1 in 865 adults to 1 in 2688 adults. Approximately 50% are microadenomas (<10 mm). Conclusions and Relevance Patients with pituitary adenomas should be identified at an early stage so that effective treatment can be implemented. For prolactinomas, initial therapy is generally dopamine agonists. For all other pituitary adenomas, initial therapy is generally transsphenoidal surgery, with medical therapy reserved for those not cured by surgery.

Journal ArticleDOI
08 Aug 2017-JAMA
TL;DR: In this Viewpoint, the potential unintended consequences that may result from the application of ML-DSS in clinical practice are considered.
Abstract: Over the past decade, machine learning techniques have made substantial advances in many domains. In health care, global interest in the potential of machine learning has increased; for example, a deep learning algorithm has shown high accuracy in detecting diabetic retinopathy.1 There have been suggestions that machine learning will drive changes in health care within a few years, specifically in medical disciplines that require more accurate prognostic models (eg, oncology) and those based on pattern recognition (eg, radiology and pathology). However, comparative studies on the effectiveness of machine learning–based decision support systems (ML-DSS) in medicine are lacking, especially regarding the effects on health outcomes. Moreover, the introduction of new technologies in health care has not always been straightforward or without unintended and adverse effects.2 In this Viewpoint we consider the potential unintended consequences that may result from the application of ML-DSS in clinical practice.

Journal ArticleDOI
26 Sep 2017-JAMA
TL;DR: Reserving use of antipsychotics and other sedating medications for treatment of severe agitation that poses risk to patient or staff safety or threatens interruption of essential medical therapies is recommended.
Abstract: Importance Delirium is defined as an acute disorder of attention and cognition. It is a common, serious, and often fatal condition among older patients. Although often underrecognized, delirium has serious adverse effects on the individual’s function and quality of life, as well as broad societal effects with substantial health care costs. Objective To summarize the current state of the art in diagnosis and treatment of delirium and to highlight critical areas for future research to advance the field. Evidence Review Search of Ovid MEDLINE, Embase, and the Cochrane Library for the past 6 years, from January 1, 2011, until March 16, 2017, using a combination of controlled vocabulary and keyword terms. Because delirium is more prevalent in older adults, the focus was on studies in elderly populations; studies based solely in the intensive care unit (ICU) and non–English-language articles were excluded. Findings Of 127 articles included, 25 were clinical trials, 42 were cohort studies, 5 were systematic reviews or meta-analyses, and 55 fell into other categories. A total of 11 616 patients were represented in the treatment studies. Advances in diagnosis have included the development of brief screening tools with high sensitivity and specificity, such as the 3-Minute Diagnostic Assessment and the 4 A’s Test, as well as proxy-based measures such as the Family Confusion Assessment Method. Measures of severity, such as the Confusion Assessment Method–Severity Score, can aid in monitoring response to treatment, risk stratification, and assessing prognosis. Nonpharmacologic approaches focused on risk factors such as immobility, functional decline, visual or hearing impairment, dehydration, and sleep deprivation are effective for delirium prevention and are also recommended for delirium treatment. Current recommendations for pharmacologic treatment of delirium, based on recent reviews of the evidence, reserve antipsychotics and other sedating medications for treatment of severe agitation that poses risk to patient or staff safety or threatens interruption of essential medical therapies. Conclusions and Relevance Advances in diagnosis can improve recognition and risk stratification of delirium. Prevention of delirium using nonpharmacologic approaches is documented to be effective, while pharmacologic prevention and treatment of delirium remain controversial.
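Of the screening tools named above, the 4 A's Test (4AT) is simple enough to encode directly. The sketch below follows the commonly described scoring (total 0-12; a score of 4 or more suggests possible delirium); item wording is paraphrased from memory, so consult the published instrument before any clinical use.

```python
def four_at(alertness_abnormal: bool, amt4_errors: int,
            attention_untestable: bool, months_recited_backward: int,
            acute_change_or_fluctuation: bool) -> int:
    """Approximate 4AT total score; >=4 suggests possible delirium."""
    score = 4 if alertness_abnormal else 0                     # item 1: alertness
    score += 0 if amt4_errors == 0 else (1 if amt4_errors == 1 else 2)  # item 2: AMT4
    if attention_untestable:                                   # item 3: months backward
        score += 2
    elif months_recited_backward < 7:
        score += 1
    score += 4 if acute_change_or_fluctuation else 0           # item 4: acute change
    return score

s = four_at(False, 1, False, 5, True)
print(f"4AT = {s} -> {'possible delirium' if s >= 4 else 'delirium unlikely'}")
```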

Journal ArticleDOI
25 Apr 2017-JAMA
TL;DR: Among patients undergoing noncardiac surgery, peak postoperative hsTnT during the first 3 days after surgery was significantly associated with 30-day mortality, and potential diagnostic criteria for MINS were identified.
Abstract: Importance Little is known about the relationship between perioperative high-sensitivity troponin T (hsTnT) measurements and 30-day mortality and myocardial injury after noncardiac surgery (MINS). Objective To determine the association between perioperative hsTnT measurements and 30-day mortality and potential diagnostic criteria for MINS (ie, myocardial injury due to ischemia associated with 30-day mortality). Design, Setting, and Participants Prospective cohort study of patients aged 45 years or older who underwent inpatient noncardiac surgery and had a postoperative hsTnT measurement. Starting in October 2008, participants were recruited at 23 centers in 13 countries; follow-up finished in December 2013. Exposures Patients had hsTnT measurements 6 to 12 hours after surgery and daily for 3 days; 40.4% had a preoperative hsTnT measurement. Main Outcomes and Measures A modified Mazumdar approach (an iterative process) was used to determine whether there were hsTnT thresholds that were associated with risk of death, had an adjusted hazard ratio (HR) of 3.0 or higher, and carried a risk of 30-day mortality of 3% or higher. To determine potential diagnostic criteria for MINS, regression analyses ascertained whether postoperative hsTnT elevations required an ischemic feature (eg, ischemic symptom or electrocardiography finding) to be associated with 30-day mortality. Results Among 21 842 participants, the mean age was 63.1 (SD, 10.7) years and 49.1% were female. Death within 30 days after surgery occurred in 266 patients (1.2%; 95% CI, 1.1%-1.4%). Multivariable analysis demonstrated that, compared with the reference group (peak hsTnT <5 ng/L), higher peak postoperative hsTnT thresholds were associated with progressively greater 30-day mortality. Conclusions and Relevance Among patients undergoing noncardiac surgery, peak postoperative hsTnT during the first 3 days after surgery was significantly associated with 30-day mortality. Elevated postoperative hsTnT without an ischemic feature was also associated with 30-day mortality.
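The threshold analysis can be mimicked with a simple scan over candidate hsTnT cutoffs on simulated data, keeping cutoffs where patients at or above the cutoff meet both criteria. Here a crude risk ratio stands in for the adjusted hazard ratio the study used, and all data are simulated.

```python
import random

random.seed(0)

def simulate_patient():
    t = random.lognormvariate(2.5, 1.2)           # peak hsTnT in ng/L (simulated)
    died = random.random() < min(0.30, t / 3000)  # mortality risk rises with hsTnT
    return t, died

patients = [simulate_patient() for _ in range(5000)]

for cutoff in (5, 14, 20, 65, 1000):
    above = [died for t, died in patients if t >= cutoff]
    below = [died for t, died in patients if t < cutoff]
    if len(above) < 20 or len(below) < 20:
        continue  # too few patients to estimate risk at this cutoff
    risk_above = sum(above) / len(above)
    risk_below = max(sum(below) / len(below), 1e-9)
    if risk_above >= 0.03 and risk_above / risk_below >= 3.0:
        print(f"{cutoff} ng/L: mortality {risk_above:.1%}, "
              f"risk ratio {risk_above / risk_below:.1f}")
```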

Journal ArticleDOI
01 Aug 2017-JAMA
TL;DR: Among patients with ischemic stroke in the anterior circulation undergoing thrombectomy, first-line thrombectomy with contact aspiration compared with stent retriever did not result in an increased successful revascularization rate at the end of the procedure.
Abstract: Importance The benefits of endovascular revascularization using the contact aspiration technique vs the stent retriever technique in patients with acute ischemic stroke remain uncertain because of lack of evidence from randomized trials. Objective To compare efficacy and adverse events using the contact aspiration technique vs the standard stent retriever technique as a first-line endovascular treatment for successful revascularization among patients with acute ischemic stroke and large vessel occlusion. Design, Setting, and Participants The Contact Aspiration vs Stent Retriever for Successful Revascularization (ASTER) study was a randomized, open-label, blinded end-point clinical trial conducted in 8 comprehensive stroke centers in France (October 2015-October 2016). Patients who presented with acute ischemic stroke and a large vessel occlusion in the anterior circulation within 6 hours of symptom onset were included. Interventions Patients were randomly assigned to first-line contact aspiration (n = 192) or first-line stent retriever (n = 189) immediately prior to mechanical thrombectomy. Main Outcomes and Measures The primary outcome was the proportion of patients with successful revascularization defined as a modified Thrombolysis in Cerebral Infarction score of 2b or 3 at the end of all endovascular procedures. Secondary outcomes included degree of disability assessed by overall distribution of the modified Rankin Scale (mRS) score at 90 days, change in National Institutes of Health Stroke Scale (NIHSS) score at 24 hours, all-cause mortality at 90 days, and procedure-related serious adverse events. Results Among 381 patients randomized (mean age, 69.9 years; 174 women [45.7%]), 363 (95.3%) completed the trial. Median time from symptom onset to arterial puncture was 227 minutes (interquartile range, 180-280 minutes). For the primary outcome, the proportion of patients with successful revascularization was 85.4% (n = 164) in the contact aspiration group vs 83.1% (n = 157) in the stent retriever group (odds ratio, 1.20 [95% CI, 0.68-2.10]; P = .53; difference, 2.4% [95% CI, −5.4% to 9.7%]). For the clinical efficacy outcomes (change in NIHSS score at 24 hours, mRS score at 90 days) and adverse events, there were no significant differences between groups. Conclusions and Relevance Among patients with ischemic stroke in the anterior circulation undergoing thrombectomy, first-line thrombectomy with contact aspiration compared with stent retriever did not result in an increased successful revascularization rate at the end of the procedure. Trial Registration clinicaltrials.gov Identifier:NCT02523261
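The primary-outcome odds ratio can be recomputed from the counts in the abstract (164/192 vs 157/189 successful revascularizations), here with a Wald CI on the log odds ratio; the trial's own analysis may have adjusted for design factors.

```python
import math

a, b = 164, 192 - 164   # contact aspiration: successes, failures
c, d = 157, 189 - 157   # stent retriever: successes, failures

odds_ratio = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of the log odds ratio
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
# ~1.19 (0.69-2.07), close to the reported 1.20 (0.68-2.10)
```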

Journal ArticleDOI
17 Jan 2017-JAMA
TL;DR: Among patients presenting to the emergency department with suspected infection, the use of qSOFA resulted in greater prognostic accuracy for in-hospital mortality than did either SIRS or severe sepsis, and these findings provide support for the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) criteria in the emergency department setting.
Abstract: Importance An international task force recently redefined the concept of sepsis. This task force recommended the use of the quick Sequential Organ Failure Assessment (qSOFA) score instead of systemic inflammatory response syndrome (SIRS) criteria to identify patients at high risk of mortality. However, these new criteria have not been prospectively validated in some settings, and their added value in the emergency department remains unknown. Objective To prospectively validate qSOFA as a mortality predictor and compare the performance of the new sepsis criteria with that of the previous ones. Design, Setting, and Participants International prospective cohort study, conducted in France, Spain, Belgium, and Switzerland between May and June 2016. In the 30 participating emergency departments, for a 4-week period, consecutive patients who visited the emergency departments with suspected infection were included. All variables from previous and new definitions of sepsis were collected. Patients were followed up until hospital discharge or death. Exposures Measurement of qSOFA, SOFA, and SIRS. Main Outcomes and Measures In-hospital mortality. Results Of 1088 patients screened, 879 were included in the analysis. Median age was 67 years (interquartile range, 47-81 years), 414 (47%) were women, and 379 (43%) had respiratory tract infection. Overall in-hospital mortality was 8%: 3% for patients with a qSOFA score lower than 2 vs 24% for those with a qSOFA score of 2 or higher (absolute difference, 21%; 95% CI, 15%-26%). The qSOFA performed better than both SIRS and severe sepsis in predicting in-hospital mortality, with an area under the receiver operating characteristic curve (AUROC) of 0.80 (95% CI, 0.74-0.85) vs 0.65 (95% CI, 0.59-0.70) for both SIRS and severe sepsis (P < .001 for both). Conclusions and Relevance Among patients presenting to the emergency department with suspected infection, the use of qSOFA resulted in greater prognostic accuracy for in-hospital mortality than did either SIRS or severe sepsis. These findings provide support for the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) criteria in the emergency department setting. Trial Registration clinicaltrials.gov Identifier:NCT02738164
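The qSOFA score itself is directly computable: one point each for a respiratory rate of 22/min or more, systolic blood pressure of 100 mm Hg or less, and altered mentation (eg, Glasgow Coma Scale score below 15), with a score of 2 or more flagging high mortality risk.

```python
def qsofa(resp_rate: int, systolic_bp: int, gcs: int) -> int:
    """quick SOFA per Sepsis-3: one point per criterion, range 0-3."""
    return int(resp_rate >= 22) + int(systolic_bp <= 100) + int(gcs < 15)

score = qsofa(resp_rate=24, systolic_bp=95, gcs=15)
print(f"qSOFA = {score}; {'high' if score >= 2 else 'lower'} risk of in-hospital death")
```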

Journal ArticleDOI
10 Oct 2017-JAMA
TL;DR: Among patients predominantly undergoing abdominal surgery who were at increased postoperative risk, management targeting an individualized systolic blood pressure, compared with standard management, reduced the risk of postoperative organ dysfunction.
Abstract: Importance Perioperative hypotension is associated with an increase in postoperative morbidity and mortality, but the appropriate management strategy remains uncertain. Objective To evaluate whether an individualized blood pressure management strategy tailored to individual patient physiology could reduce postoperative organ dysfunction. Design, Setting, and Participants The Intraoperative Norepinephrine to Control Arterial Pressure (INPRESS) study was a multicenter, randomized, parallel-group clinical trial conducted in 9 French university and nonuniversity hospitals. Adult patients (n = 298) at increased risk of postoperative complications with a preoperative acute kidney injury risk index of class III or higher (indicating moderate to high risk of postoperative kidney injury) undergoing major surgery lasting 2 hours or longer under general anesthesia were enrolled from December 4, 2012, through August 28, 2016 (last follow-up, September 28, 2016). Interventions Individualized management strategy aimed at achieving a systolic blood pressure (SBP) within 10% of the reference value (ie, patient’s resting SBP) or standard management strategy of treating SBP less than 80 mm Hg or lower than 40% from the reference value during and for 4 hours following surgery. Main Outcomes and Measures The primary outcome was a composite of systemic inflammatory response syndrome and dysfunction of at least 1 organ system of the renal, respiratory, cardiovascular, coagulation, and neurologic systems by day 7 after surgery. Secondary outcomes included the individual components of the primary outcome, durations of ICU and hospital stay, adverse events, and all-cause mortality at 30 days after surgery. Results Among 298 patients who were randomized, 292 patients completed the trial (mean [SD] age, 70 [7] years; 44 [15.1%] women) and were included in the modified intention-to-treat analysis. The primary outcome event occurred in 56 of 147 patients (38.1%) assigned to the individualized treatment strategy vs 75 of 145 patients (51.7%) assigned to the standard treatment strategy (relative risk, 0.73; 95% CI, 0.56 to 0.94;P = .02; absolute risk difference, −14%, 95% CI, −25% to −2%). Sixty-eight patients (46.3%) in the individualized treatment group and 92 (63.4%) in the standard treatment group had postoperative organ dysfunction by day 30 (adjusted hazard ratio, 0.66; 95% CI, 0.52 to 0.84;P = .001). There were no significant between-group differences in severe adverse events or 30-day mortality. Conclusions and Relevance Among patients predominantly undergoing abdominal surgery who were at increased postoperative risk, management targeting an individualized systolic blood pressure, compared with standard management, reduced the risk of postoperative organ dysfunction. Trial Registration clinicaltrials.gov Identifier:NCT01536470
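The primary-outcome relative risk follows directly from the counts above (56/147 vs 75/145), here with a Wald CI on the log relative risk; small differences from the published interval reflect the trial's own estimation method.

```python
import math

e1, n1 = 56, 147    # individualized strategy: events, patients
e2, n2 = 75, 145    # standard strategy: events, patients

rr = (e1 / n1) / (e2 / n2)
se = math.sqrt(1/e1 - 1/n1 + 1/e2 - 1/n2)   # SE of the log relative risk
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
# ~0.74 (0.57-0.95), vs the reported 0.73 (0.56-0.94)
```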