
Showing papers in "JAMA in 2016"


Journal ArticleDOI
23 Feb 2016-JAMA
TL;DR: The task force concluded that the term severe sepsis was redundant and that updated definitions and clinical criteria should replace previous definitions, offer greater consistency for epidemiologic studies and clinical trials, and facilitate earlier recognition and more timely management of patients with sepsis or at risk of developing septic shock.
Abstract: Importance Definitions of sepsis and septic shock were last revised in 2001. Considerable advances have since been made into the pathobiology (changes in organ function, morphology, cell biology, biochemistry, immunology, and circulation), management, and epidemiology of sepsis, suggesting the need for reexamination. Objective To evaluate and, as needed, update definitions for sepsis and septic shock. Process A task force (n = 19) with expertise in sepsis pathobiology, clinical trials, and epidemiology was convened by the Society of Critical Care Medicine and the European Society of Intensive Care Medicine. Definitions and clinical criteria were generated through meetings, Delphi processes, analysis of electronic health record databases, and voting, followed by circulation to international professional societies, requesting peer review and endorsement (by 31 societies listed in the Acknowledgment). Key Findings From Evidence Synthesis Limitations of previous definitions included an excessive focus on inflammation, the misleading model that sepsis follows a continuum through severe sepsis to shock, and inadequate specificity and sensitivity of the systemic inflammatory response syndrome (SIRS) criteria. Multiple definitions and terminologies are currently in use for sepsis, septic shock, and organ dysfunction, leading to discrepancies in reported incidence and observed mortality. The task force concluded the term severe sepsis was redundant. Recommendations Sepsis should be defined as life-threatening organ dysfunction caused by a dysregulated host response to infection. For clinical operationalization, organ dysfunction can be represented by an increase in the Sequential [Sepsis-related] Organ Failure Assessment (SOFA) score of 2 points or more, which is associated with an in-hospital mortality greater than 10%. Septic shock should be defined as a subset of sepsis in which particularly profound circulatory, cellular, and metabolic abnormalities are associated with a greater risk of mortality than with sepsis alone. Patients with septic shock can be clinically identified by a vasopressor requirement to maintain a mean arterial pressure of 65 mm Hg or greater and serum lactate level greater than 2 mmol/L (>18 mg/dL) in the absence of hypovolemia. This combination is associated with hospital mortality rates greater than 40%. In out-of-hospital, emergency department, or general hospital ward settings, adult patients with suspected infection can be rapidly identified as being more likely to have poor outcomes typical of sepsis if they have at least 2 of the following clinical criteria that together constitute a new bedside clinical score termed quickSOFA (qSOFA): respiratory rate of 22/min or greater, altered mentation, or systolic blood pressure of 100 mm Hg or less. Conclusions and Relevance These updated definitions and clinical criteria should replace previous definitions, offer greater consistency for epidemiologic studies and clinical trials, and facilitate earlier recognition and more timely management of patients with sepsis or at risk of developing sepsis.

14,699 citations
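
The bedside criteria in the Sepsis-3 abstract above are simple enough to express directly. The sketch below is an illustrative Python check of the qSOFA rule as stated there (respiratory rate of 22/min or greater, altered mentation, systolic blood pressure of 100 mm Hg or less, with 2 or more points flagging higher risk) and of the SOFA-increase rule for organ dysfunction; the function and variable names are my own, and this is not a substitute for the full SOFA scoring tables.

```python
def qsofa_score(respiratory_rate: float, altered_mentation: bool, systolic_bp: float) -> int:
    """quickSOFA (qSOFA) as described in the Sepsis-3 consensus abstract."""
    score = 0
    score += respiratory_rate >= 22     # respiratory rate of 22/min or greater
    score += altered_mentation          # altered mentation
    score += systolic_bp <= 100         # systolic blood pressure of 100 mm Hg or less
    return score

def likely_poor_outcome(respiratory_rate, altered_mentation, systolic_bp) -> bool:
    """At least 2 qSOFA points flags patients with suspected infection as higher risk."""
    return qsofa_score(respiratory_rate, altered_mentation, systolic_bp) >= 2

def organ_dysfunction(baseline_sofa: int, current_sofa: int) -> bool:
    """Sepsis-3 operationalizes organ dysfunction as a SOFA increase of 2 points or more."""
    return current_sofa - baseline_sofa >= 2

# Example: RR 24/min, confused, SBP 95 mm Hg -> qSOFA 3, flagged as higher risk
print(likely_poor_outcome(24, True, 95))   # True
```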


Journal ArticleDOI
13 Dec 2016-JAMA
TL;DR: An algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy and diabetic macular edema in retinal fundus photographs from adults with diabetes.
Abstract: Importance Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. Objective To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. Design and Setting A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Exposure Deep learning–trained algorithm. Main Outcomes and Measures The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. Results The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%. Conclusions and Relevance In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.

4,810 citations
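
The abstract above reports the algorithm's performance as sensitivity and specificity at two operating points chosen on the development set. As a reminder of what those numbers mean, here is a minimal, generic sketch (not the authors' code) that thresholds predicted probabilities against a reference standard and computes both metrics; the threshold value and variable names are illustrative.

```python
def sensitivity_specificity(probs, labels, threshold):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a given operating point."""
    tp = fp = tn = fn = 0
    for p, y in zip(probs, labels):
        pred = p >= threshold            # 1 = referable diabetic retinopathy predicted
        if pred and y:
            tp += 1
        elif pred and not y:
            fp += 1
        elif not pred and y:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# A high-specificity operating point corresponds to a higher threshold,
# a high-sensitivity operating point to a lower one.
sens, spec = sensitivity_specificity([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0], threshold=0.5)
print(sens, spec)   # 1.0 1.0 on this toy example
```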


Journal ArticleDOI
19 Apr 2016-JAMA
TL;DR: This guideline is intended to improve communication about benefits and risks of opioids for chronic pain, improve safety and effectiveness of pain treatment, and reduce risks associated with long-term opioid therapy.
Abstract: Importance Primary care clinicians find managing chronic pain challenging. Evidence of long-term efficacy of opioids for chronic pain is limited. Opioid use is associated with serious risks, including opioid use disorder and overdose. Objective To provide recommendations about opioid prescribing for primary care clinicians treating adult patients with chronic pain outside of active cancer treatment, palliative care, and end-of-life care. Process The Centers for Disease Control and Prevention (CDC) updated a 2014 systematic review on effectiveness and risks of opioids and conducted a supplemental review on benefits and harms, values and preferences, and costs. CDC used the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework to assess evidence type and determine the recommendation category. Evidence Synthesis Evidence consisted of observational studies or randomized clinical trials with notable limitations, characterized as low quality using GRADE methodology. Meta-analysis was not attempted due to the limited number of studies, variability in study designs and clinical heterogeneity, and methodological shortcomings of studies. No study evaluated long-term (≥1 year) benefit of opioids for chronic pain. Opioids were associated with increased risks, including opioid use disorder, overdose, and death, with dose-dependent effects. Recommendations There are 12 recommendations. Of primary importance, nonopioid therapy is preferred for treatment of chronic pain. Opioids should be used only when benefits for pain and function are expected to outweigh risks. Before starting opioids, clinicians should establish treatment goals with patients and consider how opioids will be discontinued if benefits do not outweigh risks. When opioids are used, clinicians should prescribe the lowest effective dosage, carefully reassess benefits and risks when considering increasing dosage to 50 morphine milligram equivalents or more per day, and avoid concurrent opioids and benzodiazepines whenever possible. Clinicians should evaluate benefits and harms of continued opioid therapy with patients every 3 months or more frequently and review prescription drug monitoring program data, when available, for high-risk combinations or dosages. For patients with opioid use disorder, clinicians should offer or arrange evidence-based treatment, such as medication-assisted treatment with buprenorphine or methadone. Conclusions and Relevance The guideline is intended to improve communication about benefits and risks of opioids for chronic pain, improve safety and effectiveness of pain treatment, and reduce risks associated with long-term opioid therapy.

3,935 citations
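
The guideline's dosing recommendations hinge on total daily morphine milligram equivalents (MME), with 50 MME/day or more as the reassessment threshold mentioned above. The sketch below shows only the arithmetic; the conversion factors in the dictionary are illustrative placeholders and should be taken from the CDC's published MME conversion table rather than from this example (methadone and fentanyl, in particular, need special handling).

```python
# Illustrative conversion factors only -- verify against the CDC's published MME table.
MME_FACTOR = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "codeine": 0.15,
}

def total_daily_mme(regimen):
    """regimen: list of (drug, mg_per_dose, doses_per_day) tuples."""
    return sum(mg * n * MME_FACTOR[drug] for drug, mg, n in regimen)

regimen = [("oxycodone", 10, 3), ("hydrocodone", 5, 2)]   # hypothetical regimen
mme = total_daily_mme(regimen)                            # 55.0
needs_careful_reassessment = mme >= 50                    # guideline's 50 MME/day threshold
print(mme, needs_careful_reassessment)
```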


Journal ArticleDOI
23 Feb 2016-JAMA
TL;DR: Clinician recognition of ARDS was associated with higher PEEP, greater use of neuromuscular blockade, and prone positioning, which indicates the potential for improvement in the management of patients with ARDS.
Abstract: Importance Limited information exists about the epidemiology, recognition, management, and outcomes of patients with the acute respiratory distress syndrome (ARDS). Objectives To evaluate intensive ...

3,259 citations


Journal ArticleDOI
23 Feb 2016-JAMA
TL;DR: To evaluate the validity of clinical criteria to identify patients with suspected infection who are at risk of sepsis, a new model was derived using multivariable logistic regression in a split sample.
Abstract: RESULTS In the primary cohort, 148 907 encounters had suspected infection (n = 74 453 derivation; n = 74 454 validation), of whom 6347 (4%) died. Among ICU encounters in the validation cohort (n = 7932 with suspected infection, of whom 1289 [16%] died), the predictive validity for in-hospital mortality was lower for SIRS (AUROC = 0.64; 95% CI, 0.62-0.66) and qSOFA (AUROC = 0.66; 95% CI, 0.64-0.68) vs SOFA (AUROC = 0.74; 95% CI, 0.73-0.76; P < .001 for both) or LODS (AUROC = 0.75; 95% CI, 0.73-0.76; P < .001 for both). Among non-ICU encounters in the validation cohort (n = 66 522 with suspected infection, of whom 1886 [3%] died), qSOFA had predictive validity (AUROC = 0.81; 95% CI, 0.80-0.82) that was greater than SOFA (AUROC = 0.79; 95% CI, 0.78-0.80; P < .001) and SIRS (AUROC = 0.76; 95% CI, 0.75-0.77; P < .001). Relative to qSOFA scores lower than 2, encounters with qSOFA scores of 2 or higher had a 3- to 14-fold increase in hospital mortality across baseline risk deciles. Findings were similar in external data sets and for the secondary outcome.

2,639 citations
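
The predictive-validity comparisons above are reported as AUROC values. For readers unfamiliar with the measure, the sketch below computes AUROC for a discrete score such as qSOFA or SOFA using its probabilistic interpretation (the chance that a randomly chosen decedent has a higher score than a randomly chosen survivor, counting ties as one half); it is an O(n²) illustration, not the method used in the paper.

```python
def auroc(scores, died):
    """AUROC via pairwise comparison of scores in decedents vs survivors."""
    pos = [s for s, d in zip(scores, died) if d]       # scores among those who died
    neg = [s for s, d in zip(scores, died) if not d]   # scores among survivors
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: higher qSOFA scores concentrated among deaths gives AUROC > 0.5
print(auroc([3, 2, 1, 0, 2, 1], [1, 1, 0, 0, 0, 0]))   # 0.9375
```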


Journal ArticleDOI
07 Jun 2016-JAMA
TL;DR: Analyses of changes over the decade from 2005 through 2014, adjusted for age, race/Hispanic origin, smoking status, and education, showed significant increasing linear trends among women for overall obesity and for class 3 obesity but not among men.
Abstract: Importance Between 1980 and 2000, the prevalence of obesity increased significantly among adult men and women in the United States; further significant increases were observed through 2003-2004 for men but not women. Subsequent comparisons of data from 2003-2004 with data through 2011-2012 showed no significant increases for men or women. Objective To examine obesity prevalence for 2013-2014 and trends over the decade from 2005 through 2014 adjusting for sex, age, race/Hispanic origin, smoking status, and education. Design, Setting, and Participants Analysis of data obtained from the National Health and Nutrition Examination Survey (NHANES), a cross-sectional, nationally representative health examination survey of the US civilian noninstitutionalized population that includes measured weight and height. Exposures Survey period. Main Outcomes and Measures Prevalence of obesity (body mass index ≥30) and class 3 obesity (body mass index ≥40). Results This report is based on data from 2638 adult men (mean age, 46.8 years) and 2817 women (mean age, 48.4 years) from the most recent 2 years (2013-2014) of NHANES and data from 21 013 participants in previous NHANES surveys from 2005 through 2012. For the years 2013-2014, the overall age-adjusted prevalence of obesity was 37.7% (95% CI, 35.8%-39.7%); among men, it was 35.0% (95% CI, 32.8%-37.3%); and among women, it was 40.4% (95% CI, 37.6%-43.3%). The corresponding prevalence of class 3 obesity overall was 7.7% (95% CI, 6.2%-9.3%); among men, it was 5.5% (95% CI, 4.0%-7.2%); and among women, it was 9.9% (95% CI, 7.5%-12.3%). Analyses of changes over the decade from 2005 through 2014, adjusted for age, race/Hispanic origin, smoking status, and education, showed significant increasing linear trends among women for overall obesity ( P = .004) and for class 3 obesity ( P = .01) but not among men ( P = .30 for overall obesity; P = .14 for class 3 obesity). Conclusions and Relevance In this nationally representative survey of adults in the United States, the age-adjusted prevalence of obesity in 2013-2014 was 35.0% among men and 40.4% among women. The corresponding values for class 3 obesity were 5.5% for men and 9.9% for women. For women, the prevalence of overall obesity and of class 3 obesity showed significant linear trends for increase between 2005 and 2014; there were no significant trends for men. Other studies are needed to determine the reasons for these trends.

2,392 citations
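
The outcome definitions above are threshold rules on body mass index (weight in kilograms divided by the square of height in meters). Here is a minimal sketch of the two cutoffs used in this report (obesity, BMI ≥ 30; class 3 obesity, BMI ≥ 40); the function names are illustrative.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def adult_obesity_class(weight_kg: float, height_m: float) -> str:
    """Cutoffs used in the NHANES report above: obesity >= 30, class 3 obesity >= 40."""
    b = bmi(weight_kg, height_m)
    if b >= 40:
        return "class 3 obesity"
    if b >= 30:
        return "obesity"
    return "not obese"

print(adult_obesity_class(95, 1.70))   # BMI ~32.9 -> "obesity"
```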


Journal ArticleDOI
21 Jun 2016-JAMA
TL;DR: It is concluded with high certainty that screening for colorectal cancer in average-risk, asymptomatic adults aged 50 to 75 years is of substantial net benefit.
Abstract: Importance Colorectal cancer is the second leading cause of cancer death in the United States. In 2016, an estimated 134 000 persons will be diagnosed with the disease, and about 49 000 will die from it. Colorectal cancer is most frequently diagnosed among adults aged 65 to 74 years; the median age at death from colorectal cancer is 73 years. Objective To update the 2008 US Preventive Services Task Force (USPSTF) recommendation on screening for colorectal cancer. Evidence Review The USPSTF reviewed the evidence on the effectiveness of screening with colonoscopy, flexible sigmoidoscopy, computed tomography colonography, the guaiac-based fecal occult blood test, the fecal immunochemical test, the multitargeted stool DNA test, and the methylated SEPT9 DNA test in reducing the incidence of and mortality from colorectal cancer or all-cause mortality; the harms of these screening tests; and the test performance characteristics of these tests for detecting adenomatous polyps, advanced adenomas based on size, or both, as well as colorectal cancer. The USPSTF also commissioned a comparative modeling study to provide information on optimal starting and stopping ages and screening intervals across the different available screening methods. Findings The USPSTF concludes with high certainty that screening for colorectal cancer in average-risk, asymptomatic adults aged 50 to 75 years is of substantial net benefit. Multiple screening strategies are available to choose from, with different levels of evidence to support their effectiveness, as well as unique advantages and limitations, although there are no empirical data to demonstrate that any of the reviewed strategies provide a greater net benefit. Screening for colorectal cancer is a substantially underused preventive health strategy in the United States. Conclusions and Recommendations The USPSTF recommends screening for colorectal cancer starting at age 50 years and continuing until age 75 years (A recommendation). The decision to screen for colorectal cancer in adults aged 76 to 85 years should be an individual one, taking into account the patient’s overall health and prior screening history (C recommendation).

2,100 citations


Journal ArticleDOI
13 Sep 2016-JAMA
TL;DR: The Second Panel on Cost-Effectiveness in Health and Medicine reviewed the current status of the field of cost-effectiveness analysis and developed a new set of recommendations, including the recommendation to perform analyses from 2 reference case perspectives and to provide an impact inventory to clarify included consequences.
Abstract: Importance Since publication of the report by the Panel on Cost-Effectiveness in Health and Medicine in 1996, researchers have advanced the methods of cost-effectiveness analysis, and policy makers have experimented with its application. The need to deliver health care efficiently and the importance of using analytic techniques to understand the clinical and economic consequences of strategies to improve health have increased in recent years. Objective To review the state of the field and provide recommendations to improve the quality of cost-effectiveness analyses. The intended audiences include researchers, government policy makers, public health officials, health care administrators, payers, businesses, clinicians, patients, and consumers. Design In 2012, the Second Panel on Cost-Effectiveness in Health and Medicine was formed and included 2 co-chairs, 13 members, and 3 additional members of a leadership group. These members were selected on the basis of their experience in the field to provide broad expertise in the design, conduct, and use of cost-effectiveness analyses. Over the next 3.5 years, the panel developed recommendations by consensus. These recommendations were then reviewed by invited external reviewers and through a public posting process. Findings The concept of a “reference case” and a set of standard methodological practices that all cost-effectiveness analyses should follow to improve quality and comparability are recommended. All cost-effectiveness analyses should report 2 reference case analyses: one based on a health care sector perspective and another based on a societal perspective. The use of an “impact inventory,” which is a structured table that contains consequences (both inside and outside the formal health care sector), intended to clarify the scope and boundaries of the 2 reference case analyses is also recommended. This special communication reviews these recommendations and others concerning the estimation of the consequences of interventions, the valuation of health outcomes, and the reporting of cost-effectiveness analyses. Conclusions and Relevance The Second Panel reviewed the current status of the field of cost-effectiveness analysis and developed a new set of recommendations. Major changes include the recommendation to perform analyses from 2 reference case perspectives and to provide an impact inventory to clarify included consequences.

1,995 citations
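
The panel's central recommendation above is to report two reference case analyses, one from a health care sector perspective and one from a societal perspective, with an impact inventory separating consequences inside and outside the formal health care sector. The sketch below illustrates that idea with a generic incremental cost-effectiveness ratio (ICER = incremental cost divided by incremental QALYs); the cost categories and numbers are invented for illustration and are not from the report.

```python
from dataclasses import dataclass

@dataclass
class Strategy:
    qalys: float
    health_sector_costs: float   # formal health care sector (inside the impact inventory)
    non_health_costs: float      # e.g., patient time, productivity (outside the sector)

def icer(new: Strategy, old: Strategy, societal: bool) -> float:
    """Incremental cost per QALY gained under one of the two reference case perspectives."""
    d_cost = new.health_sector_costs - old.health_sector_costs
    if societal:
        d_cost += new.non_health_costs - old.non_health_costs
    return d_cost / (new.qalys - old.qalys)

usual = Strategy(qalys=10.0, health_sector_costs=20_000, non_health_costs=5_000)
new_tx = Strategy(qalys=10.5, health_sector_costs=30_000, non_health_costs=4_000)
print(icer(new_tx, usual, societal=False))  # health care sector perspective: 20000 per QALY
print(icer(new_tx, usual, societal=True))   # societal perspective: 18000 per QALY
```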


Journal ArticleDOI
07 Jun 2016-JAMA
TL;DR: In this nationally representative study of US children and adolescents aged 2 to 19 years, the prevalence of obesity in 2011-2014 was 17.0% and the prevalence of extreme obesity was 5.8%.
Abstract: Importance Previous analyses of obesity trends among children and adolescents showed an increase between 1988-1994 and 1999-2000, but no change between 2003-2004 and 2011-2012, except for a significant decline among children aged 2 to 5 years. Objectives To provide estimates of obesity and extreme obesity prevalence for children and adolescents for 2011-2014 and investigate trends by age between 1988-1994 and 2013-2014. Design, Setting, and Participants Children and adolescents aged 2 to 19 years with measured weight and height in the 1988-1994 through 2013-2014 National Health and Nutrition Examination Surveys. Exposures Survey period. Main Outcomes and Measures Obesity was defined as a body mass index (BMI) at or above the sex-specific 95th percentile on the US Centers for Disease Control and Prevention (CDC) BMI-for-age growth charts. Extreme obesity was defined as a BMI at or above 120% of the sex-specific 95th percentile on the CDC BMI-for-age growth charts. Detailed estimates are presented for 2011-2014. The analyses of linear and quadratic trends in prevalence were conducted using 9 survey periods. Trend analyses between 2005-2006 and 2013-2014 also were conducted. Results Measurements from 40 780 children and adolescents (mean age, 11.0 years; 48.8% female) between 1988-1994 and 2013-2014 were analyzed. Among children and adolescents aged 2 to 19 years, the prevalence of obesity in 2011-2014 was 17.0% (95% CI, 15.5%-18.6%) and extreme obesity was 5.8% (95% CI, 4.9%-6.8%). Among children aged 2 to 5 years, obesity increased from 7.2% (95% CI, 5.8%-8.8%) in 1988-1994 to 13.9% (95% CI, 10.7%-17.7%) ( P P = .03) in 2013-2014. Among children aged 6 to 11 years, obesity increased from 11.3% (95% CI, 9.4%-13.4%) in 1988-1994 to 19.6% (95% CI, 17.1%-22.4%) ( P P = .44). Obesity increased among adolescents aged 12 to 19 years between 1988-1994 (10.5% [95% CI, 8.8%-12.5%]) and 2013-2014 (20.6% [95% CI, 16.2%-25.6%]; P P = .02) and adolescents aged 12 to 19 years (2.6% [95% CI, 1.7%-3.9%] in 1988-1994 to 9.1% [95% CI, 7.0%-11.5%] in 2013-2014; P P value range, .09-.87). Conclusions and Relevance In this nationally representative study of US children and adolescents aged 2 to 19 years, the prevalence of obesity in 2011-2014 was 17.0% and extreme obesity was 5.8%. Between 1988-1994 and 2013-2014, the prevalence of obesity increased until 2003-2004 and then decreased in children aged 2 to 5 years, increased until 2007-2008 and then leveled off in children aged 6 to 11 years, and increased among adolescents aged 12 to 19 years.

1,934 citations
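
The definitions above are relative to the CDC sex-specific BMI-for-age growth charts: obesity is a BMI at or above the 95th percentile, and extreme obesity a BMI at or above 120% of that percentile. The sketch below shows only the classification logic; `cdc_bmi_p95` is a hypothetical lookup that would have to be backed by the actual CDC growth chart data.

```python
def cdc_bmi_p95(age_years: float, sex: str) -> float:
    """Placeholder: return the sex-specific 95th-percentile BMI-for-age from the
    CDC growth charts. Real values must come from the published chart data."""
    raise NotImplementedError

def classify_child_bmi(bmi: float, age_years: float, sex: str) -> str:
    """Obesity and extreme obesity as defined in the abstract above."""
    p95 = cdc_bmi_p95(age_years, sex)
    if bmi >= 1.2 * p95:        # at or above 120% of the 95th percentile
        return "extreme obesity"
    if bmi >= p95:              # at or above the 95th percentile
        return "obesity"
    return "not obese"
```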


Journal ArticleDOI
26 Apr 2016-JAMA
TL;DR: In the United States between 2001 and 2014, higher income was associated with greater longevity, and differences in life expectancy across income groups increased over time; however, the association between life expectancy and income varied substantially across areas, with differences in longevity across income groups decreasing in some areas and increasing in others.
Abstract: Importance The relationship between income and life expectancy is well established but remains poorly understood. Objectives To measure the level, time trend, and geographic variability in the association between income and life expectancy and to identify factors related to small area variation. Design and Setting Income data for the US population were obtained from 1.4 billion deidentified tax records between 1999 and 2014. Mortality data were obtained from Social Security Administration death records. These data were used to estimate race- and ethnicity-adjusted life expectancy at 40 years of age by household income percentile, sex, and geographic area, and to evaluate factors associated with differences in life expectancy. Exposure Pretax household earnings as a measure of income. Main Outcomes and Measures Relationship between income and life expectancy; trends in life expectancy by income group; geographic variation in life expectancy levels and trends by income group; and factors associated with differences in life expectancy across areas. Results The sample consisted of 1 408 287 218 person-year observations for individuals aged 40 to 76 years (mean age, 53.0 years; median household earnings among working individuals, $61 175 per year). There were 4 114 380 deaths among men (mortality rate, 596.3 per 100 000) and 2 694 808 deaths among women (mortality rate, 375.1 per 100 000). The analysis yielded 4 results. First, higher income was associated with greater longevity throughout the income distribution. The gap in life expectancy between the richest 1% and poorest 1% of individuals was 14.6 years (95% CI, 14.4 to 14.8 years) for men and 10.1 years (95% CI, 9.9 to 10.3 years) for women. Second, inequality in life expectancy increased over time. Between 2001 and 2014, life expectancy increased by 2.34 years for men and 2.91 years for women in the top 5% of the income distribution, but by only 0.32 years for men and 0.04 years for women in the bottom 5% ( P r = −0.69, P r = 0.72, P r = 0.42, P r = 0.57, P Conclusions and Relevance In the United States between 2001 and 2014, higher income was associated with greater longevity, and differences in life expectancy across income groups increased over time. However, the association between life expectancy and income varied substantially across areas; differences in longevity across income groups decreased in some areas and increased in others. The differences in life expectancy were correlated with health behaviors and local area characteristics.

1,663 citations


Journal ArticleDOI
27 Sep 2016-JAMA
TL;DR: The period in which endovascular thrombectomy is associated with benefit, and the extent to which treatment delay is related to functional outcomes, mortality, and symptomatic intracranial hemorrhage, are characterized.
Abstract: Importance Endovascular thrombectomy with second-generation devices is beneficial for patients with ischemic stroke due to intracranial large-vessel occlusions. Delineation of the association of treatment time with outcomes would help to guide implementation. Objective To characterize the period in which endovascular thrombectomy is associated with benefit, and the extent to which treatment delay is related to functional outcomes, mortality, and symptomatic intracranial hemorrhage. Design, Setting, and Patients Demographic, clinical, and brain imaging data as well as functional and radiologic outcomes were pooled from randomized phase 3 trials involving stent retrievers or other second-generation devices in a peer-reviewed publication (by July 1, 2016). The identified 5 trials enrolled patients at 89 international sites. Exposures Endovascular thrombectomy plus medical therapy vs medical therapy alone; time to treatment. Main Outcomes and Measures The primary outcome was degree of disability (mRS range, 0-6; lower scores indicating less disability) at 3 months, analyzed with the common odds ratio (cOR) to detect ordinal shift in the distribution of disability over the range of the mRS; secondary outcomes included functional independence at 3 months, mortality by 3 months, and symptomatic hemorrhagic transformation. Results Among all 1287 patients (endovascular thrombectomy + medical therapy [n = 634]; medical therapy alone [n = 653]) enrolled in the 5 trials (mean age, 66.5 years [SD, 13.1]; women, 47.0%), time from symptom onset to randomization was 196 minutes (IQR, 142 to 267). Among the endovascular group, symptom onset to arterial puncture was 238 minutes (IQR, 180 to 302) and symptom onset to reperfusion was 286 minutes (IQR, 215 to 363). At 90 days, the mean mRS score was 2.9 (95% CI, 2.7 to 3.1) in the endovascular group and 3.6 (95% CI, 3.5 to 3.8) in the medical therapy group. The odds of better disability outcomes at 90 days (mRS scale distribution) with the endovascular group declined with longer time from symptom onset to arterial puncture: cOR at 3 hours, 2.79 (95% CI, 1.96 to 3.98), absolute risk difference (ARD) for lower disability scores, 39.2%; cOR at 6 hours, 1.98 (95% CI, 1.30 to 3.00), ARD, 30.2%; cOR at 8 hours,1.57 (95% CI, 0.86 to 2.88), ARD, 15.7%; retaining statistical significance through 7 hours and 18 minutes. Among 390 patients who achieved substantial reperfusion with endovascular thrombectomy, each 1-hour delay to reperfusion was associated with a less favorable degree of disability (cOR, 0.84 [95% CI, 0.76 to 0.93]; ARD, −6.7%) and less functional independence (OR, 0.81 [95% CI, 0.71 to 0.92], ARD, −5.2% [95% CI, −8.3% to −2.1%]), but no change in mortality (OR, 1.12 [95% CI, 0.93 to 1.34]; ARD, 1.5% [95% CI, −0.9% to 4.2%]). Conclusions and Relevance In this individual patient data meta-analysis of patients with large-vessel ischemic stroke, earlier treatment with endovascular thrombectomy + medical therapy compared with medical therapy alone was associated with lower degrees of disability at 3 months. Benefit became nonsignificant after 7.3 hours.

Journal ArticleDOI
23 Feb 2016-JAMA
TL;DR: A consensus process using results from a systematic review, surveys, and cohort studies found that adult patients with septic shock can be identified using the clinical criteria of hypotension requiring vasopressor therapy to maintain a mean BP of 65 mm Hg or greater and a serum lactate level greater than 2 mmol/L after adequate fluid resuscitation.
Abstract: Importance Septic shock currently refers to a state of acute circulatory failure associated with infection. Emerging biological insights and reported variation in epidemiology challenge the validity of this definition. Objective To develop a new definition and clinical criteria for identifying septic shock in adults. Design, Setting, and Participants The Society of Critical Care Medicine and the European Society of Intensive Care Medicine convened a task force (19 participants) to revise current sepsis/septic shock definitions. Three sets of studies were conducted: (1) a systematic review and meta-analysis of observational studies in adults published between January 1, 1992, and December 25, 2015, to determine clinical criteria currently reported to identify septic shock and inform the Delphi process; (2) a Delphi study among the task force comprising 3 surveys and discussions of results from the systematic review, surveys, and cohort studies to achieve consensus on a new septic shock definition and clinical criteria; and (3) cohort studies to test variables identified by the Delphi process using Surviving Sepsis Campaign (SSC) (2005-2010; n = 28 150), University of Pittsburgh Medical Center (UPMC) (2010-2012; n = 1 309 025), and Kaiser Permanente Northern California (KPNC) (2009-2013; n = 1 847 165) electronic health record (EHR) data sets. Main Outcomes and Measures Evidence for and agreement on septic shock definitions and criteria. Results The systematic review identified 44 studies reporting septic shock outcomes (total of 166 479 patients) from a total of 92 sepsis epidemiology studies reporting different cutoffs and combinations for blood pressure (BP), fluid resuscitation, vasopressors, serum lactate level, and base deficit to identify septic shock. The septic shock–associated crude mortality was 46.5% (95% CI, 42.7%-50.3%), with significant between-study statistical heterogeneity ( I 2 = 99.5%; τ 2 = 182.5; P Conclusions and Relevance Based on a consensus process using results from a systematic review, surveys, and cohort studies, septic shock is defined as a subset of sepsis in which underlying circulatory, cellular, and metabolic abnormalities are associated with a greater risk of mortality than sepsis alone. Adult patients with septic shock can be identified using the clinical criteria of hypotension requiring vasopressor therapy to maintain mean BP 65 mm Hg or greater and having a serum lactate level greater than 2 mmol/L after adequate fluid resuscitation.
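
The clinical criteria in the conclusion above reduce to two bedside conditions once adequate fluid resuscitation is documented. A minimal sketch of that check, with argument names of my own choosing:

```python
def meets_septic_shock_criteria(on_vasopressors_to_keep_map_at_65: bool,
                                serum_lactate_mmol_per_l: float,
                                adequately_fluid_resuscitated: bool) -> bool:
    """Sepsis-3 clinical criteria for septic shock: a vasopressor requirement to maintain
    mean arterial pressure of 65 mm Hg or greater AND serum lactate greater than 2 mmol/L
    despite adequate fluid resuscitation."""
    return (adequately_fluid_resuscitated
            and on_vasopressors_to_keep_map_at_65
            and serum_lactate_mmol_per_l > 2.0)

print(meets_septic_shock_criteria(True, 3.1, True))   # True
```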

Journal ArticleDOI
06 Dec 2016-JAMA
TL;DR: A systematic review of studies published before September 17, 2016, found a high prevalence of depression, depressive symptoms, and suicidal ideation among medical students and concluded that strategies for preventing and treating these disorders in this population are needed.
Abstract: Importance Medical students are at high risk for depression and suicidal ideation. However, the prevalence estimates of these disorders vary between studies. Objective To estimate the prevalence of depression, depressive symptoms, and suicidal ideation in medical students. Data Sources and Study Selection Systematic search of EMBASE, ERIC, MEDLINE, psycARTICLES, and psycINFO without language restriction for studies on the prevalence of depression, depressive symptoms, or suicidal ideation in medical students published before September 17, 2016. Studies that were published in the peer-reviewed literature and used validated assessment methods were included. Data Extraction and Synthesis Information on study characteristics; prevalence of depression or depressive symptoms and suicidal ideation; and whether students who screened positive for depression sought treatment was extracted independently by 3 investigators. Estimates were pooled using random-effects meta-analysis. Differences by study-level characteristics were estimated using stratified meta-analysis and meta-regression. Main Outcomes and Measures Point or period prevalence of depression, depressive symptoms, or suicidal ideation as assessed by validated questionnaire or structured interview. Results Depression or depressive symptom prevalence data were extracted from 167 cross-sectional studies (n = 116 628) and 16 longitudinal studies (n = 5728) from 43 countries. All but 1 study used self-report instruments. The overall pooled crude prevalence of depression or depressive symptoms was 27.2% (37 933/122 356 individuals; 95% CI, 24.7% to 29.9%, I2 = 98.9%). Summary prevalence estimates ranged across assessment modalities from 9.3% to 55.9%. Depressive symptom prevalence remained relatively constant over the period studied (baseline survey year range of 1982-2015; slope, 0.2% increase per year [95% CI, −0.2% to 0.7%]). In the 9 longitudinal studies that assessed depressive symptoms before and during medical school (n = 2432), the median absolute increase in symptoms was 13.5% (range, 0.6% to 35.3%). Prevalence estimates did not significantly differ between studies of only preclinical students and studies of only clinical students (23.7% [95% CI, 19.5% to 28.5%] vs 22.4% [95% CI, 17.6% to 28.2%]; P = .72). The percentage of medical students screening positive for depression who sought psychiatric treatment was 15.7% (110/954 individuals; 95% CI, 10.2% to 23.4%, I2 = 70.1%). Suicidal ideation prevalence data were extracted from 24 cross-sectional studies (n = 21 002) from 15 countries. All but 1 study used self-report instruments. The overall pooled crude prevalence of suicidal ideation was 11.1% (2043/21 002 individuals; 95% CI, 9.0% to 13.7%, I2 = 95.8%). Summary prevalence estimates ranged across assessment modalities from 7.4% to 24.2%. Conclusions and Relevance In this systematic review, the summary estimate of the prevalence of depression or depressive symptoms among medical students was 27.2% and that of suicidal ideation was 11.1%. Further research is needed to identify strategies for preventing and treating these disorders in this population.
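
The pooled prevalence figures above come from random-effects meta-analysis of study-level proportions. The sketch below is a generic DerSimonian-Laird pooling of prevalences on the logit scale, included only to make the method concrete; it is not the authors' code, and a real analysis would also handle zero cells, stratification, and meta-regression.

```python
import math

def pooled_prevalence(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale.
    Returns (pooled prevalence, 95% CI lower bound, 95% CI upper bound)."""
    y, v = [], []                            # study-level logits and their variances
    for e, n in zip(events, totals):
        p = e / n
        y.append(math.log(p / (1 - p)))
        v.append(1 / e + 1 / (n - e))
    w = [1 / vi for vi in v]                 # fixed-effect (inverse-variance) weights
    mean_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - mean_fe) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance estimate
    w_re = [1 / (vi + tau2) for vi in v]     # random-effects weights
    mean_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    expit = lambda x: 1 / (1 + math.exp(-x))
    return expit(mean_re), expit(mean_re - 1.96 * se), expit(mean_re + 1.96 * se)

# Toy example with three hypothetical studies (events, sample sizes)
print(pooled_prevalence([30, 50, 120], [100, 250, 400]))
```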

Journal ArticleDOI
03 May 2016-JAMA
TL;DR: In the United States in 2010-2011, there was an estimated annual antibiotic prescription rate per 1000 population of 506, but only an estimated 353 antibiotic prescriptions were likely appropriate, supporting the need for establishing a goal for outpatient antibiotic stewardship.
Abstract: Importance The National Action Plan for Combating Antibiotic-Resistant Bacteria set a goal of reducing inappropriate outpatient antibiotic use by 50% by 2020, but the extent of inappropriate outpatient antibiotic use is unknown. Objective To estimate the rates of outpatient oral antibiotic prescribing by age and diagnosis, and the estimated portions of antibiotic use that may be inappropriate in adults and children in the United States. Design, Setting, and Participants Using the 2010-2011 National Ambulatory Medical Care Survey and National Hospital Ambulatory Medical Care Survey, annual numbers and population-adjusted rates with 95% confidence intervals of ambulatory visits with oral antibiotic prescriptions by age, region, and diagnosis in the United States were estimated. Exposures Ambulatory care visits. Main Outcomes and Measures Based on national guidelines and regional variation in prescribing, diagnosis-specific prevalence and rates of total and appropriate antibiotic prescriptions were determined. These rates were combined to calculate an estimate of the appropriate annual rate of antibiotic prescriptions per 1000 population. Results Of the 184 032 sampled visits, 12.6% of visits (95% CI, 12.0%-13.3%) resulted in antibiotic prescriptions. Sinusitis was the single diagnosis associated with the most antibiotic prescriptions per 1000 population (56 antibiotic prescriptions [95% CI, 48-64]), followed by suppurative otitis media (47 antibiotic prescriptions [95% CI, 41-54]), and pharyngitis (43 antibiotic prescriptions [95% CI, 38-49]). Collectively, acute respiratory conditions per 1000 population led to 221 antibiotic prescriptions (95% CI, 198-245) annually, but only 111 antibiotic prescriptions were estimated to be appropriate for these conditions. Per 1000 population, among all conditions and ages combined in 2010-2011, an estimated 506 antibiotic prescriptions (95% CI, 458-554) were written annually, and, of these, 353 antibiotic prescriptions were estimated to be appropriate antibiotic prescriptions. Conclusions and Relevance In the United States in 2010-2011, there was an estimated annual antibiotic prescription rate per 1000 population of 506, but only an estimated 353 antibiotic prescriptions were likely appropriate, supporting the need for establishing a goal for outpatient antibiotic stewardship.

Journal ArticleDOI
26 Jul 2016-JAMA
TL;DR: Among patients with 1 to 3 brain metastases, the use of SRS alone, compared with SRS combined with WBRT, resulted in less cognitive deterioration at 3 months; in the absence of a difference in overall survival, these findings suggest that SRS alone may be a preferred strategy for patients with brain metastases amenable to radiosurgery.
Abstract: Importance Whole brain radiotherapy (WBRT) significantly improves tumor control in the brain after stereotactic radiosurgery (SRS), yet because of its association with cognitive decline, its role in the treatment of patients with brain metastases remains controversial. Objective To determine whether there is less cognitive deterioration at 3 months after SRS alone vs SRS plus WBRT. Design, Setting, and Participants At 34 institutions in North America, patients with 1 to 3 brain metastases were randomized to receive SRS or SRS plus WBRT between February 2002 and December 2013. Interventions The WBRT dose schedule was 30 Gy in 12 fractions; the SRS dose was 18 to 22 Gy in the SRS plus WBRT group and 20 to 24 Gy for SRS alone. Main Outcomes and Measures The primary end point was cognitive deterioration (decline >1 SD from baseline on at least 1 cognitive test at 3 months) in participants who completed the baseline and 3-month assessments. Secondary end points included time to intracranial failure, quality of life, functional independence, long-term cognitive status, and overall survival. Results There were 213 randomized participants (SRS alone, n = 111; SRS plus WBRT, n = 102) with a mean age of 60.6 years (SD, 10.5 years); 103 (48%) were women. There was less cognitive deterioration at 3 months after SRS alone (40/63 patients [63.5%]) than when combined with WBRT (44/48 patients [91.7%]; difference, −28.2%; 90% CI, −41.9% to −14.4%; P P = .002). Time to intracranial failure was significantly shorter for SRS alone compared with SRS plus WBRT (hazard ratio, 3.6; 95% CI, 2.2-5.9; P P = .26). Median overall survival was 10.4 months for SRS alone and 7.4 months for SRS plus WBRT (hazard ratio, 1.02; 95% CI, 0.75-1.38; P = .92). For long-term survivors, the incidence of cognitive deterioration was less after SRS alone at 3 months (5/11 [45.5%] vs 16/17 [94.1%]; difference, −48.7%; 95% CI, −87.6% to −9.7%; P = .007) and at 12 months (6/10 [60%] vs 17/18 [94.4%]; difference, −34.4%; 95% CI, −74.4% to 5.5%; P = .04). Conclusions and Relevance Among patients with 1 to 3 brain metastases, the use of SRS alone, compared with SRS combined with WBRT, resulted in less cognitive deterioration at 3 months. In the absence of a difference in overall survival, these findings suggest that for patients with 1 to 3 brain metastases amenable to radiosurgery, SRS alone may be a preferred strategy. Trial Registration clinicaltrials.gov Identifier:NCT00377156

Journal ArticleDOI
26 Jan 2016-JAMA
TL;DR: Screening for depression in the general adult population, including pregnant and postpartum women, should be implemented with adequate systems in place to ensure accurate diagnosis, effective treatment, and appropriate follow-up.
Abstract: Description Update of the 2009 US Preventive Services Task Force (USPSTF) recommendation on screening for depression in adults. Methods The USPSTF reviewed the evidence on the benefits and harms of screening for depression in adult populations, including older adults and pregnant and postpartum women; the accuracy of depression screening instruments; and the benefits and harms of depression treatment in these populations. Population This recommendation applies to adults 18 years and older. Recommendation The USPSTF recommends screening for depression in the general adult population, including pregnant and postpartum women. Screening should be implemented with adequate systems in place to ensure accurate diagnosis, effective treatment, and appropriate follow-up. (B recommendation)

Journal ArticleDOI
12 Jul 2016-JAMA
TL;DR: An evaluation of the rate of within-couple HIV transmission among serodifferent heterosexual and MSM couples, during periods of sex without condoms and when the HIV-positive partner had an HIV-1 RNA load of less than 200 copies/mL, found no phylogenetically linked transmissions.
Abstract: Importance A key factor in assessing the effectiveness and cost-effectiveness of antiretroviral therapy (ART) as a prevention strategy is the absolute risk of HIV transmission through condomless sex with suppressed HIV-1 RNA viral load for both anal and vaginal sex. Objective To evaluate the rate of within-couple HIV transmission (heterosexual and men who have sex with men [MSM]) during periods of sex without condoms and when the HIV-positive partner had HIV-1 RNA load less than 200 copies/mL. Design, Setting, and Participants The prospective, observational PARTNER (Partners of People on ART—A New Evaluation of the Risks) study was conducted at 75 clinical sites in 14 European countries and enrolled 1166 HIV serodifferent couples (HIV-positive partner taking suppressive ART) who reported condomless sex (September 2010 to May 2014). Eligibility criteria for inclusion of couple-years of follow-up were condomless sex and HIV-1 RNA load less than 200 copies/mL. Anonymized phylogenetic analysis compared couples’ HIV-1 polymerase and envelope sequences if an HIV-negative partner became infected to determine phylogenetically linked transmissions. Exposures Condomless sexual activity with an HIV-positive partner taking virally suppressive ART. Main Outcomes and Measures Risk of within-couple HIV transmission to the HIV-negative partner Results Among 1166 enrolled couples, 888 (mean age, 42 years [IQR, 35-48]; 548 heterosexual [61.7%] and 340 MSM [38.3%]) provided 1238 eligible couple-years of follow-up (median follow-up, 1.3 years [IQR, 0.8-2.0]). At baseline, couples reported condomless sex for a median of 2 years (IQR, 0.5-6.3). Condomless sex with other partners was reported by 108 HIV-negative MSM (33%) and 21 heterosexuals (4%). During follow-up, couples reported condomless sex a median of 37 times per year (IQR, 15-71), with MSM couples reporting approximately 22 000 condomless sex acts and heterosexuals approximately 36 000. Although 11 HIV-negative partners became HIV-positive (10 MSM; 1 heterosexual; 8 reported condomless sex with other partners), no phylogenetically linked transmissions occurred over eligible couple-years of follow-up, giving a rate of within-couple HIV transmission of zero, with an upper 95% confidence limit of 0.30/100 couple-years of follow-up. The upper 95% confidence limit for condomless anal sex was 0.71 per 100 couple-years of follow-up. Conclusions and Relevance Among serodifferent heterosexual and MSM couples in which the HIV-positive partner was using suppressive ART and who reported condomless sex, during median follow-up of 1.3 years per couple, there were no documented cases of within-couple HIV transmission (upper 95% confidence limit, 0.30/100 couple-years of follow-up). Additional longer-term follow-up is necessary to provide more precise estimates of risk.

Journal ArticleDOI
02 Feb 2016-JAMA
TL;DR: This Viewpoint summarizes the updated recommendations of the US Department of Health and Human Services’ recently released 2015-2020 Dietary Guidelines for Americans.
Abstract: This Viewpoint summarizes the updated recommendations of the US Department of Health and Human Services’ recently released 2015-2020 Dietary Guidelines for Americans.

Journal ArticleDOI
28 Jun 2016-JAMA
TL;DR: Among ambulatory adults aged 75 years or older, treating to an SBP target of less than 120 mm Hg, compared with an SBP target of less than 140 mm Hg, resulted in significantly lower rates of fatal and nonfatal major cardiovascular events and death from any cause.
Abstract: Importance The appropriate treatment target for systolic blood pressure (SBP) in older patients with hypertension remains uncertain. Objective To evaluate the effects of an intensive SBP target (less than 120 mm Hg) compared with a standard SBP target (less than 140 mm Hg) in adults aged 75 years or older with hypertension. Design, Setting, and Participants A multicenter, randomized clinical trial of patients aged 75 years or older who participated in the Systolic Blood Pressure Intervention Trial (SPRINT). Recruitment began on October 20, 2010, and follow-up ended on August 20, 2015. Interventions Participants were randomized to an SBP target of less than 120 mm Hg (intensive treatment group, n = 1317) or an SBP target of less than 140 mm Hg (standard treatment group, n = 1319). Main Outcomes and Measures The primary cardiovascular disease outcome was a composite of nonfatal myocardial infarction, acute coronary syndrome not resulting in a myocardial infarction, nonfatal stroke, nonfatal acute decompensated heart failure, and death from cardiovascular causes. All-cause mortality was a secondary outcome. Results Among 2636 participants (mean age, 79.9 years; 37.9% women), 2510 (95.2%) provided complete follow-up data. At a median follow-up of 3.14 years, there was a significantly lower rate of the primary composite outcome (102 events in the intensive treatment group vs 148 events in the standard treatment group; hazard ratio [HR], 0.66 [95% CI, 0.51-0.85]) and all-cause mortality (73 deaths vs 107 deaths, respectively; HR, 0.67 [95% CI, 0.49-0.91]). The overall rate of serious adverse events was not different between treatment groups (48.4% in the intensive treatment group vs 48.3% in the standard treatment group; HR, 0.99 [95% CI, 0.89-1.11]). Absolute rates of hypotension were 2.4% in the intensive treatment group vs 1.4% in the standard treatment group (HR, 1.71 [95% CI, 0.97-3.09]), 3.0% vs 2.4%, respectively, for syncope (HR, 1.23 [95% CI, 0.76-2.00]), 4.0% vs 2.7% for electrolyte abnormalities (HR, 1.51 [95% CI, 0.99-2.33]), 5.5% vs 4.0% for acute kidney injury (HR, 1.41 [95% CI, 0.98-2.04]), and 4.9% vs 5.5% for injurious falls (HR, 0.91 [95% CI, 0.65-1.29]). Conclusions and Relevance Among ambulatory adults aged 75 years or older, treating to an SBP target of less than 120 mm Hg compared with an SBP target of less than 140 mm Hg resulted in significantly lower rates of fatal and nonfatal major cardiovascular events and death from any cause. Trial Registration clinicaltrials.gov Identifier: NCT01206062
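
To get a rough sense of the effect size reported above, the raw event counts can be turned into a crude absolute risk difference and number needed to treat over the roughly 3-year median follow-up. This back-of-the-envelope sketch ignores censoring and the time-to-event analysis actually used in the trial, so it only approximates the published hazard-ratio results.

```python
# Primary composite outcome event counts from the abstract above
intensive_events, intensive_n = 102, 1317   # SBP target < 120 mm Hg
standard_events, standard_n = 148, 1319     # SBP target < 140 mm Hg

risk_intensive = intensive_events / intensive_n   # ~0.077
risk_standard = standard_events / standard_n      # ~0.112
ard = risk_standard - risk_intensive              # crude absolute risk difference, ~0.035
nnt = 1 / ard                                     # roughly 29 patients treated to the lower
print(round(ard, 3), round(nnt))                  # target for ~3 years per event avoided
```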

Journal ArticleDOI
27 Sep 2016-JAMA
TL;DR: The achieved absolute LDL-C level was significantly associated with the absolute rate of major coronary events (coronary death or MI) for both primary and secondary prevention trials (1.5%-2.5% lower event rate) and for established nonstatin interventions that work primarily via upregulation of LDL receptor expression (ie, diet, bile acid sequestrants, ileal bypass, and ezetimibe).
Abstract: Importance The comparative clinical benefit of nonstatin therapies that reduce low-density lipoprotein cholesterol (LDL-C) remains uncertain. Objective To evaluate the association between lowering LDL-C and relative cardiovascular risk reduction across different statin and nonstatin therapies. Data Sources and Study Selection The MEDLINE and EMBASE databases were searched (1966-July 2016). The key inclusion criteria were that the study was a randomized clinical trial and the reported clinical outcomes included myocardial infarction (MI). Studies were excluded if the duration was less than 6 months or had fewer than 50 clinical events. Studies of 9 different types of LDL-C reduction approaches were included. Data Extraction and Synthesis Two authors independently extracted and entered data into standardized data sheets and data were analyzed using meta-regression. Main Outcomes and Measures The relative risk (RR) of major vascular events (a composite of cardiovascular death, acute MI or other acute coronary syndrome, coronary revascularization, or stroke) associated with the absolute reduction in LDL-C level; 5-year rate of major coronary events (coronary death or MI) associated with achieved LDL-C level. Results A total of 312 175 participants (mean age, 62 years; 24% women; mean baseline LDL-C level of 3.16 mmol/L [122.3 mg/dL]) from 49 trials with 39 645 major vascular events were included. The RR for major vascular events per 1-mmol/L (38.7-mg/dL) reduction in LDL-C level was 0.77 (95% CI, 0.71-0.84;P Conclusions and Relevance In this meta-regression analysis, the use of statin and nonstatin therapies that act via upregulation of LDL receptor expression to reduce LDL-C were associated with similar RRs of major vascular events per change in LDL-C. Lower achieved LDL-C levels were associated with lower rates of major coronary events.
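
The headline result above is a relative risk of 0.77 for major vascular events per 1-mmol/L (38.7-mg/dL) reduction in LDL-C. If one assumes that this per-unit effect compounds multiplicatively for larger reductions, in keeping with the meta-regression's per-unit framing, the expected relative risk for an arbitrary LDL-C reduction can be sketched as below; the multiplicative scaling is my assumption, not a statement from the paper.

```python
RR_PER_MMOL = 0.77      # relative risk per 1-mmol/L LDL-C reduction (from the abstract)
MGDL_PER_MMOL = 38.7    # unit conversion given in the abstract

def approx_relative_risk(ldl_reduction_mgdl: float) -> float:
    """Assumes the per-mmol/L effect scales log-linearly (an illustrative assumption)."""
    return RR_PER_MMOL ** (ldl_reduction_mgdl / MGDL_PER_MMOL)

print(approx_relative_risk(38.7))   # ~0.77 by construction
print(approx_relative_risk(77.4))   # ~0.59 for a 2-mmol/L reduction, under this assumption
```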

Journal ArticleDOI
19 Apr 2016-JAMA
TL;DR: Among patients with advanced melanoma, pembrolizumab administration was associated with an overall objective response rate of 33%, a 12-month progression-free survival rate of 35%, and a median overall survival of 23 months; grade 3 or 4 treatment-related AEs occurred in 14%.
Abstract: Importance The programmed death 1 (PD-1) pathway limits immune responses to melanoma and can be blocked with the humanized anti-PD-1 monoclonal antibody pembrolizumab. Objective To characterize the association of pembrolizumab with tumor response and overall survival among patients with advanced melanoma. Design, Settings, and Participants Open-label, multicohort, phase 1b clinical trials (enrollment, December 2011-September 2013). Median duration of follow-up was 21 months. The study was performed in academic medical centers in Australia, Canada, France, and the United States. Eligible patients were aged 18 years and older and had advanced or metastatic melanoma. Data were pooled from 655 enrolled patients (135 from a nonrandomized cohort [n = 87 ipilimumab naive; n = 48 ipilimumab treated] and 520 from randomized cohorts [n = 226 ipilimumab naive; n = 294 ipilimumab treated]). Cutoff dates were April 18, 2014, for safety analyses and October 18, 2014, for efficacy analyses. Exposures Pembrolizumab 10 mg/kg every 2 weeks, 10 mg/kg every 3 weeks, or 2 mg/kg every 3 weeks continued until disease progression, intolerable toxicity, or investigator decision. Main Outcomes and Measures The primary end point was confirmed objective response rate (best overall response of complete response or partial response) in patients with measurable disease at baseline per independent central review. Secondary end points included toxicity, duration of response, progression-free survival, and overall survival. Results Among the 655 patients (median [range] age, 61 [18-94] years; 405 [62%] men), 581 had measurable disease at baseline. An objective response was reported in 194 of 581 patients (33% [95% CI, 30%-37%]) and in 60 of 133 treatment-naive patients (45% [95% CI, 36% to 54%]). Overall, 74% (152/205) of responses were ongoing at the time of data cutoff; 44% (90/205) of patients had response duration for at least 1 year and 79% (162/205) had response duration for at least 6 months. Twelve-month progression-free survival rates were 35% (95% CI, 31%-39%) in the total population and 52% (95% CI, 43%-60%) among treatment-naive patients. Median overall survival in the total population was 23 months (95% CI, 20-29) with a 12-month survival rate of 66% (95% CI, 62%-69%) and a 24-month survival rate of 49% (95% CI, 44%-53%). In treatment-naive patients, median overall survival was 31 months (95% CI, 24 to not reached) with a 12-month survival rate of 73% (95% CI, 65%-79%) and a 24-month survival rate of 60% (95% CI, 51%-68%). Ninety-two of 655 patients (14%) experienced at least 1 treatment-related grade 3 or 4 adverse event (AE) and 27 of 655 (4%) patients discontinued treatment because of a treatment-related AE. Treatment-related serious AEs were reported in 59 patients (9%). There were no drug-related deaths. Conclusions and Relevance Among patients with advanced melanoma, pembrolizumab administration was associated with an overall objective response rate of 33%, 12-month progression-free survival rate of 35%, and median overall survival of 23 months; grade 3 or 4 treatment-related AEs occurred in 14%. Trial Registration clinicaltrials.gov Identifier:NCT01295827

Journal ArticleDOI
21 Jun 2016-JAMA
TL;DR: Colonoscopy, flexible sigmoidoscopy, CT colonography, and stool tests have differing levels of evidence to support their use, ability to detect cancer and precursor lesions, and risk of serious adverse events in average-risk adults; although CRC screening has a large body of supporting evidence, additional research is still needed.
Abstract: Importance Colorectal cancer (CRC) remains a significant cause of morbidity and mortality in the United States. Objective To systematically review the effectiveness, diagnostic accuracy, and harms of screening for CRC. Data Sources Searches of MEDLINE, PubMed, and the Cochrane Central Register of Controlled Trials for relevant studies published from January 1, 2008, through December 31, 2014, with surveillance through February 23, 2016. Study Selection English-language studies conducted in asymptomatic populations at general risk of CRC. Data Extraction and Synthesis Two reviewers independently appraised the articles and extracted relevant study data from fair- or good-quality studies. Random-effects meta-analyses were conducted. Main Outcomes and Measures Colorectal cancer incidence and mortality, test accuracy in detecting CRC or adenomas, and serious adverse events. Results Four pragmatic randomized clinical trials (RCTs) evaluating 1-time or 2-time flexible sigmoidoscopy (n = 458 002) were associated with decreased CRC-specific mortality compared with no screening (incidence rate ratio, 0.73; 95% CI, 0.66-0.82). Five RCTs with multiple rounds of biennial screening with guaiac-based fecal occult blood testing (n = 419 966) showed reduced CRC-specific mortality (relative risk [RR], 0.91; 95% CI, 0.84-0.98, at 19.5 years to RR, 0.78; 95% CI, 0.65-0.93, at 30 years). Seven studies of computed tomographic colonography (CTC) with bowel preparation demonstrated per-person sensitivity and specificity to detect adenomas 6 mm and larger comparable with colonoscopy (sensitivity from 73% [95% CI, 58%-84%] to 98% [95% CI, 91%-100%]; specificity from 89% [95% CI, 84%-93%] to 91% [95% CI, 88%-93%]); variability and imprecision may be due to differences in study designs or CTC protocols. Sensitivity of colonoscopy to detect adenomas 6 mm or larger ranged from 75% (95% CI, 63%-84%) to 93% (95% CI, 88%-96%). On the basis of a single stool specimen, the most commonly evaluated families of fecal immunochemical tests (FITs) demonstrated good sensitivity (range, 73%-88%) and specificity (range, 90%-96%). One study (n = 9989) found that FIT plus stool DNA test had better sensitivity in detecting CRC than FIT alone (92%) but lower specificity (84%). Serious adverse events from colonoscopy in asymptomatic persons included perforations (4/10 000 procedures, 95% CI, 2-5 in 10 000) and major bleeds (8/10 000 procedures, 95% CI, 5-14 in 10 000). Computed tomographic colonography may have harms resulting from low-dose ionizing radiation exposure or identification of extracolonic findings. Conclusions and Relevance Colonoscopy, flexible sigmoidoscopy, CTC, and stool tests have differing levels of evidence to support their use, ability to detect cancer and precursor lesions, and risk of serious adverse events in average-risk adults. Although CRC screening has a large body of supporting evidence, additional research is still needed.

Journal ArticleDOI
15 Nov 2016-JAMA
TL;DR: A restrictive RBC transfusion threshold is safe in most clinical settings and the current blood banking practices of using standard-issue blood should be continued.
Abstract: Importance More than 100 million units of blood are collected worldwide each year, yet the indication for red blood cell (RBC) transfusion and the optimal length of RBC storage prior to transfusion are uncertain. Objective To provide recommendations for the target hemoglobin level for RBC transfusion among hospitalized adult patients who are hemodynamically stable and the length of time RBCs should be stored prior to transfusion. Evidence Review Reference librarians conducted a literature search for randomized clinical trials (RCTs) evaluating hemoglobin thresholds for RBC transfusion (1950-May 2016) and RBC storage duration (1948-May 2016) without language restrictions. The results were summarized using the Grading of Recommendations Assessment, Development and Evaluation method. For RBC transfusion thresholds, 31 RCTs included 12 587 participants and compared restrictive thresholds (transfusion not indicated until the hemoglobin level is 7-8 g/dL) with liberal thresholds (transfusion not indicated until the hemoglobin level is 9-10 g/dL). The summary estimates across trials demonstrated that restrictive RBC transfusion thresholds were not associated with higher rates of adverse clinical outcomes, including 30-day mortality, myocardial infarction, cerebrovascular accident, rebleeding, pneumonia, or thromboembolism. For RBC storage duration, 13 RCTs included 5515 participants randomly allocated to receive fresher blood or standard-issue blood. These RCTs demonstrated that fresher blood did not improve clinical outcomes. Findings It is good practice to consider the hemoglobin level, the overall clinical context, patient preferences, and alternative therapies when making transfusion decisions regarding an individual patient. Recommendation 1: a restrictive RBC transfusion threshold in which the transfusion is not indicated until the hemoglobin level is 7 g/dL is recommended for hospitalized adult patients who are hemodynamically stable, including critically ill patients, rather than when the hemoglobin level is 10 g/dL (strong recommendation, moderate quality evidence). A restrictive RBC transfusion threshold of 8 g/dL is recommended for patients undergoing orthopedic surgery, cardiac surgery, and those with preexisting cardiovascular disease (strong recommendation, moderate quality evidence). The restrictive transfusion threshold of 7 g/dL is likely comparable with 8 g/dL, but RCT evidence is not available for all patient categories. These recommendations do not apply to patients with acute coronary syndrome, severe thrombocytopenia (patients treated for hematological or oncological reasons who are at risk of bleeding), and chronic transfusion–dependent anemia (not recommended due to insufficient evidence). Recommendation 2: patients, including neonates, should receive RBC units selected at any point within their licensed dating period (standard issue) rather than limiting patients to transfusion of only fresh RBC units (storage length <10 days). Conclusions and Relevance Research in RBC transfusion medicine has significantly advanced the science in recent years and provides high-quality evidence to inform guidelines. A restrictive transfusion threshold is safe in most clinical settings and the current blood banking practices of using standard-issue blood should be continued.
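As a minimal sketch of the threshold logic described in Recommendation 1, the helper below encodes the 7 g/dL and 8 g/dL restrictive thresholds for the patient groups named in the abstract. It is an illustration of the stated rule, not clinical decision software; the boolean patient-category flags and the strict "<" boundary handling are simplifying assumptions introduced for the example.

```python
def restrictive_transfusion_indicated(hemoglobin_g_dl: float,
                                      hemodynamically_stable: bool,
                                      orthopedic_or_cardiac_surgery_or_cvd: bool) -> bool:
    """Return True if transfusion would be indicated under the restrictive
    thresholds summarized in the guideline abstract: 7 g/dL for hospitalized,
    hemodynamically stable adults (including critically ill patients) and
    8 g/dL for orthopedic surgery, cardiac surgery, or preexisting
    cardiovascular disease.

    Excluded groups (acute coronary syndrome, severe thrombocytopenia,
    chronic transfusion-dependent anemia) are not handled here.
    """
    if not hemodynamically_stable:
        raise ValueError("Recommendation applies to hemodynamically stable patients only")
    threshold = 8.0 if orthopedic_or_cardiac_surgery_or_cvd else 7.0
    return hemoglobin_g_dl < threshold

# Example: a stable ICU patient with hemoglobin 7.4 g/dL and no cardiac history.
print(restrictive_transfusion_indicated(7.4, True, False))  # False (above the 7 g/dL threshold)
```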

Journal ArticleDOI
24 May 2016-JAMA
TL;DR: In a single-center randomized clinical trial of 231 critically ill patients with KDIGO stage 2 acute kidney injury, early initiation of RRT reduced 90-day all-cause mortality compared with delayed initiation, and more patients in the early group recovered renal function by day 90.
Abstract: Importance Optimal timing of initiation of renal replacement therapy (RRT) for severe acute kidney injury (AKI) but without life-threatening indications is still unknown. Objective To determine whether early initiation of RRT in patients who are critically ill with AKI reduces 90-day all-cause mortality. Design, Setting, and Participants Single-center randomized clinical trial of 231 critically ill patients with AKI Kidney Disease: Improving Global Outcomes (KDIGO) stage 2 (serum creatinine ≥2 times baseline or urinary output <0.5 mL/kg/h for ≥12 hours). Interventions Early (within 8 hours of diagnosis of KDIGO stage 2; n = 112) or delayed (within 12 hours of stage 3 AKI or no initiation; n = 119) initiation of RRT. Main Outcomes and Measures The primary end point was mortality at 90 days after randomization. Secondary end points included 28- and 60-day mortality, clinical evidence of organ dysfunction, recovery of renal function, requirement of RRT after day 90, duration of renal support, and intensive care unit (ICU) and hospital length of stay. Results Among 231 patients (mean age, 67 years; men, 146 [63.2%]), all patients in the early group (n = 112) and 108 of 119 patients (90.8%) in the delayed group received RRT. All patients completed follow-up at 90 days. Median time (Q1, Q3) from meeting full eligibility criteria to RRT initiation was significantly shorter in the early group (6.0 hours [Q1, Q3: 4.0, 7.0]) than in the delayed group (25.5 hours [Q1, Q3: 18.8, 40.3]; difference, −21.0 hours [95% CI, −24.0 to −18.0]; P < .001). Ninety-day all-cause mortality was significantly lower in the early group than in the delayed group (P = .03). More patients in the early group recovered renal function by day 90 (60 of 112 patients [53.6%] in the early group vs 46 of 119 patients [38.7%] in the delayed group; odds ratio [OR], 0.55 [95% CI, 0.32 to 0.93]; difference, 14.9% [95% CI, 2.2% to 27.6%]; P = .02). Duration of RRT and length of hospital stay were significantly shorter in the early group than in the delayed group (RRT: 9 days [Q1, Q3: 4, 44] in the early group vs 25 days [Q1, Q3: 7, >90] in the delayed group; P = .04; HR, 0.69 [95% CI, 0.48 to 1.00]; difference, −18 days [95% CI, −41 to 4]; hospital stay: 51 days [Q1, Q3: 31, 74] in the early group vs 82 days [Q1, Q3: 67, >90] in the delayed group). Conclusions and Relevance Among critically ill patients with AKI, early RRT compared with delayed initiation of RRT reduced mortality over the first 90 days. Further multicenter trials of this intervention are warranted. Trial Registration German Clinical Trial Registry Identifier:DRKS00004367
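Eligibility hinged on KDIGO stage 2 AKI. The sketch below encodes that staging criterion per the standard KDIGO definition (serum creatinine 2.0-2.9 times baseline, or urine output below 0.5 mL/kg/h for 12 hours or more); it does not model stage 3 criteria or the trial's additional biomarker and clinical entry requirements.

```python
def meets_kdigo_stage2(creatinine_mg_dl: float,
                       baseline_creatinine_mg_dl: float,
                       urine_output_ml_per_kg_h: float,
                       low_urine_output_hours: float) -> bool:
    """True if either KDIGO stage 2 criterion is met:
    serum creatinine 2.0-2.9 times baseline, or
    urine output < 0.5 mL/kg/h sustained for >= 12 hours.
    (Stage 3 criteria, which supersede stage 2, are not modeled here.)
    """
    creatinine_ratio = creatinine_mg_dl / baseline_creatinine_mg_dl
    creatinine_criterion = 2.0 <= creatinine_ratio < 3.0
    urine_criterion = (urine_output_ml_per_kg_h < 0.5
                       and low_urine_output_hours >= 12)
    return creatinine_criterion or urine_criterion

# Example: creatinine 2.2 mg/dL from a baseline of 1.0 mg/dL, adequate urine output.
print(meets_kdigo_stage2(2.2, 1.0, 0.7, 6))  # True (creatinine criterion met)
```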

Journal ArticleDOI
13 Dec 2016-JAMA
TL;DR: Among patients with angiographic coronary disease treated with statins, addition of evolocumab, compared with placebo, resulted in a greater decrease in PAV after 76 weeks of treatment, and further studies are needed to assess the effects of PCSK9 inhibition on clinical outcomes.
Abstract: Importance Reducing levels of low-density lipoprotein cholesterol (LDL-C) with intensive statin therapy reduces progression of coronary atherosclerosis in proportion to achieved LDL-C levels. Proprotein convertase subtilisin kexin type 9 (PCSK9) inhibitors produce incremental LDL-C lowering in statin-treated patients; however, the effects of these drugs on coronary atherosclerosis have not been evaluated. Objective To determine the effects of PCSK9 inhibition with evolocumab on progression of coronary atherosclerosis in statin-treated patients. Design, Setting, and Participants The GLAGOV multicenter, double-blind, placebo-controlled, randomized clinical trial (enrollment May 3, 2013, to January 12, 2015) conducted at 197 academic and community hospitals in North America, Europe, South America, Asia, Australia, and South Africa and enrolling 968 patients presenting for coronary angiography. Interventions Participants with angiographic coronary disease were randomized to receive monthly evolocumab (420 mg) (n = 484) or placebo (n = 484) via subcutaneous injection for 76 weeks, in addition to statins. Main Outcomes and Measures The primary efficacy measure was the nominal change in percent atheroma volume (PAV) from baseline to week 78, measured by serial intravascular ultrasonography (IVUS) imaging. Secondary efficacy measures were nominal change in normalized total atheroma volume (TAV) and percentage of patients demonstrating plaque regression. Safety and tolerability were also evaluated. Results Among the 968 treated patients (mean age, 59.8 years [SD, 9.2]; 269 [27.8%] women; mean LDL-C level, 92.5 mg/dL [SD, 27.2]), 846 had evaluable imaging at follow-up. Compared with placebo, the evolocumab group achieved lower mean, time-weighted LDL-C levels (93.0 vs 36.6 mg/dL; difference, −56.5 mg/dL [95% CI, −59.7 to −53.4]; P < .001). PAV increased 0.05% with placebo and decreased 0.95% with evolocumab (P < .001), and normalized TAV decreased 0.9 mm³ with placebo and 5.8 mm³ with evolocumab (difference, −4.9 mm³ [95% CI, −7.3 to −2.5]; P < .001). Plaque regression occurred in a greater percentage of patients treated with evolocumab than with placebo for both PAV and TAV (P < .001 for both). Conclusions and Relevance Among patients with angiographic coronary disease treated with statins, addition of evolocumab, compared with placebo, resulted in a greater decrease in PAV after 76 weeks of treatment. Further studies are needed to assess the effects of PCSK9 inhibition on clinical outcomes. Trial Registration clinicaltrials.gov Identifier:NCT01813422
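Percent atheroma volume is derived from IVUS cross sections. A common formulation sums (external elastic membrane area minus lumen area) across matched pullback frames and divides by the summed EEM area; the sketch below assumes that conventional definition, since the abstract does not spell out the trial's frame selection, normalization, or nominal-change calculations.

```python
def percent_atheroma_volume(eem_areas_mm2, lumen_areas_mm2):
    """Percent atheroma volume (PAV) from matched IVUS cross sections.

    PAV = 100 * sum(EEM area - lumen area) / sum(EEM area)
    This mirrors the conventional IVUS definition only; the trial's
    core-laboratory methodology is not reproduced here.
    """
    plaque = sum(e - l for e, l in zip(eem_areas_mm2, lumen_areas_mm2))
    return 100.0 * plaque / sum(eem_areas_mm2)

# Illustrative (made-up) cross-sectional areas in mm^2 for a short pullback segment:
eem = [16.0, 15.2, 14.8, 15.5]
lumen = [9.1, 8.7, 8.9, 9.4]
print(round(percent_atheroma_volume(eem, lumen), 1))  # ~41.3
```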

Journal ArticleDOI
27 Dec 2016-JAMA
TL;DR: Modeled estimates of US spending on personal health care and public health showed substantial increases from 1996 through 2013, with spending on diabetes, ischemic heart disease, and low back and neck pain accounting for the highest amounts of spending by disease category.
Abstract: Importance US health care spending has continued to increase, and now accounts for more than 17% of the US economy. Despite the size and growth of this spending, little is known about how spending on each condition varies by age and across time. Objective To systematically and comprehensively estimate US spending on personal health care and public health, according to condition, age and sex group, and type of care. Design and Setting Government budgets, insurance claims, facility surveys, household surveys, and official US records from 1996 through 2013 were collected and combined. In total, 183 sources of data were used to estimate spending for 155 conditions (including cancer, which was disaggregated into 29 conditions). For each record, spending was extracted, along with the age and sex of the patient, and the type of care. Spending was adjusted to reflect the health condition treated, rather than the primary diagnosis. Exposures Encounter with US health care system. Main Outcomes and Measures National spending estimates stratified by condition, age and sex group, and type of care. Results From 1996 through 2013, $30.1 trillion of personal health care spending was disaggregated by 155 conditions, age and sex group, and type of care. Among these 155 conditions, diabetes had the highest health care spending in 2013, with an estimated $101.4 billion (uncertainty interval [UI], $96.7 billion-$106.5 billion) in spending, including 57.6% (UI, 53.8%-62.1%) spent on pharmaceuticals and 23.5% (UI, 21.7%-25.7%) spent on ambulatory care. Ischemic heart disease accounted for the second-highest amount of health care spending in 2013, with estimated spending of $88.1 billion (UI, $82.7 billion-$92.9 billion), and low back and neck pain accounted for the third-highest amount, with estimated health care spending of $87.6 billion (UI, $67.5 billion-$94.1 billion). The conditions with the highest spending levels varied by age, sex, type of care, and year. Personal health care spending increased for 143 of the 155 conditions from 1996 through 2013. Spending on low back and neck pain and on diabetes increased the most over the 18 years, by an estimated $57.2 billion (UI, $47.4 billion-$64.4 billion) and $64.4 billion (UI, $57.8 billion-$70.7 billion), respectively. From 1996 through 2013, spending on emergency care and retail pharmaceuticals increased at the fastest rates (6.4% [UI, 6.4%-6.4%] and 5.6% [UI, 5.6%-5.6%] annual growth rate, respectively), which were higher than annual rates for spending on inpatient care (2.8% [UI, 2.8%-2.8%]) and nursing facility care (2.5% [UI, 2.5%-2.5%]). Conclusions and Relevance Modeled estimates of US spending on personal health care and public health showed substantial increases from 1996 through 2013, with spending on diabetes, ischemic heart disease, and low back and neck pain accounting for the highest amounts of spending by disease category. The rate of change in annual spending varied considerably among different conditions and types of care. This information may have implications for efforts to control US health care spending.
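The annual growth rates quoted above are annualized rates of change over the 1996-2013 window. A minimal sketch of one standard way to compute such a figure (compound annual growth rate) is below, using made-up spending amounts; the study's actual estimation framework involved far more than this.

```python
def annualized_growth_rate(spending_start: float, spending_end: float, years: int) -> float:
    """Compound annual growth rate between two spending estimates, in percent."""
    return ((spending_end / spending_start) ** (1.0 / years) - 1.0) * 100.0

# Illustrative (made-up) figures: spending growing from $30 billion in 1996
# to $87 billion in 2013, i.e., over a 17-year interval.
print(round(annualized_growth_rate(30e9, 87e9, 17), 1))  # ~6.5% per year
```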

Journal ArticleDOI
03 May 2016-JAMA
TL;DR: In this open-label, randomized trial involving patients with locally advanced pancreatic cancer with disease controlled after 4 months of induction chemotherapy, there was no significant difference in overall survival with chemoradiotherapy compared with chemotherapy alone, and there was no significant difference in overall survival with gemcitabine compared with gemcitabine plus erlotinib used as maintenance therapy.
Abstract: Importance In locally advanced pancreatic cancer, the role of chemoradiotherapy is controversial and the efficacy of erlotinib is unknown. Objectives To assess whether chemoradiotherapy improves overall survival of patients with locally advanced pancreatic cancer controlled after 4 months of gemcitabine-based induction chemotherapy and to assess the effect of erlotinib on survival. Design, Setting, and Participants In LAP07, an international, open-label, phase 3 randomized trial, 449 patients were enrolled between 2008 and 2011. Follow-up ended in February 2013. Interventions In the first randomization, 223 patients received 1000 mg/m² weekly of gemcitabine alone and 219 patients received 1000 mg/m² of gemcitabine plus 100 mg/d of erlotinib. In the second randomization involving patients with progression-free disease after 4 months, 136 patients received 2 months of the same chemotherapy and 133 underwent chemoradiotherapy (54 Gy plus capecitabine). Main Outcomes and Measures The primary outcome was overall survival from the date of the first randomization. Secondary outcomes were the effect of erlotinib and quality assurance of radiotherapy on overall survival, progression-free survival of gemcitabine-erlotinib and erlotinib maintenance with gemcitabine alone at the second randomization, and toxic effects. Results A total of 442 of the 449 patients (232 men; median age, 63.3 years) enrolled underwent the first randomization. Of these, 269 underwent the second randomization. Interim analysis was performed when 221 patients died (109 in the chemoradiotherapy group and 112 in the chemotherapy group), reaching the early stopping boundaries for futility. With a median follow-up of 36.7 months, the median overall survival from the date of the first randomization was not significantly different between chemotherapy at 16.5 months (95% CI, 14.5-18.5 months) and chemoradiotherapy at 15.2 months (95% CI, 13.9-17.3 months; hazard ratio [HR], 1.03; 95% CI, 0.79-1.34; P = .83). Median overall survival from the date of the first randomization for the 223 patients receiving gemcitabine was 13.6 months (95% CI, 12.3-15.3 months) and was 11.9 months (95% CI, 10.4-13.5 months) for the 219 patients receiving gemcitabine plus erlotinib (HR, 1.19; 95% CI, 0.97-1.45; P = .09; 188 deaths vs 191 deaths). Chemoradiotherapy was associated with decreased local progression (32% vs 46%, P = .03) and no increase in grade 3 to 4 toxicity, except for nausea. Conclusions and Relevance In this open-label, randomized trial involving patients with locally advanced pancreatic cancer with disease controlled after 4 months of induction chemotherapy, there was no significant difference in overall survival with chemoradiotherapy compared with chemotherapy alone and there was no significant difference in overall survival with gemcitabine compared with gemcitabine plus erlotinib used as maintenance therapy. Trial Registration clinicaltrials.gov Identifier:NCT00634725

Journal ArticleDOI
26 Apr 2016-JAMA
TL;DR: A clinical decision tool was developed and validated to identify patients expected to derive benefit vs harm from continuing thienopyridine beyond 1 year after percutaneous coronary intervention and to inform dual antiplatelet therapy duration.
Abstract: Importance Dual antiplatelet therapy after percutaneous coronary intervention (PCI) reduces ischemia but increases bleeding. Objective To develop a clinical decision tool to identify patients expected to derive benefit vs harm from continuing thienopyridine beyond 1 year after PCI. Design, Setting, and Participants Among 11 648 randomized DAPT Study patients from 11 countries (August 2009-May 2014), a prediction rule was derived stratifying patients into groups to distinguish ischemic and bleeding risk 12 to 30 months after PCI. Validation was internal via bootstrap resampling and external among 8136 patients from 36 countries randomized in the PROTECT trial (June 2007-July 2014). Exposures Twelve months of open-label thienopyridine plus aspirin, then randomized to 18 months of continued thienopyridine plus aspirin vs placebo plus aspirin. Main Outcomes and Measures Ischemia (myocardial infarction or stent thrombosis) and bleeding (moderate or severe) 12 to 30 months after PCI. Results Among DAPT Study patients (derivation cohort; mean age, 61.3 years; women, 25.1%), ischemia occurred in 348 patients (3.0%) and bleeding in 215 (1.8%). Derivation cohort models predicting ischemia and bleeding had c statistics of 0.70 and 0.68, respectively. The prediction rule assigned 1 point each for myocardial infarction at presentation, prior myocardial infarction or PCI, diabetes, stent diameter less than 3 mm, smoking, and paclitaxel-eluting stent; 2 points each for history of congestive heart failure/low ejection fraction and vein graft intervention; −1 point for age 65 to younger than 75 years; and −2 points for age 75 years or older. Among the high score group (score ≥2, n = 5917), continued thienopyridine vs placebo was associated with reduced ischemic events (2.7% vs 5.7%; risk difference [RD], −3.0% [95% CI, −4.1% to −2.0%]; P < .001) without a significant increase in bleeding, whereas among the low score group (score <2), continued thienopyridine was associated with increased bleeding without a significant reduction in ischemic events. Conclusion and Relevance Among patients not sustaining major bleeding or ischemic events 1 year after PCI, a prediction rule assessing late ischemic and bleeding risks to inform dual antiplatelet therapy duration showed modest accuracy in derivation and validation cohorts. This rule requires further prospective evaluation to assess potential effects on patient care, as well as validation in other cohorts. Trial Registration clinicaltrials.gov Identifier:NCT00977938.
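The point assignments for the prediction rule are spelled out in the abstract; the sketch below simply adds them up and applies the abstract's ≥2 cutoff for the high-score group. The variable names are introduced here for illustration and the function is not a substitute for the published scoring instrument.

```python
def dapt_score(age_years: float,
               mi_at_presentation: bool,
               prior_mi_or_pci: bool,
               diabetes: bool,
               stent_diameter_lt_3mm: bool,
               smoker: bool,
               paclitaxel_eluting_stent: bool,
               chf_or_low_ef: bool,
               vein_graft_intervention: bool) -> int:
    """Sum the point assignments described in the abstract:
    +1 each for MI at presentation, prior MI or PCI, diabetes,
    stent diameter < 3 mm, smoking, and paclitaxel-eluting stent;
    +2 each for CHF/low ejection fraction and vein graft intervention;
    -1 for age 65 to <75 years; -2 for age >= 75 years.
    """
    score = sum([mi_at_presentation, prior_mi_or_pci, diabetes,
                 stent_diameter_lt_3mm, smoker, paclitaxel_eluting_stent])
    score += 2 * (chf_or_low_ef + vein_graft_intervention)
    if age_years >= 75:
        score -= 2
    elif age_years >= 65:
        score -= 1
    return score

# Example: a 68-year-old smoker with diabetes who presented with MI, no other features.
s = dapt_score(68, True, False, True, False, True, False, False, False)
print(s, "high score group" if s >= 2 else "low score group")  # 2 high score group
```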

Journal ArticleDOI
22 Nov 2016-JAMA
TL;DR: Palliative care was associated consistently with improvements in advance care planning, patient and caregiver satisfaction, and lower health care utilization, and evidence of associations with other outcomes was mixed.
Abstract: Importance The use of palliative care programs and the number of trials assessing their effectiveness have increased. Objective To determine the association of palliative care with quality of life (QOL), symptom burden, survival, and other outcomes for people with life-limiting illness and for their caregivers. Data Sources MEDLINE, EMBASE, CINAHL, and Cochrane CENTRAL to July 2016. Study Selection Randomized clinical trials of palliative care interventions in adults with life-limiting illness. Data Extraction and Synthesis Two reviewers independently extracted data. Narrative synthesis was conducted for all trials. Quality of life, symptom burden, and survival were analyzed using random-effects meta-analysis, with estimates of QOL translated to units of the Functional Assessment of Chronic Illness Therapy–palliative care scale (FACIT-Pal) instrument (range, 0-184 [worst-best]; minimal clinically important difference [MCID], 9 points); and symptom burden translated to the Edmonton Symptom Assessment Scale (ESAS) (range, 0-90 [best-worst]; MCID, 5.7 points). Main Outcomes and Measures Quality of life, symptom burden, survival, mood, advance care planning, site of death, health care satisfaction, resource utilization, and health care expenditures. Results Forty-three RCTs provided data on 12 731 patients (mean age, 67 years) and 2479 caregivers. Thirty-five trials used usual care as the control, and 14 took place in the ambulatory setting. In the meta-analysis, palliative care was associated with statistically and clinically significant improvements in patient QOL at the 1- to 3-month follow-up (standardized mean difference, 0.46; 95% CI, 0.08 to 0.83; FACIT-Pal mean difference, 11.36) and symptom burden at the 1- to 3-month follow-up (standardized mean difference, −0.66; 95% CI, −1.25 to −0.07; ESAS mean difference, −10.30). When analyses were limited to trials at low risk of bias (n = 5), the association between palliative care and QOL was attenuated but remained statistically significant (standardized mean difference, 0.20; 95% CI, 0.06 to 0.34; FACIT-Pal mean difference, 4.94), whereas the association with symptom burden was not statistically significant (standardized mean difference, −0.21; 95% CI, −0.42 to 0.00; ESAS mean difference, −3.28). There was no association between palliative care and survival (hazard ratio, 0.90; 95% CI, 0.69 to 1.17). Palliative care was associated consistently with improvements in advance care planning, patient and caregiver satisfaction, and lower health care utilization. Evidence of associations with other outcomes was mixed. Conclusions and Relevance In this meta-analysis, palliative care interventions were associated with improvements in patient QOL and symptom burden. Findings for caregiver outcomes were inconsistent. However, many associations were no longer significant when limited to trials at low risk of bias, and there was no significant association between palliative care and survival.
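Translating a standardized mean difference into FACIT-Pal or ESAS points is a rescaling by an assumed instrument standard deviation. The sketch below shows that conversion; the SDs used here (~24.7 FACIT-Pal points and ~15.6 ESAS points) are back-calculated from the abstract's reported SMDs and mean differences for illustration and may differ from the values the review authors actually applied.

```python
def smd_to_instrument_units(smd: float, instrument_sd: float) -> float:
    """Convert a standardized mean difference to instrument points
    by multiplying by the instrument's assumed standard deviation."""
    return smd * instrument_sd

# FACIT-Pal: SMD 0.46 with an assumed SD of ~24.7 points reproduces the
# abstract's 11.36-point difference (MCID is 9 points).
print(round(smd_to_instrument_units(0.46, 24.7), 2))   # ~11.36
# ESAS: SMD -0.66 with an assumed SD of ~15.6 points gives ~-10.3 (MCID is 5.7 points).
print(round(smd_to_instrument_units(-0.66, 15.6), 1))  # ~-10.3
```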

Journal ArticleDOI
07 Jun 2016-JAMA
TL;DR: Among healthy children with a single anesthesia exposure before age 36 months, compared with healthy siblings with no anesthesia exposure, there were no statistically significant differences in IQ scores in later childhood.
Abstract: Importance Exposure of young animals to commonly used anesthetics causes neurotoxicity including impaired neurocognitive function and abnormal behavior. The potential neurocognitive and behavioral effects of anesthesia exposure in young children are thus important to understand. Objective To examine if a single anesthesia exposure in otherwise healthy young children was associated with impaired neurocognitive development and abnormal behavior in later childhood. Design, Setting, and Participants Sibling-matched cohort study conducted between May 2009 and April 2015 at 4 university-based US pediatric tertiary care hospitals. The study cohort included sibling pairs within 36 months in age and currently 8 to 15 years old. The exposed siblings were healthy at surgery/anesthesia. Neurocognitive and behavior outcomes were prospectively assessed with retrospectively documented anesthesia exposure data. Exposures A single exposure to general anesthesia during inguinal hernia surgery in the exposed sibling and no anesthesia exposure in the unexposed sibling, before age 36 months. Main Outcomes and Measures The primary outcome was global cognitive function (IQ). Secondary outcomes included domain-specific neurocognitive functions and behavior. A detailed neuropsychological battery assessed IQ and domain-specific neurocognitive functions. Parents completed validated, standardized reports of behavior. Results Among the 105 sibling pairs, the exposed siblings (mean age, 17.3 months at surgery/anesthesia; 9.5% female) and the unexposed siblings (44% female) had IQ testing at mean ages of 10.6 and 10.9 years, respectively. All exposed children received inhaled anesthetic agents, and anesthesia duration ranged from 20 to 240 minutes, with a median duration of 80 minutes. Mean IQ scores between exposed siblings (scores: full scale = 111; performance = 108; verbal = 111) and unexposed siblings (scores: full scale = 111; performance = 107; verbal = 111) were not statistically significantly different. Differences in mean IQ scores between sibling pairs were: full scale = −0.2 (95% CI, −2.6 to 2.9); performance = 0.5 (95% CI, −2.7 to 3.7); and verbal = −0.5 (95% CI, −3.2 to 2.2). No statistically significant differences in mean scores were found between sibling pairs in memory/learning, motor/processing speed, visuospatial function, attention, executive function, language, or behavior. Conclusions and Relevance Among healthy children with a single anesthesia exposure before age 36 months, compared with healthy siblings with no anesthesia exposure, there were no statistically significant differences in IQ scores in later childhood. Further study of repeated exposure, prolonged exposure, and vulnerable subgroups is needed.
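The primary IQ comparison is a within-sibling-pair mean difference with a 95% CI. A minimal sketch of that basic paired estimate on made-up pair differences is shown below; it is not the study's analysis, which accounted for additional design factors and covariates.

```python
import math

def paired_mean_difference_ci(differences, z: float = 1.96):
    """Mean of within-pair differences with a normal-approximation 95% CI."""
    n = len(differences)
    mean = sum(differences) / n
    var = sum((d - mean) ** 2 for d in differences) / (n - 1)
    se = math.sqrt(var / n)
    return mean, mean - z * se, mean + z * se

# Illustrative (made-up) full-scale IQ differences (exposed minus unexposed) for 8 sibling pairs:
diffs = [-3, 2, 0, -5, 4, 1, -2, 3]
print(paired_mean_difference_ci(diffs))  # mean near 0 with a CI spanning 0
```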