
Showing papers in "JAMA in 2019"


Journal ArticleDOI
22 Jan 2019-JAMA
TL;DR: This review summarizes current approaches and evolving strategies for local and systemic therapy of breast cancer across its 3 major subtypes, which carry distinct risk profiles and treatment strategies.
Abstract: Importance Breast cancer will be diagnosed in 12% of women in the United States over the course of their lifetimes and more than 250 000 new cases of breast cancer were diagnosed in the United States in 2017. This review focuses on current approaches and evolving strategies for local and systemic therapy of breast cancer. Observations Breast cancer is categorized into 3 major subtypes based on the presence or absence of molecular markers for estrogen or progesterone receptors and human epidermal growth factor receptor 2 (ERBB2; formerly HER2): hormone receptor positive/ERBB2 negative (70% of patients), ERBB2 positive (15%-20%), and triple-negative (tumors lacking all 3 standard molecular markers; 15%). More than 90% of breast cancers are not metastatic at the time of diagnosis. For people presenting without metastatic disease, therapeutic goals are tumor eradication and preventing recurrence. Triple-negative breast cancer is more likely to recur than the other 2 subtypes, with 85% 5-year breast cancer–specific survival for stage I triple-negative tumors vs 94% to 99% for hormone receptor–positive and ERBB2-positive tumors. Systemic therapy for nonmetastatic breast cancer is determined by subtype: patients with hormone receptor–positive tumors receive endocrine therapy, and a minority receive chemotherapy as well; patients with ERBB2-positive tumors receive ERBB2-targeted antibody or small-molecule inhibitor therapy combined with chemotherapy; and patients with triple-negative tumors receive chemotherapy alone. Local therapy for all patients with nonmetastatic breast cancer consists of surgical resection, with consideration of postoperative radiation if lumpectomy is performed. Increasingly, some systemic therapy is delivered before surgery. Tailoring postoperative treatment based on preoperative treatment response is under investigation. Metastatic breast cancer is treated according to subtype, with goals of prolonging life and palliating symptoms. Median overall survival for metastatic triple-negative breast cancer is approximately 1 year vs approximately 5 years for the other 2 subtypes. Conclusions and Relevance Breast cancer consists of 3 major tumor subtypes categorized according to estrogen or progesterone receptor expression and ERBB2 gene amplification. The 3 subtypes have distinct risk profiles and treatment strategies. Optimal therapy for each patient depends on tumor subtype, anatomic cancer stage, and patient preferences.

2,310 citations


Journal ArticleDOI
05 Mar 2019-JAMA
TL;DR: This initiative will leverage critical scientific advances in HIV prevention, diagnosis, treatment, and care by coordinating the highly successful programs, resources, and infrastructure of the CDC, the National Institutes of Health, the Health Resources and Services Administration, the Substance Abuse and Mental Health Services Administration (SAMHSA), and the Indian Health Service (IHS).
Abstract: In the State of the Union Address on February 5, 2019, President Donald J. Trump announced his administration’s goal to end the HIV epidemic in the United States within 10 years. The president’s budget will ask Republicans and Democrats to make the needed commitment to support a concrete plan to achieve this goal. While landmark biomedical and scientific research advances have led to the development of many successful HIV treatment regimens, prevention strategies, and improved care for persons with HIV, the HIV pandemic remains a public health crisis in the United States and globally. In the United States, more than 700 000 people have died as a result of HIV/AIDS since the disease was first recognized in 1981, and the Centers for Disease Control and Prevention (CDC) estimates that 1.1 million people are currently living with HIV, about 15% of whom are unaware of their HIV infection.1 Approximately 23% of new infections are transmitted by individuals who are unaware of their infection and approximately 69% of new infections are transmitted by those who are diagnosed with HIV infection but who are not in care.2 In 2017, more than 38 000 people were diagnosed with HIV in the United States. The majority of these cases were among young black/African American and Hispanic/Latino men who have sex with men (MSM). In addition, there was high incidence of HIV among transgender individuals, high-risk heterosexuals, and persons who inject drugs.1 This public health issue is also connected to the broader opioid crisis: 2015 marked the first time in 2 decades that the number of HIV cases attributed to drug injection increased.3 Of particular note, more than half of the new HIV diagnoses were reported in southern states and Washington, DC. During 2016 and 2017, of the 3007 counties in the United States, half of new HIV diagnoses were concentrated in 48 “hotspot” counties, Washington, DC, and Puerto Rico.4 The US Department of Health and Human Services (HHS) has proposed a new initiative to address this ongoing public health crisis with the goals of first reducing numbers of incident infections in the United States by 75% within 5 years, and then by 90% within 10 years. This initiative will leverage critical scientific advances in HIV prevention, diagnosis, treatment, and care by coordinating the highly successful programs, resources, and infrastructure of the CDC, the National Institutes of Health (NIH), the Health Resources and Services Administration (HRSA), the Substance Abuse and Mental Health Services Administration (SAMHSA), and the Indian Health Service (IHS). The initial phase, coordinated by the HHS Office of the Assistant Secretary of Health, will focus on geographic and demographic hotspots in 19 states, Washington, DC, and Puerto Rico, where the majority of the new HIV cases are reported, as well as in 7 states with a disproportionate occurrence of HIV in rural areas (eFigure in the Supplement). The strategic initiative includes 4 pillars: 1. diagnose all individuals with HIV as early as possible after infection; 2. treat HIV infection rapidly and effectively to achieve sustained viral suppression; 3. prevent at-risk individuals from acquiring HIV infection, including the use of pre-exposure prophylaxis (PrEP); and 4. rapidly detect and respond to emerging clusters of HIV infection to further reduce new transmissions. 
A key component for the success of this initiative is active partnerships with city, county, and state public health departments, local and regional clinics and health care facilities, clinicians, providers of medication-assisted treatment for opioid use disorder, and community- and faith-based organizations. The implementation of advances in HIV research achieved over 4 decades will be essential to achieving the goals of the initiative. Clinical studies serve as the scientific basis for strategies to prevent HIV transmission/acquisition. In this regard, as reviewed in a recent Viewpoint in JAMA,5 large clinical studies have recently proven the concept of undetectable = untransmittable (U = U), which has broad public health implications for HIV prevention and treatment at both the individual and societal level. U = U means that individuals with HIV who receive antiretroviral therapy (ART) and achieve and maintain an undetectable viral load do not sexually transmit HIV to others.5 U = U will be invaluable in helping to counteract the stigma associated with HIV, and this initiative will create environments in which all people, no matter their cultural background or risk profile, feel welcome for prevention and treatment services. Results from numerous clinical trials have led to significant advances in the treatment of HIV infection, such that a person living with HIV who is properly treated and adherent with therapy can expect to achieve a nearly normal lifespan. This progress is due to antiviral drug combinations drawn from more than 30 agents approved by the US Food and Drug Administration (FDA), as well as medications for the prevention and treatment of HIV-associated coinfections and comorbidities. Furthermore, PrEP with a daily regimen of 2 oral antiretroviral drugs in a single pill has proven to be highly effective in preventing HIV infection for individuals at high risk. In addition, postexposure prophylaxis provides a highly effective option for preventing HIV infection after a potential exposure.

885 citations


Journal ArticleDOI
02 Apr 2019-JAMA
TL;DR: Among patients with AF, the strategy of catheter ablation, compared with medical therapy, did not significantly reduce the primary composite end point of death, disabling stroke, serious bleeding, or cardiac arrest; however, lower-than-expected event rates and treatment crossovers should be considered in interpreting the results of the trial.
Abstract: Importance Catheter ablation is effective in restoring sinus rhythm in atrial fibrillation (AF), but its effects on long-term mortality and stroke risk are uncertain. Objective To determine whether catheter ablation is more effective than conventional medical therapy for improving outcomes in AF. Design, Setting, and Participants The Catheter Ablation vs Antiarrhythmic Drug Therapy for Atrial Fibrillation trial is an investigator-initiated, open-label, multicenter, randomized trial involving 126 centers in 10 countries. A total of 2204 symptomatic patients with AF aged 65 years and older or younger than 65 years with 1 or more risk factors for stroke were enrolled from November 2009 to April 2016, with follow-up through December 31, 2017. Interventions The catheter ablation group (n = 1108) underwent pulmonary vein isolation, with additional ablative procedures at the discretion of site investigators. The drug therapy group (n = 1096) received standard rhythm and/or rate control drugs guided by contemporaneous guidelines. Main Outcomes and Measures The primary end point was a composite of death, disabling stroke, serious bleeding, or cardiac arrest. Among 13 prespecified secondary end points, 3 are included in this report: all-cause mortality; total mortality or cardiovascular hospitalization; and AF recurrence. Results Of the 2204 patients randomized (median age, 68 years; 37.2% female; 42.9% had paroxysmal AF and 57.1% had persistent AF), 89.3% completed the trial. Of the patients assigned to catheter ablation, 1006 (90.8%) underwent the procedure. Of the patients assigned to drug therapy, 301 (27.5%) ultimately received catheter ablation. In the intention-to-treat analysis, over a median follow-up of 48.5 months, the primary end point occurred in 8.0% (n = 89) of patients in the ablation group vs 9.2% (n = 101) of patients in the drug therapy group (hazard ratio [HR], 0.86 [95% CI, 0.65-1.15]; P = .30). Among the secondary end points, outcomes in the ablation group vs the drug therapy group, respectively, were 5.2% vs 6.1% for all-cause mortality (HR, 0.85 [95% CI, 0.60-1.21]; P = .38), 51.7% vs 58.1% for death or cardiovascular hospitalization (HR, 0.83 [95% CI, 0.74-0.93]; P = .001), and 49.9% vs 69.5% for AF recurrence (HR, 0.52 [95% CI, 0.45-0.60]). Conclusions and Relevance Among patients with AF, the strategy of catheter ablation, compared with medical therapy, did not significantly reduce the primary composite end point of death, disabling stroke, serious bleeding, or cardiac arrest. However, the estimated treatment effect of catheter ablation was affected by lower-than-expected event rates and treatment crossovers, which should be considered in interpreting the results of the trial. Trial Registration ClinicalTrials.gov Identifier: NCT00911508

864 citations


Journal ArticleDOI
12 Feb 2019-JAMA
TL;DR: Among ambulatory adults with hypertension, treating to a systolic blood pressure goal of less than 120 mm Hg compared with a goal of less than 140 mm Hg did not result in a significant reduction in the risk of probable dementia.
Abstract: Importance There are currently no proven treatments to reduce the risk of mild cognitive impairment and dementia. Objective To evaluate the effect of intensive blood pressure control on risk of dementia. Design, Setting, and Participants Randomized clinical trial conducted at 102 sites in the United States and Puerto Rico among adults aged 50 years or older with hypertension but without diabetes or history of stroke. Randomization began on November 8, 2010. The trial was stopped early for benefit on its primary outcome (a composite of cardiovascular events) and all-cause mortality on August 20, 2015. The final date for follow-up of cognitive outcomes was July 22, 2018. Interventions Participants were randomized to a systolic blood pressure goal of either less than 120 mm Hg (intensive treatment group; n = 4678) or less than 140 mm Hg (standard treatment group; n = 4683). Main Outcomes and Measures The primary cognitive outcome was occurrence of adjudicated probable dementia. Secondary cognitive outcomes included adjudicated mild cognitive impairment and a composite outcome of mild cognitive impairment or probable dementia. Results Among 9361 randomized participants (mean age, 67.9 years; 3332 women [35.6%]), 8563 (91.5%) completed at least 1 follow-up cognitive assessment. The median intervention period was 3.34 years. During a total median follow-up of 5.11 years, adjudicated probable dementia occurred in 149 participants in the intensive treatment group vs 176 in the standard treatment group (7.2 vs 8.6 cases per 1000 person-years; hazard ratio [HR], 0.83; 95% CI, 0.67-1.04). Intensive BP control significantly reduced the risk of mild cognitive impairment (14.6 vs 18.3 cases per 1000 person-years; HR, 0.81; 95% CI, 0.69-0.95) and the combined rate of mild cognitive impairment or probable dementia (20.2 vs 24.1 cases per 1000 person-years; HR, 0.85; 95% CI, 0.74-0.97). Conclusions and Relevance Among ambulatory adults with hypertension, treating to a systolic blood pressure goal of less than 120 mm Hg compared with a goal of less than 140 mm Hg did not result in a significant reduction in the risk of probable dementia. Because of early study termination and fewer than expected cases of dementia, the study may have been underpowered for this end point. Trial Registration ClinicalTrials.gov Identifier:NCT01206062
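The dementia findings above are reported as incidence rates per 1000 person-years alongside a hazard ratio. As a rough illustration of how such rates relate to the reported counts, the sketch below back-calculates approximate person-years from the published counts and rates (the abstract does not report total person-time, so these figures are illustrative approximations, not trial data):

```python
# Illustrative only: person-years are inferred from the published figures
# (149 events at 7.2/1000 person-years; 176 events at 8.6/1000 person-years).

def rate_per_1000_py(events: int, person_years: float) -> float:
    """Incidence rate expressed per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years

# Approximate person-years implied by the reported counts and rates.
py_intensive = 149 / 7.2 * 1000    # ~20,700 person-years
py_standard = 176 / 8.6 * 1000     # ~20,500 person-years

print(round(rate_per_1000_py(149, py_intensive), 1))   # 7.2
print(round(rate_per_1000_py(176, py_standard), 1))    # 8.6

# Crude rate ratio; it only roughly tracks the adjusted hazard ratio of 0.83,
# which accounts for censoring and the time-to-event structure of the data.
print(round((149 / py_intensive) / (176 / py_standard), 2))  # ~0.84
```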

732 citations


Journal ArticleDOI
01 Jan 2019-JAMA
TL;DR: Among adults with type 2 diabetes and high CV and renal risk, linagliptin added to usual care compared with placebo added to usual care resulted in a noninferior risk of a composite CV outcome over a median 2.2 years.
Abstract: Importance Type 2 diabetes is associated with increased cardiovascular (CV) risk. Prior trials have demonstrated CV safety of 3 dipeptidyl peptidase 4 (DPP-4) inhibitors but have included limited numbers of patients with high CV risk and chronic kidney disease. Objective To evaluate the effect of linagliptin, a selective DPP-4 inhibitor, on CV outcomes and kidney outcomes in patients with type 2 diabetes at high risk of CV and kidney events. Design, Setting, and Participants Randomized, placebo-controlled, multicenter noninferiority trial conducted from August 2013 to August 2016 at 605 clinic sites in 27 countries among adults with type 2 diabetes, hemoglobin A1c of 6.5% to 10.0%, high CV risk (history of vascular disease and urine albumin-creatinine ratio [UACR] >200 mg/g), and high renal risk (reduced eGFR and micro- or macroalbuminuria). Participants with end-stage renal disease (ESRD) were excluded. Final follow-up occurred on January 18, 2018. Interventions Patients were randomized to receive linagliptin, 5 mg once daily (n = 3494), or placebo once daily (n = 3485) added to usual care. Other glucose-lowering medications or insulin could be added based on clinical need and local clinical guidelines. Main Outcomes and Measures Primary outcome was time to first occurrence of the composite of CV death, nonfatal myocardial infarction, or nonfatal stroke. The criterion for noninferiority of linagliptin vs placebo was that the upper limit of the 2-sided 95% CI for the hazard ratio (HR) of linagliptin relative to placebo be less than 1.3. Secondary outcome was time to first occurrence of adjudicated death due to renal failure, ESRD, or sustained 40% or higher decrease in eGFR from baseline. Results Of 6991 enrollees, 6979 (mean age, 65.9 years; eGFR, 54.6 mL/min/1.73 m2; 80.1% with UACR >30 mg/g) received at least 1 dose of study medication and 98.7% completed the study. During a median follow-up of 2.2 years, the primary outcome occurred in 434 of 3494 (12.4%) and 420 of 3485 (12.1%) in the linagliptin and placebo groups, respectively (absolute incidence rate difference, 0.13 [95% CI, −0.63 to 0.90] per 100 person-years; HR, 1.02 [95% CI, 0.89-1.17]). Adverse events occurred in 2697 (77.2%) and 2723 (78.1%) patients in the linagliptin and placebo groups; 1036 (29.7%) and 1024 (29.4%) had 1 or more episodes of hypoglycemia; and there were 9 (0.3%) vs 5 (0.1%) events of adjudication-confirmed acute pancreatitis. Conclusions and Relevance Among adults with type 2 diabetes and high CV and renal risk, linagliptin added to usual care compared with placebo added to usual care resulted in a noninferior risk of a composite CV outcome over a median 2.2 years. Trial Registration ClinicalTrials.gov Identifier: NCT01897532
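The trial's noninferiority criterion is defined on the hazard ratio scale: the upper limit of the 2-sided 95% CI must fall below 1.3. Below is a minimal sketch of that check; the standard error is an assumed value chosen only so the interval roughly matches the published HR of 1.02 (95% CI, 0.89-1.17), not a number taken from the trial:

```python
import math

# Hedged sketch: se_log_hr is illustrative, not a trial-reported quantity.
hr = 1.02
se_log_hr = 0.07          # assumed standard error of log(HR)
z = 1.96                  # two-sided 95% confidence level

log_hr = math.log(hr)
lower = math.exp(log_hr - z * se_log_hr)
upper = math.exp(log_hr + z * se_log_hr)

print(f"HR {hr:.2f} (95% CI, {lower:.2f}-{upper:.2f})")   # ~0.89-1.17

# Noninferiority per the prespecified criterion: upper CI limit < 1.3.
print("noninferiority criterion met:", upper < 1.3)        # True
```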

715 citations


Journal ArticleDOI
28 May 2019-JAMA
TL;DR: In this retrospective analysis of data sets from patients with sepsis, 4 clinical phenotypes were identified that correlated with host-response patterns and clinical outcomes, and simulations suggested these phenotypes may help in understanding heterogeneity of treatment effects.
Abstract: Importance Sepsis is a heterogeneous syndrome. Identification of distinct clinical phenotypes may allow more precise therapy and improve care. Objective To derive sepsis phenotypes from clinical data, determine their reproducibility and correlation with host-response biomarkers and clinical outcomes, and assess the potential causal relationship with results from randomized clinical trials (RCTs). Design, Setting, and Participants Retrospective analysis of data sets using statistical, machine learning, and simulation tools. Phenotypes were derived among 20 189 total patients (16 552 unique patients) who met Sepsis-3 criteria within 6 hours of hospital presentation at 12 Pennsylvania hospitals (2010-2012) using consensus k-means clustering applied to 29 variables. Reproducibility and correlation with biological parameters and clinical outcomes were assessed in a second database (2013-2014; n = 43 086 total patients and n = 31 160 unique patients), in a prospective cohort study of sepsis due to pneumonia (n = 583), and in 3 sepsis RCTs (n = 4737). Exposures All clinical and laboratory variables in the electronic health record. Main Outcomes and Measures Derived phenotype (α, β, γ, and δ) frequency, host-response biomarkers, 28-day and 365-day mortality, and RCT simulation outputs. Results The derivation cohort included 20 189 patients with sepsis (mean age, 64 [SD, 17] years; 10 022 [50%] male; mean maximum 24-hour Sequential Organ Failure Assessment [SOFA] score, 3.9 [SD, 2.4]). The validation cohort included 43 086 patients (mean age, 67 [SD, 17] years; 21 993 [51%] male; mean maximum 24-hour SOFA score, 3.6 [SD, 2.0]). Of the 4 derived phenotypes, the α phenotype was the most common (n = 6625; 33%) and included patients with the lowest administration of a vasopressor; in the β phenotype (n = 5512; 27%), patients were older and had more chronic illness and renal dysfunction; in the γ phenotype (n = 5385; 27%), patients had more inflammation and pulmonary dysfunction; and in the δ phenotype (n = 2667; 13%), patients had more liver dysfunction and septic shock. Phenotype distributions were similar in the validation cohort. There were consistent differences in biomarker patterns by phenotype. In the derivation cohort, cumulative 28-day mortality was 287 deaths of 5691 unique patients (5%) for the α phenotype; 561 of 4420 (13%) for the β phenotype; 1031 of 4318 (24%) for the γ phenotype; and 897 of 2223 (40%) for the δ phenotype. Across all cohorts and trials, 28-day and 365-day mortality were highest among the δ phenotype vs the other 3 phenotypes. In the RCT simulations, varying the phenotype frequencies shifted the estimated treatment effects from a greater than 33% chance of benefit to a greater than 60% chance of harm. Conclusions and Relevance In this retrospective analysis of data sets from patients with sepsis, 4 clinical phenotypes were identified that correlated with host-response patterns and clinical outcomes, and simulations suggested these phenotypes may help in understanding heterogeneity of treatment effects. Further research is needed to determine the utility of these phenotypes in clinical care and for informing trial design and interpretation.
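The phenotypes were derived with consensus k-means clustering applied to 29 clinical variables. The sketch below shows only the core step, standardizing variables and fitting k-means with k = 4, on synthetic data; the study's consensus procedure (repeated clustering over resamples to assess stability and select the number of clusters) and the real EHR variables are not reproduced here:

```python
# Minimal sketch of the clustering step behind the derived phenotypes.
# Synthetic data and a single k-means fit are illustrative assumptions,
# not the authors' pipeline.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_patients, n_variables = 1000, 29               # 29 variables, as in the study
X = rng.normal(size=(n_patients, n_variables))   # stand-in for EHR features

X_scaled = StandardScaler().fit_transform(X)     # put variables on a common scale
kmeans = KMeans(n_clusters=4, n_init=20, random_state=0).fit(X_scaled)

labels = kmeans.labels_                          # phenotype assignment per patient
print(np.bincount(labels))                       # cluster sizes (alpha..delta analog)
```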

655 citations


Journal ArticleDOI
27 Aug 2019-JAMA
TL;DR: Improved understanding of the biology and molecular subtypes of non-small cell lung cancer has led to more biomarker-directed therapies for patients with metastatic disease and improvements in overall survival.
Abstract: Importance Non–small cell lung cancer remains the leading cause of cancer death in the United States. Until the last decade, the 5-year overall survival rate for patients with metastatic non–small cell lung cancer was less than 5%. Improved understanding of the biology of lung cancer has resulted in the development of new biomarker–targeted therapies and led to improvements in overall survival for patients with advanced or metastatic disease. Observations Systemic therapy for metastatic non–small cell lung cancer is selected according to the presence of specific biomarkers. Therefore, all patients with metastatic non–small cell lung cancer should undergo molecular testing for relevant mutations and expression of the protein PD-L1 (programmed death ligand 1). Molecular alterations that predict response to treatment (eg, EGFR mutations, ALK rearrangements, ROS1 rearrangements, and BRAF V600E mutations) are present in approximately 30% of patients with non–small cell lung cancer. Targeted therapy for these alterations improves progression-free survival compared with cytotoxic chemotherapy. For example, somatic activating mutations in the EGFR gene are present in approximately 20% of patients with advanced non–small cell lung cancer. Tyrosine kinase inhibitors such as gefitinib, erlotinib, and afatinib improve progression-free survival in patients with susceptible EGFR mutations. In patients with overexpression of ALK protein, the response rate was significantly better with crizotinib (a tyrosine kinase inhibitor) than with the combination of pemetrexed and either cisplatin or carboplatin (platinum-based chemotherapy) (74% vs 45%, respectively). Conclusions and Relevance Improved understanding of the biology and molecular subtypes of non–small cell lung cancer has led to more biomarker-directed therapies for patients with metastatic disease. These biomarker-directed therapies and newer empirical treatment regimens have improved overall survival for patients with metastatic non–small cell lung cancer.

638 citations


Journal ArticleDOI
01 Oct 2019-JAMA
TL;DR: Optimal management of CKD includes cardiovascular risk reduction, treatment of albuminuria, avoidance of potential nephrotoxins, and adjustments to drug dosing (eg, many antibiotics and oral hypoglycemic agents).
Abstract: Importance Chronic kidney disease (CKD) is the 16th leading cause of years of life lost worldwide. Appropriate screening, diagnosis, and management by primary care clinicians are necessary to prevent adverse CKD-associated outcomes, including cardiovascular disease, end-stage kidney disease, and death. Observations CKD is defined as a persistent abnormality in kidney structure or function lasting more than 3 months (eg, glomerular filtration rate [GFR] <60 mL/min/1.73 m2 or albuminuria). Conclusions and Relevance Diagnosis, staging, and appropriate referral of CKD by primary care clinicians are important in reducing the burden of CKD worldwide.

594 citations


Journal ArticleDOI
01 Oct 2019-JAMA
TL;DR: In this preliminary study of patients with sepsis and ARDS, a 96-hour infusion of vitamin C compared with placebo did not significantly improve organ dysfunction scores or alter markers of inflammation and vascular injury.
Abstract: Importance Experimental data suggest that intravenous vitamin C may attenuate inflammation and vascular injury associated with sepsis and acute respiratory distress syndrome (ARDS). Objective To determine the effect of intravenous vitamin C infusion on organ failure scores and biological markers of inflammation and vascular injury in patients with sepsis and ARDS. Design, Setting, and Participants The CITRIS-ALI trial was a randomized, double-blind, placebo-controlled, multicenter trial conducted in 7 medical intensive care units in the United States, enrolling patients (N = 167) with sepsis and ARDS present for less than 24 hours. The study was conducted from September 2014 to November 2017, and final follow-up was January 2018. Interventions Patients were randomly assigned to receive intravenous infusion of vitamin C (50 mg/kg in dextrose 5% in water, n = 84) or placebo (dextrose 5% in water only, n = 83) every 6 hours for 96 hours. Main Outcomes and Measures The primary outcomes were change in organ failure as assessed by a modified Sequential Organ Failure Assessment score (range, 0-20, with higher scores indicating more dysfunction) from baseline to 96 hours, and plasma biomarkers of inflammation (C-reactive protein levels) and vascular injury (thrombomodulin levels) measured at 0, 48, 96, and 168 hours. Results Among 167 randomized patients (mean [SD] age, 54.8 years [16.7]; 90 men [54%]), 103 (62%) completed the study to day 60. There were no significant differences between the vitamin C and placebo groups in the primary end points of change in mean modified Sequential Organ Failure Assessment score from baseline to 96 hours (from 9.8 to 6.8 in the vitamin C group [3 points] and from 10.3 to 6.8 in the placebo group [3.5 points]; difference, −0.10; 95% CI, −1.23 to 1.03;P = .86) or in C-reactive protein levels (54.1 vs 46.1 μg/mL; difference, 7.94 μg/mL; 95% CI, −8.2 to 24.11;P = .33) and thrombomodulin levels (14.5 vs 13.8 ng/mL; difference, 0.69 ng/mL; 95% CI, −2.8 to 4.2;P = .70) at 168 hours. Conclusions and Relevance In this preliminary study of patients with sepsis and ARDS, a 96-hour infusion of vitamin C compared with placebo did not significantly improve organ dysfunction scores or alter markers of inflammation and vascular injury. Further research is needed to evaluate the potential role of vitamin C for other outcomes in sepsis and ARDS. Trial Registration ClinicalTrials.gov Identifier:NCT02106975

548 citations


Journal ArticleDOI
03 Dec 2019-JAMA
TL;DR: In 2019, the prevalence of self-reported e-cigarette use was high among high school and middle school students, with many current e-cigarette users reporting frequent use and most exclusive e-cigarette users reporting use of flavored e-cigarettes.
Abstract: Importance The prevalence of e-cigarette use among US youth increased from 2011 to 2018. Continued monitoring of the prevalence of e-cigarette and other tobacco product use among youth is important to inform public health policy, planning, and regulatory efforts. Objective To estimate the prevalence of e-cigarette use among US high school and middle school students in 2019 including frequency of use, brands used, and use of flavored products. Design, Setting, and Participants Cross-sectional analyses of a school-based nationally representative sample of 19 018 US students in grades 6 to 12 participating in the 2019 National Youth Tobacco Survey. The survey was conducted from February 15, 2019, to May 24, 2019. Main Outcomes and Measures Self-reported current (past 30-day) e-cigarette use estimates among high school and middle school students; frequent use (≥20 days in the past 30 days) and usual e-cigarette brand among current e-cigarette users; and use of flavored e-cigarettes and flavor types among current exclusive e-cigarette users (no use of other tobacco products) by school level and usual brand. Prevalence estimates were weighted to account for the complex sampling design. Results The survey included 10 097 high school students (mean [SD] age, 16.1 [3.0] years; 47.5% female) and 8837 middle school students (mean [SD] age, 12.7 [2.8] years; 48.7% female). The response rate was 66.3%. An estimated 27.5% (95% CI, 25.3%-29.7%) of high school students and 10.5% (95% CI, 9.4%-11.8%) of middle school students reported current e-cigarette use. Among current e-cigarette users, an estimated 34.2% (95% CI, 31.2%-37.3%) of high school students and 18.0% (95% CI, 15.2%-21.2%) of middle school students reported frequent use, and an estimated 63.6% (95% CI, 59.3%-67.8%) of high school students and 65.4% (95% CI, 60.6%-69.9%) of middle school students reported exclusive use of e-cigarettes. Among current e-cigarette users, an estimated 59.1% (95% CI, 54.8%-63.2%) of high school students and 54.1% (95% CI, 49.1%-59.0%) of middle school students reported JUUL as their usual e-cigarette brand in the past 30 days; among current e-cigarette users, 13.8% (95% CI, 12.0%-15.9%) of high school students and 16.8% (95% CI, 13.6%-20.7%) of middle school students reported not having a usual e-cigarette brand. Among current exclusive e-cigarette users, an estimated 72.2% (95% CI, 69.1%-75.1%) of high school students and 59.2% (95% CI, 54.8%-63.4%) of middle school students used flavored e-cigarettes, with fruit, menthol or mint, and candy, desserts, or other sweets being the most commonly reported flavors. Conclusions and Relevance In 2019, the prevalence of self-reported e-cigarette use was high among high school and middle school students, with many current e-cigarette users reporting frequent use and most of the exclusive e-cigarette users reporting use of flavored e-cigarettes.
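Prevalence estimates in this survey were weighted to account for the complex sampling design. Below is a toy sketch of a survey-weighted proportion; the weights and responses are invented, and a faithful analysis would also use the survey's strata and clusters when computing 95% CIs, which this sketch omits:

```python
# Toy sketch of a survey-weighted prevalence estimate (Horvitz-Thompson style).
# The weights and responses below are fabricated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
weights = rng.uniform(0.5, 3.0, size=n)     # assumed sampling weights
current_use = rng.random(n) < 0.275         # invented indicator of current use

weighted_prevalence = np.sum(weights * current_use) / np.sum(weights)
print(f"weighted prevalence: {weighted_prevalence:.1%}")
```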

540 citations


Journal ArticleDOI
22 Oct 2019-JAMA
TL;DR: Dementia is an acquired loss of cognition in multiple cognitive domains sufficiently severe to affect social or occupational function and management should include both nonpharmacologic and pharmacologic approaches, although efficacy of available treatments remains limited.
Abstract: Importance Worldwide, 47 million people live with dementia and, by 2050, the number is expected to increase to 131 million. Observations Dementia is an acquired loss of cognition in multiple cognitive domains sufficiently severe to affect social or occupational function. In the United States, Alzheimer disease, one cause of dementia, affects 5.8 million people. Dementia is commonly associated with more than 1 neuropathology, usually Alzheimer disease with cerebrovascular pathology. Diagnosing dementia requires a history evaluating for cognitive decline and impairment in daily activities, with corroboration from a close friend or family member, in addition to a thorough mental status examination by a clinician to delineate impairments in memory, language, attention, visuospatial cognition such as spatial orientation, executive function, and mood. Brief cognitive impairment screening questionnaires can assist in initiating and organizing the cognitive assessment. However, if the assessment is inconclusive (eg, symptoms present, but normal examination findings), neuropsychological testing can help determine whether dementia is present. Physical examination may help identify the etiology of dementia. For example, focal neurologic abnormalities suggest stroke. Brain neuroimaging may demonstrate structural changes including, but not limited to, focal atrophy, infarcts, and tumor, that may not be identified on physical examination. Additional evaluation with cerebrospinal fluid assays or genetic testing may be considered in atypical dementia cases, such as age of onset younger than 65 years, rapid symptom onset, and/or impairment in multiple cognitive domains but not episodic memory. For treatment, patients may benefit from nonpharmacologic approaches, including cognitively engaging activities such as reading, physical exercise such as walking, and socialization such as family gatherings. Pharmacologic approaches can provide modest symptomatic relief. For Alzheimer disease, this includes an acetylcholinesterase inhibitor such as donepezil for mild to severe dementia, and memantine (used alone or as an add-on therapy) for moderate to severe dementia. Rivastigmine can be used to treat symptomatic Parkinson disease dementia. Conclusions and Relevance Alzheimer disease currently affects 5.8 million persons in the United States and is a common cause of dementia, which is usually accompanied by other neuropathology, often cerebrovascular disease such as brain infarcts. Causes of dementia can be diagnosed by medical history, cognitive and physical examination, laboratory testing, and brain imaging. Management should include both nonpharmacologic and pharmacologic approaches, although efficacy of available treatments remains limited.

Journal ArticleDOI
25 Jun 2019-JAMA
TL;DR: It is suggested that a shorter duration of DAPT may provide benefit, although given study limitations, additional research is needed in other populations.
Abstract: Importance Very short mandatory dual antiplatelet therapy (DAPT) after percutaneous coronary intervention (PCI) with a drug-eluting stent may be an attractive option. Objective To test the hypothesis of noninferiority of 1 month of DAPT compared with standard 12 months of DAPT for a composite end point of cardiovascular and bleeding events. Design, Setting, and Participants Multicenter, open-label, randomized clinical trial enrolling 3045 patients who underwent PCI at 90 hospitals in Japan from December 2015 through December 2017. Final 1-year clinical follow-up was completed in January 2019. Interventions Patients were randomized either to 1 month of DAPT followed by clopidogrel monotherapy (n = 1523) or to 12 months of DAPT with aspirin and clopidogrel (n = 1522). Main Outcomes and Measures The primary end point was a composite of cardiovascular death, myocardial infarction (MI), ischemic or hemorrhagic stroke, definite stent thrombosis, or major or minor bleeding at 12 months, with a relative noninferiority margin of 50%. The major secondary cardiovascular end point was a composite of cardiovascular death, MI, ischemic or hemorrhagic stroke, or definite stent thrombosis and the major secondary bleeding end point was major or minor bleeding. Results Among 3045 patients randomized, 36 withdrew consent; of 3009 remaining, 2974 (99%) completed the trial. One-month DAPT was both noninferior and superior to 12-month DAPT for the primary end point, occurring in 2.36% with 1-month DAPT and 3.70% with 12-month DAPT (absolute difference, −1.34% [95% CI, −2.57% to −0.11%]; hazard ratio [HR], 0.64 [95% CI, 0.42-0.98]), meeting criteria for noninferiority. Conclusions and Relevance Among patients undergoing PCI, 1 month of DAPT followed by clopidogrel monotherapy, compared with 12 months of DAPT with aspirin and clopidogrel, resulted in a significantly lower rate of a composite of cardiovascular and bleeding events, meeting criteria for both noninferiority and superiority. These findings suggest that a shorter duration of DAPT may provide benefit, although given study limitations, additional research is needed in other populations. Trial Registration ClinicalTrials.gov Identifier: NCT02619760

Journal ArticleDOI
26 Nov 2019-JAMA
TL;DR: US life expectancy increased for most of the past 60 years, but the rate of increase slowed over time and life expectancy decreased after 2014, with the largest relative increases in midlife mortality occurring in the Ohio Valley and New England.
Abstract: Importance US life expectancy has not kept pace with that of other wealthy countries and is now decreasing. Objective To examine vital statistics and review the history of changes in US life expectancy and increasing mortality rates; and to identify potential contributing factors, drawing insights from current literature and an analysis of state-level trends. Evidence Life expectancy data for 1959-2016 and cause-specific mortality rates for 1999-2017 were obtained from the US Mortality Database and CDC WONDER, respectively. The analysis focused on midlife deaths (ages 25-64 years), stratified by sex, race/ethnicity, socioeconomic status, and geography (including the 50 states). Published research from January 1990 through August 2019 that examined relevant mortality trends and potential contributory factors was examined. Findings Between 1959 and 2016, US life expectancy increased from 69.9 years to 78.9 years but declined for 3 consecutive years after 2014. The recent decrease in US life expectancy culminated a period of increasing cause-specific mortality among adults aged 25 to 64 years that began in the 1990s, ultimately producing an increase in all-cause mortality that began in 2010. During 2010-2017, midlife all-cause mortality rates increased from 328.5 deaths/100 000 to 348.2 deaths/100 000. By 2014, midlife mortality was increasing across all racial groups, caused by drug overdoses, alcohol abuse, suicides, and a diverse list of organ system diseases. The largest relative increases in midlife mortality rates occurred in New England (New Hampshire, 23.3%; Maine, 20.7%; Vermont, 19.9%) and the Ohio Valley (West Virginia, 23.0%; Ohio, 21.6%; Indiana, 14.8%; Kentucky, 14.7%). The increase in midlife mortality during 2010-2017 was associated with an estimated 33 307 excess US deaths, 32.8% of which occurred in 4 Ohio Valley states. Conclusions and Relevance US life expectancy increased for most of the past 60 years, but the rate of increase slowed over time and life expectancy decreased after 2014. A major contributor has been an increase in mortality from specific causes (eg, drug overdoses, suicides, organ system diseases) among young and middle-aged adults of all racial groups, with an onset as early as the 1990s and with the largest relative increases occurring in the Ohio Valley and New England. The implications for public health and the economy are substantial, making it vital to understand the underlying causes.

Journal ArticleDOI
15 Jan 2019-JAMA
TL;DR: In this preliminary study of adults with mild to moderate UC, 1-week treatment with anaerobically prepared donor FMT compared with autologous FMT resulted in a higher likelihood of remission at 8 weeks.
Abstract: Importance High-intensity, aerobically prepared fecal microbiota transplantation (FMT) has demonstrated efficacy in treating active ulcerative colitis (UC). FMT protocols involving anaerobic stool processing methods may enhance microbial viability and allow efficacy with a lower treatment intensity. Objective To assess the efficacy of a short duration of FMT therapy to induce remission in UC using anaerobically prepared stool. Design, Setting, and Participants A total of 73 adults with mild to moderately active UC were enrolled in a multicenter, randomized, double-blind clinical trial in 3 Australian tertiary referral centers between June 2013 and June 2016, with 12-month follow-up until June 2017. Interventions Patients were randomized to receive either anaerobically prepared pooled donor FMT (n = 38) or autologous FMT (n = 35) via colonoscopy followed by 2 enemas over 7 days. Open-label therapy was offered to autologous FMT participants at 8 weeks and they were followed up for 12 months. Main Outcomes and Measures The primary outcome was steroid-free remission of UC, defined as a total Mayo score of ≤2 with an endoscopic Mayo score of 1 or less at week 8. Total Mayo score ranges from 0 to 12 (0 = no disease and 12 = most severe disease). Steroid-free remission of UC was reassessed at 12 months. Secondary clinical outcomes included adverse events. Results Among 73 patients who were randomized (mean age, 39 years; women, 33 [45%]), 69 (95%) completed the trial. The primary outcome was achieved in 12 of the 38 participants (32%) receiving pooled donor FMT compared with 3 of the 35 (9%) receiving autologous FMT (difference, 23% [95% CI, 4%-42%]; odds ratio, 5.0 [95% CI, 1.2-20.1];P = .03). Five of the 12 participants (42%) who achieved the primary end point at week 8 following donor FMT maintained remission at 12 months. There were 3 serious adverse events in the donor FMT group and 2 in the autologous FMT group. Conclusions and Relevance In this preliminary study of adults with mild to moderate UC, 1-week treatment with anaerobically prepared donor FMT compared with autologous FMT resulted in a higher likelihood of remission at 8 weeks. Further research is needed to assess longer-term maintenance of remission and safety. Trial Registration anzctr.org.au Identifier:ACTRN12613000236796
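The headline effect sizes follow directly from the 2 × 2 table in the abstract (12 of 38 remissions with donor FMT vs 3 of 35 with autologous FMT). A short check of that arithmetic; the published 95% CIs come from the full analysis and are not reproduced by this crude calculation:

```python
# Recomputing the reported risk difference and odds ratio from the counts
# stated in the abstract: 12/38 remissions (donor FMT) vs 3/35 (autologous FMT).
a, n1 = 12, 38          # donor FMT: remissions, total
b, n2 = 3, 35           # autologous FMT: remissions, total

risk_difference = a / n1 - b / n2
odds_ratio = (a / (n1 - a)) / (b / (n2 - b))

print(f"risk difference: {risk_difference:.0%}")   # ~23%
print(f"odds ratio: {odds_ratio:.1f}")             # ~4.9, reported as 5.0
```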

Journal ArticleDOI
15 Oct 2019-JAMA
TL;DR: The estimated cost of waste in the US health care system ranged from $760 billion to $935 billion, accounting for approximately 25% of total health care spending, and the projected potential savings from interventions that reduce waste, excluding savings from administrative complexity, ranged from $191 billion to $282 billion, representing a potential 25% reduction in the total cost of waste.
Abstract: Importance The United States spends more on health care than any other country, with costs approaching 18% of the gross domestic product (GDP). Prior studies estimated that approximately 30% of health care spending may be considered waste. Despite efforts to reduce overtreatment, improve care, and address overpayment, it is likely that substantial waste in US health care spending remains. Objectives To estimate current levels of waste in the US health care system in 6 previously developed domains and to report estimates of potential savings for each domain. Evidence A search of peer-reviewed and “gray” literature from January 2012 to May 2019 focused on the 6 waste domains previously identified by the Institute of Medicine and Berwick and Hackbarth: failure of care delivery, failure of care coordination, overtreatment or low-value care, pricing failure, fraud and abuse, and administrative complexity. For each domain, available estimates of waste-related costs and data from interventions shown to reduce waste-related costs were recorded, converted to annual estimates in 2019 dollars for national populations when necessary, and combined into ranges or summed as appropriate. Findings The review yielded 71 estimates from 54 unique peer-reviewed publications, government-based reports, and reports from the gray literature. Computations yielded the following estimated ranges of total annual cost of waste: failure of care delivery, $102.4 billion to $165.7 billion; failure of care coordination, $27.2 billion to $78.2 billion; overtreatment or low-value care, $75.7 billion to $101.2 billion; pricing failure, $230.7 billion to $240.5 billion; fraud and abuse, $58.5 billion to $83.9 billion; and administrative complexity, $265.6 billion. The estimated annual savings from measures to eliminate waste were as follows: failure of care delivery, $44.4 billion to $93.3 billion; failure of care coordination, $29.6 billion to $38.2 billion; overtreatment or low-value care, $12.8 billion to $28.6 billion; pricing failure, $81.4 billion to $91.2 billion; and fraud and abuse, $22.8 billion to $30.8 billion. No studies were identified that focused on interventions targeting administrative complexity. The estimated total annual costs of waste were $760 billion to $935 billion and savings from interventions that address waste were $191 billion to $282 billion. Conclusions and Relevance In this review based on 6 previously identified domains of health care waste, the estimated cost of waste in the US health care system ranged from $760 billion to $935 billion, accounting for approximately 25% of total health care spending, and the projected potential savings from interventions that reduce waste, excluding savings from administrative complexity, ranged from $191 billion to $282 billion, representing a potential 25% reduction in the total cost of waste. Implementation of effective measures to eliminate waste represents an opportunity to reduce the continued increases in US health care expenditures.
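The totals in this review follow from summing the per-domain ranges, with the savings total excluding administrative complexity because no intervention estimates were identified for that domain. A short check of the arithmetic, using the figures stated above (in billions of US dollars):

```python
# Reproducing the review's totals by summing the per-domain ranges.
cost = {
    "failure of care delivery":        (102.4, 165.7),
    "failure of care coordination":    (27.2, 78.2),
    "overtreatment or low-value care": (75.7, 101.2),
    "pricing failure":                 (230.7, 240.5),
    "fraud and abuse":                 (58.5, 83.9),
    "administrative complexity":       (265.6, 265.6),  # single point estimate
}
savings = {  # no intervention estimates were found for administrative complexity
    "failure of care delivery":        (44.4, 93.3),
    "failure of care coordination":    (29.6, 38.2),
    "overtreatment or low-value care": (12.8, 28.6),
    "pricing failure":                 (81.4, 91.2),
    "fraud and abuse":                 (22.8, 30.8),
}

total_cost = tuple(round(sum(v[i] for v in cost.values()), 1) for i in (0, 1))
total_savings = tuple(round(sum(v[i] for v in savings.values()), 1) for i in (0, 1))
print(total_cost)      # (760.1, 935.1) -> reported as $760 billion to $935 billion
print(total_savings)   # (191.0, 282.1) -> reported as $191 billion to $282 billion
```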

Journal ArticleDOI
26 Mar 2019-JAMA
TL;DR: Although multiple individual factors are associated with outcomes, a multifaceted approach considering both potential for neurological recovery and ongoing multiorgan failure is warranted for prognostication and clinical decision-making in the post–cardiac arrest period.
Abstract: Importance In-hospital cardiac arrest is common and associated with a high mortality rate. Despite this, in-hospital cardiac arrest has received little attention compared with other high-risk cardiovascular conditions, such as stroke, myocardial infarction, and out-of-hospital cardiac arrest. Observations In-hospital cardiac arrest occurs in over 290 000 adults each year in the United States. Cohort data from the United States indicate that the mean age of patients with in-hospital cardiac arrest is 66 years, 58% are men, and the presenting rhythm is most often (81%) nonshockable (ie, asystole or pulseless electrical activity). The cause of the cardiac arrest is most often cardiac (50%-60%), followed by respiratory insufficiency (15%-40%). Efforts to prevent in-hospital cardiac arrest require both a system for identifying deteriorating patients and an appropriate interventional response (eg, rapid response teams). The key elements of treatment during cardiac arrest include chest compressions, ventilation, early defibrillation, when applicable, and immediate attention to potentially reversible causes, such as hyperkalemia or hypoxia. There is limited evidence to support more advanced treatments. Post–cardiac arrest care is focused on identification and treatment of the underlying cause, hemodynamic and respiratory support, and potentially employing neuroprotective strategies (eg, targeted temperature management). Although multiple individual factors are associated with outcomes (eg, age, initial rhythm, duration of the cardiac arrest), a multifaceted approach considering both potential for neurological recovery and ongoing multiorgan failure is warranted for prognostication and clinical decision-making in the post–cardiac arrest period. Withdrawal of care in the absence of definite prognostic signs both during and after cardiac arrest should be avoided. Hospitals are encouraged to participate in national quality-improvement initiatives. Conclusions and Relevance An estimated 290 000 in-hospital cardiac arrests occur each year in the United States. However, there is limited evidence to support clinical decision making. An increased awareness with regard to optimizing clinical care and new research might improve outcomes.

Journal ArticleDOI
12 Feb 2019-JAMA
TL;DR: A large, multisite observational study of the association between bariatric surgery and long-term macrovascular disease outcomes among patients with severe obesity and type 2 diabetes found that bariatric surgery was associated with a 40% lower incidence of macrovascular disease at 5 years.
Abstract: Randomized trials serve as the standard for comparative studies of treatment effects. In many settings, it may not be feasible or ethical to conduct a randomized study,1 and researchers may pursue observational studies to better understand clinical outcomes. A central limitation of observational studies is the potential for confounding bias that arises because treatment assignment is not random. Thus, the observed associations may be attributable to differences other than the treatment being investigated and causality cannot be assumed. In the October 16, 2018, issue of JAMA, results from a large, multisite observational study of the association between bariatric surgery and long-term macrovascular disease outcomes among patients with severe obesity and type 2 diabetes were reported by Fisher et al.2 Using data from 5301 patients aged 19 to 79 years who underwent bariatric surgery at 1 of 4 integrated health systems in the United States between 2005 and 2011 and 14 934 matched nonsurgical patients, they found that bariatric surgery was associated with a 40% lower incidence of macrovascular disease at 5 years (2.1% in the surgical group and 4.3% in the nonsurgical group; hazard ratio [HR], 0.60 [95% CI, 0.42-0.86]). Two strategies were used to mitigate confounding bias. In the first, a matched cohort design was used where nonsurgical patients were matched to surgical patients on the basis of a priori–identified potential confounders (study site, age, sex, body mass index, hemoglobin A1c level, insulin use, observed diabetes duration, and prior health care use). In the second strategy used to adjust for confounding bias, the primary results were based on the fit of a multivariable Cox model that adjusted for all of the factors used in the matching as well as a broader range of potential confounders (Table 1 in the article2). Thus, any imbalances in the observed potential confounders that remained after the matching process were controlled for by the statistical analysis. Despite these efforts, however, given the observational design, the potential for unmeasured confounding remained.

Journal ArticleDOI
05 Feb 2019-JAMA
TL;DR: This Viewpoint examines the underlying science-based evidence supporting the Undetectable = Untransmittable (U = U) concept and the behavioral, social, and legal implications associated with its acceptance.
Abstract: In 2016, the Prevention Access Campaign, a health equity initiative with the goal of ending the HIV/AIDS pandemic as well as HIV-related stigma, launched the Undetectable = Untransmittable (U = U) initiative.1 U = U signifies that individuals with HIV who receive antiretroviral therapy (ART) and have achieved and maintained an undetectable viral load cannot sexually transmit the virus to others. This concept, based on strong scientific evidence, has broad implications for treatment of HIV infection from a scientific and public health standpoint, for the self-esteem of individuals by reducing the stigma associated with HIV,2 and for certain legal aspects of HIV criminalization.3 In this Viewpoint, we examine the underlying science-based evidence supporting this important concept and the behavioral, social, and legal implications associated with the acceptance of the U = U concept. A major breakthrough in HIV/AIDS therapeutics came in 1996 with the advent of 3-drug combinations of antiretrovirals, including the newly developed protease inhibitors. These therapeutic regimens resulted in substantial decreases in viral load in a high percentage of patients, usually below the level of detection in plasma and sustained for extended periods.2 Although not appreciated at the time, the accomplishment of a sustained, undetectable viral load was likely the definitive point when the U = U concept became a reality. Proof of that concept would await further clinical trials and cohort studies. Based on a review of scientific data, a statement from Switzerland in 2008 indicated that individuals with HIV who did not have any other sexually transmitted infection, and achieved and maintained an undetectable viral load for at least 6 months, did not transmit HIV sexually.4 This was the first declaration of the U = U concept, but it was not universally embraced because it lacked the rigor of randomized clinical trials. In 2011, the HIV Prevention Trials Network (HPTN) study 052 compared the effect of early with delayed initiation of ART in the partner with HIV among 1763 HIV-discordant couples, of whom 98% were heterosexual. The finding of a 96.4% reduction in HIV transmission in the early-ART group, vs those in the delayed group, provided the first evidence of treatment as prevention in a randomized clinical trial.5 At that point, the study could not address the durability of the finding or provide a precise correlation of the lack of transmissibility with an undetectable viral load. Importantly, after 5 additional years of follow-up, the durable, protective effect of early ART to maintain viral suppression and prevent HIV transmission was validated. There were no linked transmissions when viral load was durably suppressed by ART.6 Subsequent studies confirmed and extended these findings. The PARTNER 1 study determined the risk of HIV transmission via condomless sexual intercourse in 1166 HIV-discordant couples in which the partner with HIV was receiving ART and had achieved and maintained viral suppression (HIV-1 RNA viral load <200 copies/mL). After approximately 58 000 condomless sexual acts, there were no linked HIV transmissions.3 Since a minority of the HIV-discordant couples in PARTNER 1 were men who have sex with men (MSM), there was insufficient statistical power to determine the effect of an undetectable viral load on the transmission risk for receptive anal sex.
In this regard, the Opposites Attract study evaluated transmissions involving 343 HIV-discordant MSM couples in Australia, Brazil, and Thailand. After 16 800 acts of condomless anal intercourse there were no linked HIV transmissions during 588.4 couple-years of follow-up during which time the partner with HIV had an undetectable viral load (<200 copies/mL).3 Building on these studies, the PARTNER 2 study conclusively demonstrated that there were no cases of HIV transmission between HIV-discordant MSM partners despite approximately 77 000 condomless sexual acts if the partner with HIV had achieved viral suppression and the uninfected partner was not receiving preexposure prophylaxis or postexposure prophylaxis.7 The validity of the U = U concept depends on achieving and maintaining an undetectable viral load in an individual with HIV. Because of the promise of U = U, achieving and maintaining an undetectable viral load becomes an aspirational goal and offers hope for persons with HIV. The principles involved in achieving and maintaining an undetectable viral load are related to (1) taking ART as prescribed and the importance of adherence; (2) time to viral suppression; (3) viral load testing recommendations; and (4) the risk of stopping ART (Box). Taking ART as prescribed is essential for achieving and maintaining an undetectable viral load. The Centers for Disease Control and Prevention (CDC) reported that of the individuals with HIV in the United States in HIV clinical care in 2015, approximately 20% had not achieved viral suppression (<200 HIV-1 RNA copies/mL) at their last test. CDC also noted that 40% of the individuals in HIV clinical care that same year did not maintain viral suppression for more than 12 months.8 Lack of adherence with ART is associated with many factors, including the lack of accessibility of quality health care. The stability of health care provided by programs such as the Ryan White HIV/AIDS Program shows that high rates of viral suppression are possible in the context of quality care delivery.

Journal ArticleDOI
19 Feb 2019-JAMA
TL;DR: Among patients with septic shock, a resuscitation strategy targeting normalization of capillary refill time, compared with a strategy targeting serum lactate levels, did not reduce all-cause 28-day mortality.
Abstract: Importance Abnormal peripheral perfusion after septic shock resuscitation has been associated with organ dysfunction and mortality. The potential role of the clinical assessment of peripheral perfusion as a target during resuscitation in early septic shock has not been established. Objective To determine if a peripheral perfusion–targeted resuscitation during early septic shock in adults is more effective than a lactate level–targeted resuscitation for reducing mortality. Design, Setting, and Participants Multicenter, randomized trial conducted at 28 intensive care units in 5 countries. Four-hundred twenty-four patients with septic shock were included between March 2017 and March 2018. The last date of follow-up was June 12, 2018. Interventions Patients were randomized to a step-by-step resuscitation protocol aimed at either normalizing capillary refill time (n = 212) or normalizing or decreasing lactate levels at rates greater than 20% per 2 hours (n = 212), during an 8-hour intervention period. Main Outcomes and Measures The primary outcome was all-cause mortality at 28 days. Secondary outcomes were organ dysfunction at 72 hours after randomization, as assessed by Sequential Organ Failure Assessment (SOFA) score (range, 0 [best] to 24 [worst]); death within 90 days; mechanical ventilation–, renal replacement therapy–, and vasopressor-free days within 28 days; intensive care unit and hospital length of stay. Results Among 424 patients randomized (mean age, 63 years; 226 [53%] women), 416 (98%) completed the trial. By day 28, 74 patients (34.9%) in the peripheral perfusion group and 92 patients (43.4%) in the lactate group had died (hazard ratio, 0.75 [95% CI, 0.55 to 1.02];P = .06; risk difference, −8.5% [95% CI, −18.2% to 1.2%]). Peripheral perfusion–targeted resuscitation was associated with less organ dysfunction at 72 hours (mean SOFA score, 5.6 [SD, 4.3] vs 6.6 [SD, 4.7]; mean difference, −1.00 [95% CI, −1.97 to −0.02];P = .045). There were no significant differences in the other 6 secondary outcomes. No protocol-related serious adverse reactions were confirmed. Conclusions and Relevance Among patients with septic shock, a resuscitation strategy targeting normalization of capillary refill time, compared with a strategy targeting serum lactate levels, did not reduce all-cause 28-day mortality. Trial Registration ClinicalTrials.gov Identifier:NCT03078712

Journal ArticleDOI
28 May 2019-JAMA
TL;DR: Among patients with a preoperative clinical stage indicating locally advanced gastric cancer, laparoscopic distal gastrectomy, compared with open distal gastrectomy, did not result in inferior disease-free survival at 3 years.
Abstract: Importance Laparoscopic distal gastrectomy is accepted as a more effective approach to conventional open distal gastrectomy for early-stage gastric cancer. However, efficacy for locally advanced gastric cancer remains uncertain. Objective To compare 3-year disease-free survival for patients with locally advanced gastric cancer after laparoscopic distal gastrectomy or open distal gastrectomy. Design, Setting, and Patients The study was a noninferiority, open-label, randomized clinical trial at 14 centers in China. A total of 1056 eligible patients with clinical stage T2, T3, or T4a gastric cancer without bulky nodes or distant metastases were enrolled from September 2012 to December 2014. Final follow-up was on December 31, 2017. Interventions Participants were randomized in a 1:1 ratio after stratification by site, age, cancer stage, and histology to undergo either laparoscopic distal gastrectomy (n = 528) or open distal gastrectomy (n = 528) with D2 lymphadenectomy. Main Outcomes and Measures The primary end point was 3-year disease-free survival with a noninferiority margin of −10% to compare laparoscopic distal gastrectomy with open distal gastrectomy. Secondary end points of 3-year overall survival and recurrence patterns were tested for superiority. Results Among 1056 patients, 1039 (98.4%; mean age, 56.2 years; 313 [30.1%] women) had surgery (laparoscopic distal gastrectomy [n = 519] vs open distal gastrectomy [n = 520]), and 999 (94.6%) completed the study. The 3-year disease-free survival rate was 76.5% in the laparoscopic distal gastrectomy group and 77.8% in the open distal gastrectomy group (absolute difference, −1.3%; 1-sided 97.5% CI, −6.5% to ∞), which did not cross the prespecified noninferiority margin. The 3-year overall survival rate (laparoscopic distal gastrectomy vs open distal gastrectomy: 83.1% vs 85.2%; adjusted hazard ratio, 1.19; 95% CI, 0.87 to 1.64; P = .28) and the cumulative incidence of recurrence over the 3-year period (laparoscopic distal gastrectomy vs open distal gastrectomy: 18.8% vs 16.5%; subhazard ratio, 1.15; 95% CI, 0.86 to 1.54; P = .35) did not significantly differ between the laparoscopic distal gastrectomy and open distal gastrectomy groups. Conclusions and Relevance Among patients with a preoperative clinical stage indicating locally advanced gastric cancer, laparoscopic distal gastrectomy, compared with open distal gastrectomy, did not result in inferior disease-free survival at 3 years. Trial Registration ClinicalTrials.gov Identifier: NCT01609309
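The noninferiority claim reduces to a single comparison: the lower limit of the 1-sided 97.5% CI for the survival difference (−6.5%) must stay above the prespecified margin (−10%). A minimal sketch of that check, using the values reported in the abstract:

```python
# Noninferiority on a difference scale: the CI for (laparoscopic - open) 3-year
# disease-free survival must not cross the prespecified margin.
def is_noninferior(ci_lower_percent, margin_percent):
    return ci_lower_percent > margin_percent

# Values from the abstract: difference -1.3%, 1-sided 97.5% CI lower limit -6.5%,
# noninferiority margin -10%.
print(is_noninferior(-6.5, -10.0))  # True -> noninferiority demonstrated
```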

Journal ArticleDOI
29 Jan 2019-JAMA
TL;DR: Despite differences in associated lipid levels, the LPL and LDLR scores were associated with similar lower risk of CHD per 10-mg/dL lower level of ApoB-containing lipoproteins, and the associations of triglyceride and LDL-C levels with the risk of CHD became null after adjusting for differences in ApoB.
Abstract: Importance Triglycerides and cholesterol are both carried in plasma by apolipoprotein B (ApoB)–containing lipoprotein particles. It is unknown whether lowering plasma triglyceride levels reduces the risk of cardiovascular events to the same extent as lowering low-density lipoprotein cholesterol (LDL-C) levels. Objective To compare the association of triglyceride-lowering variants in the lipoprotein lipase (LPL) gene and LDL-C–lowering variants in the LDL receptor gene (LDLR) with the risk of cardiovascular disease per unit change in ApoB. Design, Setting, and Participants Mendelian randomization analyses evaluating the associations of genetic scores composed of triglyceride-lowering variants in the LPL gene and LDL-C–lowering variants in the LDLR gene, respectively, with the risk of cardiovascular events among participants enrolled in 63 cohort or case-control studies conducted in North America or Europe between 1948 and 2017. Exposures Differences in plasma triglyceride, LDL-C, and ApoB levels associated with the LPL and LDLR genetic scores. Main Outcomes and Measures Odds ratio (OR) for coronary heart disease (CHD)—defined as coronary death, myocardial infarction, or coronary revascularization—per 10-mg/dL lower concentration of ApoB-containing lipoproteins. Results A total of 654 783 participants, including 91 129 cases of CHD, were included (mean age, 62.7 years; 51.4% women). For each 10-mg/dL lower level of ApoB-containing lipoproteins, the LPL score was associated with 69.9-mg/dL (95% CI, 68.1-71.6; P = 7.1 × 10−1363) lower triglyceride levels and 0.7-mg/dL (95% CI, 0.03-1.4; P = .04) higher LDL-C levels, while the LDLR score was associated with 14.2-mg/dL (95% CI, 13.6-14.8; P = 1.4 × 10−465) lower LDL-C and 1.9-mg/dL (95% CI, 0.1-3.9; P = .04) lower triglyceride levels. Despite these differences in associated lipid levels, the LPL and LDLR scores were associated with similar lower risk of CHD per 10-mg/dL lower level of ApoB-containing lipoproteins (OR, 0.771 [95% CI, 0.741-0.802], P = 3.9 × 10−38 and OR, 0.773 [95% CI, 0.747-0.801], P = 1.1 × 10−46, respectively). In multivariable mendelian randomization analyses, the associations of triglyceride and LDL-C levels with the risk of CHD became null after adjusting for differences in ApoB (triglycerides: OR, 1.014 [95% CI, 0.965-1.065], P = .19; LDL-C: OR, 1.010 [95% CI, 0.967-1.055], P = .19; ApoB: OR, 0.761 [95% CI, 0.723-0.798], P = 7.51 × 10−20). Conclusions and Relevance Triglyceride-lowering LPL variants and LDL-C–lowering LDLR variants were associated with similar lower risk of CHD per unit difference in ApoB. Therefore, the clinical benefit of lowering triglyceride and LDL-C levels may be proportional to the absolute change in ApoB.
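The "per 10-mg/dL of ApoB" framing simply rescales each genetic score's effect onto a common exposure scale so that the two ORs become comparable. A hedged sketch of that kind of rescaling; the numbers below are hypothetical and are not the authors' estimates:

```python
# Rescaling a log odds ratio so that two genetic scores can be compared on the
# same exposure scale (here, per 10 mg/dL of ApoB), as the abstract does.
# Illustrative only; the published per-ApoB estimates come from the authors' models.
import math

def rescale_or(or_per_unit, units_per_target):
    """Rescale an odds ratio from 'per 1 unit of exposure' to 'per target units'."""
    return math.exp(math.log(or_per_unit) * units_per_target)

# Hypothetical example: an OR of 0.95 per 2 mg/dL lower ApoB corresponds to
# roughly 0.95 ** 5 per 10 mg/dL lower ApoB.
print(round(rescale_or(0.95, 5), 3))  # ~0.774, similar in magnitude to the reported ORs
```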

Journal ArticleDOI
15 Jan 2019-JAMA
TL;DR: Evaluation of penicillin allergy before deciding not to use penicillin or other β-lactam antibiotics is an important tool for antimicrobial stewardship, because reported penicillin allergy often leads to the use of broad-spectrum antibiotics that increase the risk of antimicrobial resistance.
Abstract: Importance β-Lactam antibiotics are among the safest and most effective antibiotics. Many patients report allergies to these drugs that limit their use, resulting in the use of broad-spectrum antibiotics that increase the risk for antimicrobial resistance and adverse events. Observations Approximately 10% of the US population has reported allergies to the β-lactam agent penicillin, with higher rates reported by older and hospitalized patients. Although many patients report that they are allergic to penicillin, clinically significant IgE-mediated or T lymphocyte–mediated penicillin hypersensitivity is uncommon. A low-risk history includes features such as remote (>10 years) unknown reactions without characteristics suggestive of an IgE-mediated reaction. A moderate-risk history includes urticaria or other pruritic rashes and reactions with features of IgE-mediated reactions. A high-risk history includes patients who have had anaphylaxis, positive penicillin skin testing, recurrent penicillin reactions, or hypersensitivities to multiple β-lactam antibiotics. The goals of antimicrobial stewardship are undermined when reported allergy to penicillin leads to the use of broad-spectrum antibiotics that increase the risk for antimicrobial resistance, including increased risk of methicillin-resistant Staphylococcus aureus and vancomycin-resistant Enterococcus. Broad-spectrum antimicrobial agents also increase the risk of developing Clostridium difficile (also known as Clostridioides difficile) infection. Direct amoxicillin challenge is appropriate for patients with low-risk allergy histories. Moderate-risk patients can be evaluated with penicillin skin testing, which carries a negative predictive value that exceeds 95% and approaches 100% when combined with amoxicillin challenge. Clinicians performing penicillin allergy evaluation need to identify what methods are supported by their available resources. Conclusions and Relevance Many patients report they are allergic to penicillin but few have clinically significant reactions. Evaluation of penicillin allergy before deciding not to use penicillin or other β-lactam antibiotics is an important tool for antimicrobial stewardship.
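The quoted negative predictive value (>95%, approaching 100% when skin testing is combined with amoxicillin challenge) is the proportion of negative evaluations that are truly penicillin tolerant. A minimal sketch of the definition, with hypothetical counts rather than data from the review:

```python
# Negative predictive value (NPV): among patients with a negative penicillin allergy
# evaluation, the fraction who truly tolerate penicillin. Counts are hypothetical.
def negative_predictive_value(true_negatives, false_negatives):
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical cohort: 970 negative evaluations that are truly tolerant, 30 that react.
print(f"NPV = {negative_predictive_value(970, 30):.1%}")  # 97.0%
```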

Journal ArticleDOI
24 Sep 2019-JAMA
TL;DR: Among adults with relatively early type 2 diabetes and elevated cardiovascular risk, the use of linagliptin compared with glimepiride over a median 6.3 years resulted in a noninferior risk of a composite cardiovascular outcome.
Abstract: Importance Type 2 diabetes is associated with increased cardiovascular risk. In placebo-controlled cardiovascular safety trials, the dipeptidyl peptidase-4 inhibitor linagliptin demonstrated noninferiority, but it has not been tested against an active comparator. Objective This trial assessed cardiovascular outcomes of linagliptin vs glimepiride (sulfonylurea) in patients with relatively early type 2 diabetes and risk factors for or established atherosclerotic cardiovascular disease. Design, Setting, and Participants Randomized, double-blind, active-controlled, noninferiority trial, with participant screening from November 2010 to December 2012, conducted at 607 hospital and primary care sites in 43 countries involving 6042 participants. Adults with type 2 diabetes, glycated hemoglobin of 6.5% to 8.5%, and elevated cardiovascular risk were eligible for inclusion. Elevated cardiovascular risk was defined as documented atherosclerotic cardiovascular disease, multiple cardiovascular risk factors, age of at least 70 years, or evidence of microvascular complications. Follow-up ended in August 2018. Interventions Patients were randomized to receive 5 mg of linagliptin once daily (n = 3023) or 1 to 4 mg of glimepiride once daily (n = 3010) in addition to usual care. Investigators were encouraged to intensify glycemic treatment, primarily by adding or adjusting metformin, α-glucosidase inhibitors, thiazolidinediones, or insulin, according to clinical need. Main Outcomes and Measures The primary outcome was time to first occurrence of cardiovascular death, nonfatal myocardial infarction, or nonfatal stroke, with the aim to establish noninferiority of linagliptin vs glimepiride, defined by the upper limit of the 2-sided 95.47% CI for the hazard ratio (HR) of linagliptin relative to glimepiride being less than 1.3. Results Of 6042 participants randomized, 6033 (mean age, 64.0 years; 2414 [39.9%] women; mean glycated hemoglobin, 7.2%; median duration of diabetes, 6.3 years; 42% with macrovascular disease; 59% had undergone metformin monotherapy) were treated and analyzed. The median duration of follow-up was 6.3 years. The primary outcome occurred in 356 of 3023 participants (11.8%) in the linagliptin group and 362 of 3010 (12.0%) in the glimepiride group (HR, 0.98 [95.47% CI, 0.84-1.14]), meeting the prespecified noninferiority criterion. Conclusions and Relevance Among adults with relatively early type 2 diabetes and elevated cardiovascular risk, the use of linagliptin compared with glimepiride over a median 6.3 years resulted in a noninferior risk of a composite cardiovascular outcome. Trial Registration ClinicalTrials.gov Identifier: NCT01243424
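As specified above, noninferiority here is judged on the hazard ratio scale: the upper limit of the 2-sided 95.47% CI must stay below 1.3. A minimal sketch of that check using the reported interval:

```python
# Noninferiority on the hazard ratio scale, as prespecified in the abstract: the
# upper limit of the 95.47% CI for the HR of linagliptin vs glimepiride must be < 1.3.
def hr_noninferior(hr_ci_upper, margin=1.3):
    return hr_ci_upper < margin

# Values from the abstract: HR 0.98, 95.47% CI 0.84-1.14.
print(hr_noninferior(1.14))  # True -> noninferiority criterion met
```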

Journal ArticleDOI
03 Sep 2019-JAMA
TL;DR: Among outpatient health care personnel, N95 respirators vs medical masks as worn by participants in this trial resulted in no significant difference in the incidence of laboratory-confirmed influenza.
Abstract: Importance Clinical studies have been inconclusive about the effectiveness of N95 respirators and medical masks in preventing health care personnel (HCP) from acquiring workplace viral respiratory infections. Objective To compare the effect of N95 respirators vs medical masks for prevention of influenza and other viral respiratory infections among HCP. Design, Setting, and Participants A cluster randomized pragmatic effectiveness study conducted at 137 outpatient study sites at 7 US medical centers between September 2011 and May 2015, with final follow-up in June 2016. Each year for 4 years, during the 12-week period of peak viral respiratory illness, pairs of outpatient sites (clusters) within each center were matched and randomly assigned to the N95 respirator or medical mask groups. Interventions Overall, 1993 participants in 189 clusters were randomly assigned to wear N95 respirators (2512 HCP-seasons of observation) and 2058 in 191 clusters were randomly assigned to wear medical masks (2668 HCP-seasons) when near patients with respiratory illness. Main Outcomes and Measures The primary outcome was the incidence of laboratory-confirmed influenza. Secondary outcomes included incidence of acute respiratory illness, laboratory-detected respiratory infections, laboratory-confirmed respiratory illness, and influenzalike illness. Adherence to interventions was assessed. Results Among 2862 randomized participants (mean [SD] age, 43 [11.5] years; 2369 [82.8%] women), 2371 completed the study and accounted for 5180 HCP-seasons. There were 207 laboratory-confirmed influenza infection events (8.2% of HCP-seasons) in the N95 respirator group and 193 (7.2% of HCP-seasons) in the medical mask group (difference, 1.0% [95% CI, −0.5% to 2.5%]; P = .18) (adjusted odds ratio [OR], 1.18 [95% CI, 0.95-1.45]). There were 1556 acute respiratory illness events in the respirator group vs 1711 in the mask group (difference, −21.9 per 1000 HCP-seasons [95% CI, −48.2 to 4.4]; P = .10); 679 laboratory-detected respiratory infections in the respirator group vs 745 in the mask group (difference, −8.9 per 1000 HCP-seasons [95% CI, −33.3 to 15.4]; P = .47); 371 laboratory-confirmed respiratory illness events in the respirator group vs 417 in the mask group (difference, −8.6 per 1000 HCP-seasons [95% CI, −28.2 to 10.9]; P = .39); and 128 influenzalike illness events in the respirator group vs 166 in the mask group (difference, −11.3 per 1000 HCP-seasons [95% CI, −23.8 to 1.3]; P = .08). In the respirator group, 89.4% of participants reported “always” or “sometimes” wearing their assigned devices vs 90.2% in the mask group. Conclusions and Relevance Among outpatient health care personnel, N95 respirators vs medical masks as worn by participants in this trial resulted in no significant difference in the incidence of laboratory-confirmed influenza. Trial Registration ClinicalTrials.gov Identifier: NCT01249625
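The secondary-outcome differences are expressed per 1000 HCP-seasons and can be reproduced from the raw counts and denominators quoted above, as in this short sketch:

```python
# Reproducing the "events per 1000 HCP-seasons" differences quoted in the abstract
# from the raw counts (e.g., 1556 acute respiratory illness events over 2512
# HCP-seasons in the respirator group vs 1711 over 2668 in the mask group).
def rate_difference_per_1000(events_a, seasons_a, events_b, seasons_b):
    return (events_a / seasons_a - events_b / seasons_b) * 1000

print(round(rate_difference_per_1000(1556, 2512, 1711, 2668), 1))  # -21.9
```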

Journal ArticleDOI
25 Jun 2019-JAMA
TL;DR: Among patients undergoing percutaneous coronary intervention, P2Y12 inhibitor monotherapy after 3 months of DAPT compared with prolonged DAPT resulted in noninferior rates of major adverse cardiac and cerebrovascular events.
Abstract: Importance Data on P2Y12 inhibitor monotherapy after short-duration dual antiplatelet therapy (DAPT) in patients undergoing percutaneous coronary intervention (PCI) are limited. Objective To determine whether P2Y12 inhibitor monotherapy after 3 months of DAPT is noninferior to 12 months of DAPT in patients undergoing PCI. Design, Setting, and Participants The SMART-CHOICE trial was an open-label, noninferiority, randomized study that was conducted in 33 hospitals in Korea and included 2993 patients undergoing PCI with drug-eluting stents. Enrollment began March 18, 2014, and follow-up was completed July 19, 2018. Interventions Patients were randomly assigned to receive aspirin plus a P2Y12 inhibitor for 3 months and thereafter P2Y12 inhibitor alone (n = 1495) or DAPT for 12 months (n = 1498). Main Outcomes and Measures The primary end point was major adverse cardiac and cerebrovascular events (a composite of all-cause death, myocardial infarction, or stroke) at 12 months after the index procedure. Secondary end points included the components of the primary end point and bleeding defined as Bleeding Academic Research Consortium type 2 to 5. The noninferiority margin was 1.8%. Results Among 2993 patients who were randomized (mean age, 64 years; 795 women [26.6%]), 2912 (97.3%) completed the trial. Adherence to the study protocol was 79.3% in the P2Y12 inhibitor monotherapy group and 95.2% in the DAPT group. At 12 months, major adverse cardiac and cerebrovascular events occurred in 42 patients in the P2Y12 inhibitor monotherapy group and in 36 patients in the DAPT group (2.9% vs 2.5%; difference, 0.4% [1-sided 95% CI, –∞% to 1.3%]; P = .007 for noninferiority). There were no significant differences in all-cause death (21 [1.4%] vs 18 [1.2%]; hazard ratio [HR], 1.18; 95% CI, 0.63-2.21; P = .61), myocardial infarction (11 [0.8%] vs 17 [1.2%]; HR, 0.66; 95% CI, 0.31-1.40; P = .28), or stroke (11 [0.8%] vs 5 [0.3%]; HR, 2.23; 95% CI, 0.78-6.43; P = .14) between the 2 groups. The rate of bleeding was significantly lower in the P2Y12 inhibitor monotherapy group than in the DAPT group (2.0% vs 3.4%; HR, 0.58; 95% CI, 0.36-0.92; P = .02). Conclusions and Relevance Among patients undergoing percutaneous coronary intervention, P2Y12 inhibitor monotherapy after 3 months of DAPT compared with prolonged DAPT resulted in noninferior rates of major adverse cardiac and cerebrovascular events. Because of limitations in the study population and adherence, further research is needed in other populations. Trial Registration ClinicalTrials.gov Identifier: NCT02079194
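The noninferiority reasoning mirrors the other trials above: recompute the observed difference in 12-month event rates from the counts, then compare the upper limit of the 1-sided 95% CI against the 1.8% margin. A minimal sketch using the figures in the abstract:

```python
# Observed difference in 12-month MACCE rates from the counts in the abstract
# (42/1495 vs 36/1498), checked against the 1.8% noninferiority margin using the
# reported upper limit of the 1-sided 95% CI (1.3%).
def pct(events, n):
    return 100 * events / n

diff = pct(42, 1495) - pct(36, 1498)
print(f"observed difference: {diff:.1f} percentage points")  # ~0.4
print("noninferior:", 1.3 < 1.8)                              # upper CI limit below margin
```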

Journal ArticleDOI
17 Sep 2019-JAMA
TL;DR: This exploratory study of patients with heart failure and reduced ejection fraction treated with sacubitril-valsartan found a correlation between changes in log2 NT-proBNP concentrations and left ventricular (LV) ejection fraction, LV end-diastolic volume index (LVEDVI), LV end-systolic volume index (LVESVI), left atrial volume index (LAVI), and the ratio of early transmitral Doppler velocity to early diastolic annular velocity (E/e′) at 12 months.
Abstract: Importance In patients with heart failure and reduced ejection fraction (HFrEF), treatment with sacubitril-valsartan reduces N-terminal pro–b-type natriuretic peptide (NT-proBNP) concentrations. The effect of sacubitril-valsartan on cardiac remodeling is uncertain. Objective To determine whether NT-proBNP changes in patients with HFrEF treated with sacubitril-valsartan correlate with changes in measures of cardiac volume and function. Design, Setting, and Participants Prospective, 12-month, single-group, open-label study of patients with HFrEF enrolled at 78 outpatient sites in the United States. Sacubitril-valsartan was initiated and the dose adjusted. Enrollment commenced on October 25, 2016, and follow-up was completed on October 22, 2018. Exposures NT-proBNP concentrations among patients treated with sacubitril-valsartan. Main Outcomes and Measures The primary outcome was the correlation between changes in log2–NT-proBNP concentrations and left ventricular (LV) EF, LV end-diastolic volume index (LVEDVI), LV end-systolic volume index (LVESVI), left atrial volume index (LAVI), and ratio of early transmitral Doppler velocity/early diastolic annular velocity (E/e′) at 12 months. Results Among 794 patients (mean age, 65.1 years; 226 women [28.5%]; mean LVEF = 28.2%), 654 (82.4%) completed the study. The median NT-proBNP concentration was 816 pg/mL (interquartile range [IQR], 332-1822) at baseline and 455 pg/mL (IQR, 153-1090) at 12 months. Conclusions and Relevance In this exploratory study of patients with HFrEF treated with sacubitril-valsartan, reduction in NT-proBNP concentration was weakly yet significantly correlated with improvements in markers of cardiac volume and function at 12 months. The observed reverse cardiac remodeling may provide a mechanistic explanation for the effects of sacubitril-valsartan in patients with HFrEF. Trial Registration ClinicalTrials.gov Identifier: NCT02887183
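The primary analysis works with changes in log2 NT-proBNP, so a one-unit change corresponds to a halving or doubling of the concentration. A minimal sketch of that transformation; the cohort medians above are used here only to illustrate the arithmetic, not to reproduce the study's patient-level correlation analysis:

```python
# Change in log2 NT-proBNP: -1.0 means the concentration halved.
# Illustrative only; the study correlated patient-level changes with imaging measures.
import math

def log2_change(baseline_pg_ml, follow_up_pg_ml):
    return math.log2(follow_up_pg_ml) - math.log2(baseline_pg_ml)

# Using the cohort medians reported in the abstract (816 -> 455 pg/mL):
print(round(log2_change(816, 455), 2))  # about -0.84, i.e., roughly a 44% reduction
```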

Journal ArticleDOI
12 Mar 2019-JAMA
TL;DR: The relative paucity of strong evidence to answer many of the PICO questions supports the need for additional research and an international consensus for accepted definitions and hemoglobin thresholds, as well as clinically meaningful end points for multicenter trials.
Abstract: Importance Blood transfusion is one of the most frequently used therapies worldwide and is associated with benefits, risks, and costs. Objective To develop a set of evidence-based recommendations for patient blood management (PBM) and for research. Evidence Review The scientific committee developed 17 Population/Intervention/Comparison/Outcome (PICO) questions for red blood cell (RBC) transfusion in adult patients in 3 areas: preoperative anemia (3 questions), RBC transfusion thresholds (11 questions), and implementation of PBM programs (3 questions). These questions guided the literature search in 4 biomedical databases (MEDLINE, EMBASE, Cochrane Library, Transfusion Evidence Library), searched from inception to January 2018. Meta-analyses were conducted with the GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) methodology and the Evidence-to-Decision framework by 3 panels including clinical and scientific experts, nurses, patient representatives, and methodologists, to develop clinical recommendations during a consensus conference in Frankfurt/Main, Germany, in April 2018. Findings From 17 607 literature citations associated with the 17 PICO questions, 145 studies, including 63 randomized clinical trials with 23 143 patients and 82 observational studies with more than 4 million patients, were analyzed. For preoperative anemia, 4 clinical and 3 research recommendations were developed, including the strong recommendation to detect and manage anemia sufficiently early before major elective surgery. For RBC transfusion thresholds, 4 clinical and 6 research recommendations were developed, including 2 strong clinical recommendations for critically ill but clinically stable intensive care patients with or without septic shock (recommending a restrictive hemoglobin concentration threshold for RBC transfusion). For implementation of PBM programs, 2 clinical and 3 research recommendations were developed. Conclusions and Relevance The 2018 PBM International Consensus Conference defined the current status of the PBM evidence base for practice and research purposes and established 10 clinical recommendations and 12 research recommendations for preoperative anemia, RBC transfusion thresholds for adults, and implementation of PBM programs. The relative paucity of strong evidence to answer many of the PICO questions supports the need for additional research and an international consensus for accepted definitions and hemoglobin thresholds, as well as clinically meaningful end points for multicenter trials.
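The recommendations rest on meta-analyses of the identified trials. As a hedged, generic illustration of the pooling step (not the consensus conference's actual GRADE analyses, and using hypothetical inputs), an inverse-variance fixed-effect combination of log risk ratios looks like this:

```python
# Minimal inverse-variance fixed-effect pooling of hypothetical log risk ratios.
# Illustrative only; the consensus analyses used GRADE methodology on the studies
# identified in the review.
import math

def pool_fixed_effect(log_effects, standard_errors):
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, log_effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical log risk ratios and standard errors from three trials:
log_rr, se = pool_fixed_effect([-0.10, 0.05, -0.02], [0.08, 0.12, 0.10])
print(f"pooled RR ~ {math.exp(log_rr):.2f} "
      f"(95% CI {math.exp(log_rr - 1.96 * se):.2f}-{math.exp(log_rr + 1.96 * se):.2f})")
```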

Journal ArticleDOI
22 Jan 2019-JAMA
TL;DR: The use of aspirin in individuals without cardiovascular disease was associated with a lower risk of cardiovascular events and an increased risk of major bleeding; this information may inform discussions with patients about aspirin for primary prevention of cardiovascular events and bleeding.
Abstract: Importance The role for aspirin in cardiovascular primary prevention remains controversial, with potential benefits limited by an increased bleeding risk. Objective To assess the association of aspirin use for primary prevention with cardiovascular events and bleeding. Data Sources PubMed, Embase, and the Cochrane Library Central Register of Controlled Trials were searched from the earliest available date through November 1, 2018. Study Selection Randomized clinical trials enrolling at least 1000 participants with no known cardiovascular disease and a follow-up of at least 12 months were included. Included studies compared aspirin use with no aspirin (placebo or no treatment). Data Extraction and Synthesis Data were screened and extracted independently by both investigators. Bayesian and frequentist meta-analyses were performed. Main Outcomes and Measures The primary cardiovascular outcome was a composite of cardiovascular mortality, nonfatal myocardial infarction, and nonfatal stroke. The primary bleeding outcome was any major bleeding (defined by the individual studies). Results A total of 13 trials randomizing 164 225 participants with 1 050 511 participant-years of follow-up were included. The median age of trial participants was 62 years (range, 53-74), 77 501 (47%) were men, 30 361 (19%) had diabetes, and the median baseline risk of the primary cardiovascular outcome was 10.2% (range, 2.6%-30.9%). Aspirin use was associated with significant reductions in the composite cardiovascular outcome compared with no aspirin (60.2 per 10 000 participant-years with aspirin and 65.2 per 10 000 participant-years with no aspirin) (hazard ratio [HR], 0.89 [95% credible interval, 0.84-0.94]; absolute risk reduction, 0.41% [95% CI, 0.23%-0.59%]; number needed to treat, 241). Aspirin use was associated with an increased risk of major bleeding events compared with no aspirin (23.1 per 10 000 participant-years with aspirin and 16.4 per 10 000 participant-years with no aspirin) (HR, 1.43 [95% credible interval, 1.30-1.56]; absolute risk increase, 0.47% [95% CI, 0.34%-0.62%]; number needed to harm, 210). Conclusions and Relevance The use of aspirin in individuals without cardiovascular disease was associated with a lower risk of cardiovascular events and an increased risk of major bleeding. This information may inform discussions with patients about aspirin for primary prevention of cardiovascular events and bleeding.
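The number needed to treat (241) and number needed to harm (210) quoted above are simply the reciprocals of the absolute risk difference. A minimal sketch; the small gaps versus the published figures reflect rounding of the risk differences to two decimal places:

```python
# Number needed to treat / harm: reciprocal of the absolute risk difference.
def number_needed(absolute_risk_difference):
    return 1 / absolute_risk_difference

print(round(number_needed(0.0041)))  # ~244 (published NNT: 241)
print(round(number_needed(0.0047)))  # ~213 (published NNH: 210)
```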

Journal ArticleDOI
06 Aug 2019-JAMA
TL;DR: Among older adults without cognitive impairment or dementia, both an unfavorable lifestyle and high genetic risk were significantly associated with higher dementia risk, while a favorable lifestyle was associated with a lower dementia risk among participants with high Genetic risk.
Abstract: Importance Genetic factors increase risk of dementia, but the extent to which this can be offset by lifestyle factors is unknown. Objective To investigate whether a healthy lifestyle is associated with lower risk of dementia regardless of genetic risk. Design, Setting, and Participants A retrospective cohort study that included adults of European ancestry aged at least 60 years without cognitive impairment or dementia at baseline. Participants joined the UK Biobank study from 2006 to 2010 and were followed up until 2016 or 2017. Exposures A polygenic risk score for dementia with low (lowest quintile), intermediate (quintiles 2 to 4), and high (highest quintile) risk categories and a weighted healthy lifestyle score, including no current smoking, regular physical activity, healthy diet, and moderate alcohol consumption, categorized into favorable, intermediate, and unfavorable lifestyles. Main Outcomes and Measures Incident all-cause dementia, ascertained through hospital inpatient and death records. Results A total of 196 383 individuals (mean [SD] age, 64.1 [2.9] years; 52.7% were women) were followed up for 1 545 433 person-years (median [interquartile range] follow-up, 8.0 [7.4-8.6] years). Overall, 68.1% of participants followed a favorable lifestyle, 23.6% followed an intermediate lifestyle, and 8.2% followed an unfavorable lifestyle. Twenty percent had high polygenic risk scores, 60% had intermediate risk scores, and 20% had low risk scores. Of the participants with high genetic risk, 1.23% (95% CI, 1.13%-1.35%) developed dementia compared with 0.63% (95% CI, 0.56%-0.71%) of the participants with low genetic risk (adjusted hazard ratio, 1.91 [95% CI, 1.64-2.23]). Of the participants with a high genetic risk and unfavorable lifestyle, 1.78% (95% CI, 1.38%-2.28%) developed dementia compared with 0.56% (95% CI, 0.48%-0.66%) of participants with low genetic risk and favorable lifestyle (hazard ratio, 2.83 [95% CI, 2.09-3.83]). There was no significant interaction between genetic risk and lifestyle factors (P = .99). Among participants with high genetic risk, 1.13% (95% CI, 1.01%-1.26%) of those with a favorable lifestyle developed dementia compared with 1.78% (95% CI, 1.38%-2.28%) with an unfavorable lifestyle (hazard ratio, 0.68 [95% CI, 0.51-0.90]). Conclusions and Relevance Among older adults without cognitive impairment or dementia, both an unfavorable lifestyle and high genetic risk were significantly associated with higher dementia risk. A favorable lifestyle was associated with a lower dementia risk among participants with high genetic risk.
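For participants with high genetic risk, the lifestyle contrast can be read directly from the absolute risks quoted above (1.13% with a favorable lifestyle vs 1.78% with an unfavorable one). A rough sketch of that comparison; the published estimate (HR, 0.68) is adjusted and time-to-event based, so the crude ratio differs slightly:

```python
# Crude comparison of the absolute dementia risks quoted for the high genetic risk
# group by lifestyle. Illustrative only; the published HR comes from an adjusted model.
favorable, unfavorable = 0.0113, 0.0178
print(f"crude risk ratio: {favorable / unfavorable:.2f}")        # ~0.63
print(f"absolute difference: {(unfavorable - favorable):.2%}")   # ~0.65 percentage points
```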

Journal ArticleDOI
09 Apr 2019-JAMA
TL;DR: This study assessed whether a database that combines EHR-derived clinical data with comprehensive genomic profiling (CGP) can identify and extend associations in non–small cell lung cancer (NSCLC), and how this approach may accelerate precision medicine.
Abstract: Importance Data sets linking comprehensive genomic profiling (CGP) to clinical outcomes may accelerate precision medicine. Objective To assess whether a database that combines EHR-derived clinical data with CGP can identify and extend associations in non–small cell lung cancer (NSCLC). Design, Setting, and Participants Clinical data from EHRs were linked with CGP results for 28 998 patients from 275 US oncology practices. Among 4064 patients with NSCLC, exploratory analyses of the associations of tumor genomics and patient characteristics with clinical outcomes were conducted, with data obtained between January 1, 2011, and January 1, 2018. Exposures Tumor CGP, including presence of a driver alteration (a pathogenic or likely pathogenic alteration in a gene shown to drive tumor growth); tumor mutation burden (TMB), defined as the number of mutations per megabase; and clinical characteristics gathered from EHRs. Main Outcomes and Measures Overall survival (OS), time receiving therapy, maximal therapy response (as documented by the treating physician in the EHR), and clinical benefit rate (fraction of patients with stable disease, partial response, or complete response) to therapy. Results Among 4064 patients with NSCLC (median age, 66.0 years; 51.9% female), 3183 (78.3%) had a history of smoking, 3153 (77.6%) had nonsquamous cancer, and 871 (21.4%) had an alteration in EGFR, ALK, or ROS1 (701 [17.2%] with EGFR, 128 [3.1%] with ALK, and 42 [1.0%] with ROS1 alterations). There were 1946 deaths in 7 years. For patients with a driver alteration, improved OS was observed among those treated with (n = 575) vs not treated with (n = 560) targeted therapies (median, 18.6 months [95% CI, 15.2-21.7] vs 11.4 months [95% CI, 9.7-12.5] from advanced diagnosis). Conclusions and Relevance Among patients with NSCLC included in a longitudinal database of clinical data linked to CGP results from routine care, exploratory analyses replicated previously described associations between clinical and genomic characteristics, between driver mutations and response to targeted therapy, and between TMB and response to immunotherapy. These findings demonstrate the feasibility of creating a clinicogenomic database derived from routine clinical experience and provide support for further research and discovery evaluating this approach in oncology.
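Tumor mutation burden is defined above as mutations per megabase of sequenced DNA, so it is a simple normalization of the mutation count by panel size. A minimal sketch of that calculation; the panel size and mutation count below are hypothetical, for illustration only:

```python
# Tumor mutation burden (TMB) as defined in the abstract: mutations per megabase.
def tumor_mutation_burden(mutation_count, panel_size_megabases):
    return mutation_count / panel_size_megabases

# Hypothetical example: 12 eligible mutations detected over a 1.1-Mb sequencing panel.
print(f"TMB = {tumor_mutation_burden(12, 1.1):.1f} mutations/Mb")  # ~10.9
```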