
Showing papers in "JAMA in 2018"


Journal ArticleDOI
20 Nov 2018-JAMA
TL;DR: Key guidelines in the Physical Activity Guidelines for Americans, 2nd edition, provide information and guidance on the types and amounts of physical activity that provide substantial health benefits and emphasize that moving more and sitting less will benefit nearly everyone.
Abstract: Importance Approximately 80% of US adults and adolescents are insufficiently active. Physical activity fosters normal growth and development and can make people feel, function, and sleep better and reduce risk of many chronic diseases. Objective To summarize key guidelines in the Physical Activity Guidelines for Americans, 2nd edition (PAG). Process and Evidence Synthesis The 2018 Physical Activity Guidelines Advisory Committee conducted a systematic review of the science supporting physical activity and health. The committee addressed 38 questions and 104 subquestions and graded the evidence based on consistency and quality of the research. Evidence graded as strong or moderate was the basis of the key guidelines. The Department of Health and Human Services (HHS) based the PAG on the 2018 Physical Activity Guidelines Advisory Committee Scientific Report. Recommendations The PAG provides information and guidance on the types and amounts of physical activity to improve a variety of health outcomes for multiple population groups. Preschool-aged children (3 through 5 years) should be physically active throughout the day to enhance growth and development. Children and adolescents aged 6 through 17 years should do 60 minutes or more of moderate-to-vigorous physical activity daily. Adults should do at least 150 minutes to 300 minutes a week of moderate-intensity, or 75 minutes to 150 minutes a week of vigorous-intensity aerobic physical activity, or an equivalent combination of moderate- and vigorous-intensity aerobic activity. They should also do muscle-strengthening activities on 2 or more days a week. Older adults should do multicomponent physical activity that includes balance training as well as aerobic and muscle-strengthening activities. Pregnant and postpartum women should do at least 150 minutes of moderate-intensity aerobic activity a week. Adults with chronic conditions or disabilities, who are able, should follow the key guidelines for adults and do both aerobic and muscle-strengthening activities. Recommendations emphasize that moving more and sitting less will benefit nearly everyone. Individuals performing the least physical activity benefit most from even modest increases in moderate-to-vigorous physical activity. Additional benefits occur with more physical activity. Both aerobic and muscle-strengthening physical activity are beneficial. Conclusions and Relevance The Physical Activity Guidelines for Americans, 2nd edition, provides information and guidance on the types and amounts of physical activity that provide substantial health benefits. Health professionals and policy makers should facilitate awareness of the guidelines, promote the health benefits of physical activity, and support efforts to implement programs, practices, and policies that increase physical activity and improve the health of the US population.
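
The adult aerobic recommendation is effectively an arithmetic rule: the guideline treats 1 minute of vigorous activity as roughly equivalent to 2 minutes of moderate activity, which is how an "equivalent combination" is counted. A minimal sketch of that check (function name and structure are illustrative, not from the guideline):

```python
def meets_adult_aerobic_guideline(moderate_min_per_week: float,
                                  vigorous_min_per_week: float) -> bool:
    """Check the adult aerobic target: at least 150 moderate-equivalent
    minutes per week, with 1 vigorous minute counting as ~2 moderate
    minutes (the guideline's stated equivalence)."""
    moderate_equivalent = moderate_min_per_week + 2 * vigorous_min_per_week
    return moderate_equivalent >= 150  # 150-300 min/wk is the target range


# 150 min moderate, 75 min vigorous, or a mix all meet the target.
print(meets_adult_aerobic_guideline(150, 0))  # True
print(meets_adult_aerobic_guideline(0, 75))   # True
print(meets_adult_aerobic_guideline(90, 30))  # True (90 + 60 = 150)
```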

4,280 citations


Journal ArticleDOI
23 Jan 2018-JAMA
TL;DR: A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline.
Abstract: Importance Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. Objective To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Design Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. Findings The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. Conclusions and Relevance The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.

1,616 citations


Journal ArticleDOI
24 Apr 2018-JAMA
TL;DR: National Health and Nutrition Examination Survey data are used to characterize trends in obesity prevalence among US youth and adults between 2007-2008 and 2015-2016.
Abstract: This study uses National Health and Nutrition Examination Survey data to characterize trends in obesity prevalence among US youth and adults between 2007-2008 and 2015-2016.

1,326 citations


Journal ArticleDOI
02 Oct 2018-JAMA
TL;DR: A treat-to-target strategy aimed at reducing disease activity by at least 50% within 3 months and achieving remission or low disease activity within 6 months, with sequential drug treatment if needed, can prevent RA-related disability.
Abstract: Importance Rheumatoid arthritis (RA) occurs in about 5 per 1000 people and can lead to severe joint damage and disability. Significant progress has been made over the past 2 decades regarding understanding of disease pathophysiology, optimal outcome measures, and effective treatment strategies, including the recognition of the importance of diagnosing and treating RA early. Observations Early diagnosis and treatment of RA can avert or substantially slow progression of joint damage in up to 90% of patients, thereby preventing irreversible disability. The development of novel instruments to measure disease activity and identify the presence or absence of remission has facilitated new treatment strategies to arrest RA before joints are damaged irreversibly. Outcomes have been improved by recognizing the benefits of early diagnosis and early therapy with disease-modifying antirheumatic drugs (DMARDs). The treatment target is remission or a state of at least low disease activity, which should be attained within 6 months. Methotrexate is first-line therapy and should be prescribed at an optimal dose of 25 mg weekly and in combination with glucocorticoids; 40% to 50% of patients reach remission or at least low disease activity with this regimen. If this treatment fails, sequential application of targeted therapies, such as biologic agents (eg, tumor necrosis factor [TNF] inhibitors) or Janus kinase inhibitors in combination with methotrexate, has allowed up to 75% of these patients to reach the treatment target over time. New therapies have been developed in response to new pathogenetic findings. The costs of some therapies are considerable, but these costs are decreasing with the advent of biosimilar drugs (drugs essentially identical to the original biologic drugs but usually available at lower cost). Conclusions and Relevance Scientific advances have improved therapies that prevent progression of irreversible joint damage in up to 90% of patients with RA. Early treatment with methotrexate plus glucocorticoids and subsequently with other DMARDs, such as inhibitors of TNF, IL-6, or Janus kinases, improves outcomes and prevents RA-related disability. A treat-to-target strategy aimed at reducing disease activity by at least 50% within 3 months and achieving remission or low disease activity within 6 months, with sequential drug treatment if needed, can prevent RA-related disability.
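
The treat-to-target strategy reduces to two timed checkpoints. A minimal sketch of that decision logic, assuming simplified inputs (the function paraphrases the review's strategy for illustration; it is not a clinical tool):

```python
def treat_to_target_action(months_on_therapy: float,
                           improvement_fraction: float,
                           at_target: bool) -> str:
    """Paraphrase of the review's checkpoints: >=50% reduction in
    disease activity by 3 months, and remission or low disease
    activity (the target) by 6 months; otherwise step to the next
    drug in the sequence (eg, a TNF inhibitor or JAK inhibitor)."""
    if at_target:
        return "continue current regimen"
    if months_on_therapy >= 6:
        return "escalate: target not reached at 6 months"
    if months_on_therapy >= 3 and improvement_fraction < 0.5:
        return "escalate: <50% improvement at 3 months"
    return "continue and reassess"


print(treat_to_target_action(3, 0.3, at_target=False))  # escalate
```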

1,042 citations


Journal ArticleDOI
13 Mar 2018-JAMA
TL;DR: The United States spent approximately twice as much as other high-income countries on medical care, yet utilization rates in the United States were largely similar to those in other nations, and prices of labor and goods, including pharmaceuticals, and administrative costs appeared to be the major drivers of the difference in overall cost.
Abstract: Importance Health care spending in the United States is a major concern and is higher than in other high-income countries, but there is little evidence that efforts to reform US health care delivery have had a meaningful influence on controlling health care spending and costs. Objective To compare potential drivers of spending, such as structural capacity and utilization, in the United States with those of 10 of the highest-income countries (United Kingdom, Canada, Germany, Australia, Japan, Sweden, France, the Netherlands, Switzerland, and Denmark) to gain insight into what the United States can learn from these nations. Evidence Analysis of data primarily from 2013-2016 from key international organizations including the Organisation for Economic Co-operation and Development (OECD), comparing underlying differences in structural features, types of health care and social spending, and performance between the United States and 10 high-income countries. When data were not available for a given country or more accurate country-level estimates were available from sources other than the OECD, country-specific data sources were used. Findings In 2016, the US spent 17.8% of its gross domestic product on health care, and spending in the other countries ranged from 9.6% (Australia) to 12.4% (Switzerland). The proportion of the population with health insurance was 90% in the US, lower than the other countries (range, 99%-100%), and the US had the highest proportion of private health insurance (55.3%). For some determinants of health such as smoking, the US ranked second lowest of the countries (11.4% of the US population ≥15 years smokes daily; mean of all 11 countries, 16.6%), but the US had the highest percentage of adults who were overweight or obese at 70.1% (range for other countries, 23.8%-63.4%; mean of all 11 countries, 55.6%). Life expectancy in the US was the lowest of the 11 countries at 78.8 years (range for other countries, 80.7-83.9 years; mean of all 11 countries, 81.7 years), and infant mortality was the highest (5.8 deaths per 1000 live births in the US; 3.6 per 1000 for all 11 countries). The US did not differ substantially from the other countries in physician workforce (2.6 physicians per 1000; 43% primary care physicians) or nursing workforce (11.1 nurses per 1000). The US had comparable numbers of hospital beds (2.8 per 1000) but higher utilization of magnetic resonance imaging (118 per 1000) and computed tomography (245 per 1000) vs other countries. The US had similar rates of utilization (US discharges per 100 000 were 192 for acute myocardial infarction, 365 for pneumonia, 230 for chronic obstructive pulmonary disease; procedures per 100 000 were 204 for hip replacement, 226 for knee replacement, and 79 for coronary artery bypass graft surgery). Administrative costs of care (activities relating to planning, regulating, and managing health systems and services) accounted for 8% in the US vs a range of 1% to 3% in the other countries. For pharmaceutical costs, spending per capita was $1443 in the US vs a range of $466 to $939 in other countries. Salaries of physicians and nurses were higher in the US; for example, generalist physicians' salaries were $218 173 in the US compared with a range of $86 607 to $154 126 in the other countries. Conclusions and Relevance The United States spent approximately twice as much as other high-income countries on medical care, yet utilization rates in the United States were largely similar to those in other nations. Prices of labor and goods, including pharmaceuticals, and administrative costs appeared to be the major drivers of the difference in overall cost between the United States and other high-income countries. As patients, physicians, policy makers, and legislators actively debate the future of the US health system, data such as these are needed to inform policy decisions.

980 citations


Journal ArticleDOI
18 Sep 2018-JAMA
TL;DR: There was substantial variability in prevalence estimates of burnout among practicing physicians and marked variation in burnout definitions, assessment methods, and study quality.
Abstract: Importance Burnout is a self-reported job-related syndrome increasingly recognized as a critical factor affecting physicians and their patients. An accurate estimate of burnout prevalence among physicians would have important health policy implications, but the overall prevalence is unknown. Objective To characterize the methods used to assess burnout and provide an estimate of the prevalence of physician burnout. Data Sources and Study Selection Systematic search of EMBASE, ERIC, MEDLINE/PubMed, psycARTICLES, and psycINFO for studies on the prevalence of burnout in practicing physicians (ie, excluding physicians in training) published before June 1, 2018. Data Extraction and Synthesis Burnout prevalence and study characteristics were extracted independently by 3 investigators. Although meta-analytic pooling was planned, variation in study designs and burnout ascertainment methods, as well as statistical heterogeneity, made quantitative pooling inappropriate. Therefore, studies were summarized descriptively and assessed qualitatively. Main Outcomes and Measures Point or period prevalence of burnout assessed by questionnaire. Results Burnout prevalence data were extracted from 182 studies involving 109 628 individuals in 45 countries published between 1991 and 2018. In all, 85.7% (156/182) of studies used a version of the Maslach Burnout Inventory (MBI) to assess burnout. Studies variably reported prevalence estimates of overall burnout or burnout subcomponents: 67.0% (122/182) on overall burnout, 72.0% (131/182) on emotional exhaustion, 68.1% (124/182) on depersonalization, and 63.2% (115/182) on low personal accomplishment. Studies used at least 142 unique definitions for meeting overall burnout or burnout subscale criteria, indicating substantial disagreement in the literature on what constituted burnout. Studies variably defined burnout based on predefined cutoff scores or sample quantiles and used markedly different cutoff definitions. Among studies using instruments based on the MBI, there were at least 47 distinct definitions of overall burnout prevalence and 29, 26, and 26 definitions of emotional exhaustion, depersonalization, and low personal accomplishment prevalence, respectively. Overall burnout prevalence ranged from 0% to 80.5%. Emotional exhaustion, depersonalization, and low personal accomplishment prevalence ranged from 0% to 86.2%, 0% to 89.9%, and 0% to 87.1%, respectively. Because of inconsistencies in definitions of and assessment methods for burnout across studies, associations between burnout and sex, age, geography, time, specialty, and depressive symptoms could not be reliably determined. Conclusions and Relevance In this systematic review, there was substantial variability in prevalence estimates of burnout among practicing physicians and marked variation in burnout definitions, assessment methods, and study quality. These findings preclude definitive conclusions about the prevalence of burnout and highlight the importance of developing a consensus definition of burnout and of standardizing measurement tools to assess the effects of chronic occupational stress on physicians.
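
The percentages above were reconstructed from the raw counts given in the abstract (the scraped text had dropped the decimal points). A quick spot-check:

```python
# Spot-check of the abstract's proportions against the raw counts.
counts = {
    "used a version of the MBI": (156, 182),
    "reported overall burnout": (122, 182),
    "reported emotional exhaustion": (131, 182),
    "reported depersonalization": (124, 182),
    "reported low personal accomplishment": (115, 182),
}
for label, (num, den) in counts.items():
    print(f"{label}: {100 * num / den:.1f}%")
# -> 85.7%, 67.0%, 72.0%, 68.1%, 63.2%, matching the text.
```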

978 citations


Journal ArticleDOI
Ali H. Mokdad1, Katherine Ballestros1, Michelle Echko1, Scott D Glenn1, Helen E Olsen1, Erin C Mullany1, Alexander Lee1, Abdur Rahman Khan2, Alireza Ahmadi3, Alireza Ahmadi4, Alize J. Ferrari1, Alize J. Ferrari5, Alize J. Ferrari6, Amir Kasaeian7, Andrea Werdecker, Austin Carter1, Ben Zipkin1, Benn Sartorius8, Benn Sartorius9, Berrin Serdar10, Bryan L. Sykes11, Christopher Troeger1, Christina Fitzmaurice1, Christina Fitzmaurice12, Colin D. Rehm13, Damian Santomauro6, Damian Santomauro5, Damian Santomauro1, Daniel Kim14, Danny V. Colombara1, David C. Schwebel15, Derrick Tsoi1, Dhaval Kolte16, Elaine O. Nsoesie1, Emma Nichols1, Eyal Oren17, Fiona J Charlson5, Fiona J Charlson6, Fiona J Charlson1, George C Patton18, Gregory A. Roth1, H. Dean Hosgood19, Harvey Whiteford1, Harvey Whiteford6, Harvey Whiteford5, Hmwe H Kyu1, Holly E. Erskine1, Holly E. Erskine6, Holly E. Erskine5, Hsiang Huang20, Ira Martopullo1, Jasvinder A. Singh15, Jean B. Nachega21, Jean B. Nachega22, Jean B. Nachega23, Juan Sanabria24, Juan Sanabria25, Kaja Abbas26, Kanyin Ong1, Karen M. Tabb27, Kristopher J. Krohn1, Leslie Cornaby1, Louisa Degenhardt1, Louisa Degenhardt28, Mark Moses1, Maryam S. Farvid29, Max Griswold1, Michael H. Criqui30, Michelle L. Bell31, Minh Nguyen1, Mitch T Wallin32, Mitch T Wallin33, Mojde Mirarefin1, Mostafa Qorbani, Mustafa Z. Younis34, Nancy Fullman1, Patrick Liu1, Paul S Briant1, Philimon Gona35, Rasmus Havmoller3, Ricky Leung36, Ruth W Kimokoti37, Shahrzad Bazargan-Hejazi38, Shahrzad Bazargan-Hejazi39, Simon I. Hay40, Simon I. Hay1, Simon Yadgir1, Stan Biryukov1, Stein Emil Vollset1, Stein Emil Vollset41, Tahiya Alam1, Tahvi Frank1, Talha Farid2, Ted R. Miller42, Ted R. Miller43, Theo Vos1, Till Bärnighausen29, Till Bärnighausen44, Tsegaye Telwelde Gebrehiwot45, Yuichiro Yano46, Ziyad Al-Aly47, Alem Mehari48, Alexis J. Handal49, Amit Kandel50, Ben Anderson51, Brian J. Biroscak52, Brian J. Biroscak31, Dariush Mozaffarian53, E. Ray Dorsey54, Eric L. Ding29, Eun-Kee Park55, Gregory R. Wagner29, Guoqing Hu56, Honglei Chen57, Jacob E. Sunshine51, Jagdish Khubchandani58, Janet L Leasher59, Janni Leung5, Janni Leung51, Joshua A. Salomon29, Jürgen Unützer51, Leah E. Cahill60, Leah E. Cahill29, Leslie T. Cooper61, Masako Horino, Michael Brauer1, Michael Brauer62, Nicholas J K Breitborde63, Peter J. Hotez64, Roman Topor-Madry65, Roman Topor-Madry66, Samir Soneji67, Saverio Stranges68, Spencer L. James1, Stephen M. Amrock69, Sudha Jayaraman70, Tejas V. Patel, Tomi Akinyemiju15, Vegard Skirbekk71, Vegard Skirbekk41, Yohannes Kinfu72, Zulfiqar A Bhutta73, Jost B. Jonas44, Christopher J L Murray1 
Institute for Health Metrics and Evaluation1, University of Louisville2, Karolinska Institutet3, Kermanshah University of Medical Sciences4, University of Queensland5, Centre for Mental Health6, Tehran University of Medical Sciences7, South African Medical Research Council8, University of KwaZulu-Natal9, University of Colorado Boulder10, University of California, Irvine11, Fred Hutchinson Cancer Research Center12, Montefiore Medical Center13, Northeastern University14, University of Alabama at Birmingham15, Brown University16, San Diego State University17, University of Melbourne18, Albert Einstein College of Medicine19, Cambridge Health Alliance20, Johns Hopkins University21, University of Cape Town22, University of Pittsburgh23, Marshall University24, Case Western Reserve University25, University of London26, University of Illinois at Urbana–Champaign27, National Drug and Alcohol Research Centre28, Harvard University29, University of California, San Diego30, Yale University31, Georgetown University32, Veterans Health Administration33, Jackson State University34, University of Massachusetts Boston35, State University of New York System36, Simmons College37, University of California, Los Angeles38, Charles R. Drew University of Medicine and Science39, University of Oxford40, Norwegian Institute of Public Health41, Curtin University42, Pacific Institute43, Heidelberg University44, Jimma University45, Northwestern University46, Washington University in St. Louis47, Howard University48, University of New Mexico49, University at Buffalo50, University of Washington51, University of South Florida52, Tufts University53, University of Rochester Medical Center54, Kosin University55, Central South University56, Michigan State University57, Ball State University58, Nova Southeastern University59, Dalhousie University60, Mayo Clinic61, University of British Columbia62, Ohio State University63, Baylor University64, Wrocław Medical University65, Jagiellonian University Medical College66, Dartmouth College67, University of Western Ontario68, Oregon Health & Science University69, Virginia Commonwealth University70, Columbia University71, University of Canberra72, Aga Khan University73
10 Apr 2018-JAMA
TL;DR: There are wide differences in the burden of disease at the state level and specific diseases and risk factors, such as drug use disorders, high BMI, poor diet, high fasting plasma glucose level, and alcohol use disorders are increasing and warrant increased attention.
Abstract: Introduction Several studies have measured health outcomes in the United States, but none have provided a comprehensive assessment of patterns of health by state. Objective To use the results of the Global Burden of Disease Study (GBD) to report trends in the burden of diseases, injuries, and risk factors at the state level from 1990 to 2016. Design and Setting A systematic analysis of published studies and available data sources estimates the burden of disease by age, sex, geography, and year. Main Outcomes and Measures Prevalence, incidence, mortality, life expectancy, healthy life expectancy (HALE), years of life lost (YLLs) due to premature mortality, years lived with disability (YLDs), and disability-adjusted life-years (DALYs) for 333 causes and 84 risk factors with 95% uncertainty intervals (UIs) were computed. Results Between 1990 and 2016, overall death rates in the United States declined from 745.2 (95% UI, 740.6 to 749.8) per 100 000 persons to 578.0 (95% UI, 569.4 to 587.1) per 100 000 persons. The probability of death among adults aged 20 to 55 years declined in 31 states and Washington, DC from 1990 to 2016. In 2016, Hawaii had the highest life expectancy at birth (81.3 years) and Mississippi had the lowest (74.7 years), a 6.6-year difference. Minnesota had the highest HALE at birth (70.3 years), and West Virginia had the lowest (63.8 years), a 6.5-year difference. The leading causes of DALYs in the United States for 1990 and 2016 were ischemic heart disease and lung cancer, while the third leading cause in 1990 was low back pain, and the third leading cause in 2016 was chronic obstructive pulmonary disease. Opioid use disorders moved from the 11th leading cause of DALYs in 1990 to the 7th leading cause in 2016, representing a 74.5% (95% UI, 42.8% to 93.9%) change. In 2016, each of the following 6 risks individually accounted for more than 5% of risk-attributable DALYs: tobacco consumption, high body mass index (BMI), poor diet, alcohol and drug use, high fasting plasma glucose, and high blood pressure. Across all US states, the top risk factors in terms of attributable DALYs were due to 1 of the 3 following causes: tobacco consumption (32 states), high BMI (10 states), or alcohol and drug use (8 states). Conclusions and Relevance There are wide differences in the burden of disease at the state level. Specific diseases and risk factors, such as drug use disorders, high BMI, poor diet, high fasting plasma glucose level, and alcohol use disorders are increasing and warrant increased attention. These data can be used to inform national health priorities for research, clinical care, and policy.
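
As a quick arithmetic check on the headline trend, the quoted decline in the all-cause death rate works out to about 22%; a worked computation using only the point estimates above:

```python
# Percent decline in the US all-cause death rate, 1990 to 2016,
# from the point estimates in the abstract (per 100 000 persons).
rate_1990 = 745.2
rate_2016 = 578.0
decline_pct = 100 * (rate_1990 - rate_2016) / rate_1990
print(f"{decline_pct:.1f}% decline")  # ~22.4%
```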

962 citations


Journal ArticleDOI
08 May 2018-JAMA
TL;DR: The USPSTF concludes with moderate certainty that the net benefit of PSA-based screening for prostate cancer in men aged 55 to 69 years is small for some men.
Abstract: Importance In the United States, the lifetime risk of being diagnosed with prostate cancer is approximately 11%, and the lifetime risk of dying of prostate cancer is 2.5%. The median age of death from prostate cancer is 80 years. Many men with prostate cancer never experience symptoms and, without screening, would never know they have the disease. African American men and men with a family history of prostate cancer have an increased risk of prostate cancer compared with other men. Objective To update the 2012 US Preventive Services Task Force (USPSTF) recommendation on prostate-specific antigen (PSA)–based screening for prostate cancer. Evidence Review The USPSTF reviewed the evidence on the benefits and harms of PSA-based screening for prostate cancer and subsequent treatment of screen-detected prostate cancer. The USPSTF also commissioned a review of existing decision analysis models and the overdiagnosis rate of PSA-based screening. The reviews also examined the benefits and harms of PSA-based screening in patient subpopulations at higher risk of prostate cancer, including older men, African American men, and men with a family history of prostate cancer. Findings Adequate evidence from randomized clinical trials shows that PSA-based screening programs in men aged 55 to 69 years may prevent approximately 1.3 deaths from prostate cancer over approximately 13 years per 1000 men screened. Screening programs may also prevent approximately 3 cases of metastatic prostate cancer per 1000 men screened. Potential harms of screening include frequent false-positive results and psychological harms. Harms of prostate cancer treatment include erectile dysfunction, urinary incontinence, and bowel symptoms. About 1 in 5 men who undergo radical prostatectomy develop long-term urinary incontinence, and 2 in 3 men will experience long-term erectile dysfunction. Adequate evidence shows that the harms of screening in men older than 70 years are at least moderate and greater than in younger men because of increased risk of false-positive results, diagnostic harms from biopsies, and harms from treatment. The USPSTF concludes with moderate certainty that the net benefit of PSA-based screening for prostate cancer in men aged 55 to 69 years is small for some men. How each man weighs specific benefits and harms will determine whether the overall net benefit is small. The USPSTF concludes with moderate certainty that the potential benefits of PSA-based screening for prostate cancer in men 70 years and older do not outweigh the expected harms. Conclusions and Recommendation For men aged 55 to 69 years, the decision to undergo periodic PSA-based screening for prostate cancer should be an individual one and should include discussion of the potential benefits and harms of screening with their clinician. Screening offers a small potential benefit of reducing the chance of death from prostate cancer in some men. However, many men will experience potential harms of screening, including false-positive results that require additional testing and possible prostate biopsy; overdiagnosis and overtreatment; and treatment complications, such as incontinence and erectile dysfunction. In determining whether this service is appropriate in individual cases, patients and clinicians should consider the balance of benefits and harms on the basis of family history, race/ethnicity, comorbid medical conditions, patient values about the benefits and harms of screening and treatment-specific outcomes, and other health needs. Clinicians should not screen men who do not express a preference for screening. (C recommendation) The USPSTF recommends against PSA-based screening for prostate cancer in men 70 years and older. (D recommendation)
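
The absolute benefit quoted in the findings translates directly into a number needed to screen. A one-line worked computation from the abstract's point estimate:

```python
# ~1.3 prostate cancer deaths prevented per 1000 men screened over
# ~13 years implies the following number needed to screen (NNS).
deaths_prevented_per_1000 = 1.3
nns = 1000 / deaths_prevented_per_1000
print(f"NNS ≈ {nns:.0f} men per death prevented")  # ≈ 769
```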

902 citations


Journal ArticleDOI
20 Feb 2018-JAMA
TL;DR: The Berlin definition of acute respiratory distress syndrome addressed limitations of the American-European Consensus Conference definition, but poor reliability of some criteria may contribute to underrecognition by clinicians.
Abstract: Importance Acute respiratory distress syndrome (ARDS) is a life-threatening form of respiratory failure that affects approximately 200 000 patients each year in the United States, resulting in nearly 75 000 deaths annually. Globally, ARDS accounts for 10% of intensive care unit admissions, representing more than 3 million patients with ARDS annually. Objective To review advances in diagnosis and treatment of ARDS over the last 5 years. Evidence Review We searched MEDLINE, EMBASE, and the Cochrane Database of Systematic Reviews from 2012 to 2017 focusing on randomized clinical trials, meta-analyses, systematic reviews, and clinical practice guidelines. Articles were identified for full text review with manual review of bibliographies generating additional references. Findings After screening 1662 citations, 31 articles detailing major advances in the diagnosis or treatment of ARDS were selected. The Berlin definition proposed 3 categories of ARDS based on the severity of hypoxemia: mild (200 mm Hg < Pao2/Fio2 ≤ 300 mm Hg), moderate (100 mm Hg < Pao2/Fio2 ≤ 200 mm Hg), and severe (Pao2/Fio2 ≤ 100 mm Hg), along with explicit criteria related to timing of the syndrome’s onset, origin of edema, and the chest radiograph findings. The Berlin definition has significantly greater predictive validity for mortality than the prior American-European Consensus Conference definition. Clinician interpretation of the origin of edema and chest radiograph criteria may be less reliable in making a diagnosis of ARDS. The cornerstone of management remains mechanical ventilation, with a goal to minimize ventilator-induced lung injury (VILI). Aspirin was not effective in preventing ARDS in patients at high risk for the syndrome. Adjunctive interventions to further minimize VILI, such as prone positioning in patients with a Pao2/Fio2 ratio less than 150 mm Hg, were associated with a significant mortality benefit, whereas others (eg, extracorporeal carbon dioxide removal) remain experimental. Pharmacologic therapies such as β2-agonists, statins, and keratinocyte growth factor, which targeted pathophysiologic alterations in ARDS, were not beneficial and demonstrated possible harm. Recent guidelines on mechanical ventilation in ARDS provide evidence-based recommendations related to 6 interventions, including low tidal volume and inspiratory pressure ventilation, prone positioning, high-frequency oscillatory ventilation, higher vs lower positive end-expiratory pressure, lung recruitment maneuvers, and extracorporeal membrane oxygenation. Conclusions and Relevance The Berlin definition of acute respiratory distress syndrome addressed limitations of the American-European Consensus Conference definition, but poor reliability of some criteria may contribute to underrecognition by clinicians. No pharmacologic treatments aimed at the underlying pathology have been shown to be effective, and management remains supportive with lung-protective mechanical ventilation. Guidelines on mechanical ventilation in patients with acute respiratory distress syndrome can assist clinicians in delivering evidence-based interventions that may lead to improved outcomes.
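
The Berlin hypoxemia bands are a simple threshold classification on the Pao2/Fio2 ratio. A minimal sketch (function name illustrative; it encodes only the hypoxemia criterion quoted above, not the full definition):

```python
def berlin_hypoxemia_category(pao2_mm_hg: float, fio2_fraction: float) -> str:
    """Berlin severity bands on the Pao2/Fio2 ratio only; the full
    definition also requires timing, edema-origin, and chest
    radiograph criteria (not modeled here)."""
    ratio = pao2_mm_hg / fio2_fraction
    if ratio <= 100:
        return "severe"
    if ratio <= 200:
        return "moderate"
    if ratio <= 300:
        return "mild"
    return "hypoxemia criterion not met"


# Pao2 of 60 mm Hg on 80% oxygen -> ratio 75 -> severe.
print(berlin_hypoxemia_category(60, 0.8))
```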

873 citations


Journal ArticleDOI
03 Apr 2018-JAMA
TL;DR: To understand the degree to which a predictive or diagnostic algorithm can be said to be an instance of machine learning requires understanding how much of its structure or parameters were predetermined by humans.
Abstract: Nearly all aspects of modern life are in some way being changed by big data and machine learning. Netflix knows what movies people like to watch and Google knows what people want to know based on their search histories. Indeed, Google has recently begun to replace much of its existing non–machine learning technology with machine learning algorithms, and there is great optimism that these techniques can provide similar improvements across many sectors. It is no surprise then that medicine is awash with claims of revolution from the application of machine learning to big health care data. Recent examples have demonstrated that big data and machine learning can create algorithms that perform on par with human physicians.1 Though machine learning and big data may seem mysterious at first, they are in fact deeply related to traditional statistical models that are recognizable to most clinicians. It is our hope that elucidating these connections will demystify these techniques and provide a set of reasonable expectations for the role of machine learning and big data in health care. Machine learning was originally described as a program that learns to perform a task or make a decision automatically from data, rather than having the behavior explicitly programmed. However, this definition is very broad and could cover nearly any form of data-driven approach. For instance, consider the Framingham cardiovascular risk score, which assigns points to various factors and produces a number that predicts 10-year cardiovascular risk. Should this be considered an example of machine learning? The answer might obviously seem to be no. Closer inspection of the Framingham risk score reveals that the answer might not be as obvious as it first seems. The score was originally created2 by fitting a proportional hazards model to data from more than 5300 patients, and so the “rule” was in fact learned entirely from data. Designating a risk score as a machine learning algorithm might seem a strange notion, but this example reveals the uncertain nature of the original definition of machine learning. It is perhaps more useful to imagine an algorithm as existing along a continuum between fully human-guided vs fully machine-guided data analysis. To understand the degree to which a predictive or diagnostic algorithm can be said to be an instance of machine learning requires understanding how much of its structure or parameters were predetermined by humans. The trade-off between human specification of a predictive algorithm’s properties vs learning those properties from data is what is known as the machine learning spectrum. Returning to the Framingham study, to create the original risk score statisticians and clinical experts worked together to make many important decisions, such as which variables to include in the model, the relationship between the dependent and independent variables, and variable transformations and interactions. Since considerable human effort was used to define these properties, it would place low on the machine learning spectrum (#19 in the Figure and Supplement). Many evidence-based clinical practices are based on a statistical model of this sort, and so many clinical decisions in fact exist on the machine learning spectrum (middle left of Figure). On the extreme low end of the machine learning spectrum would be heuristics and rules of thumb that do not directly involve the use of any rules or models explicitly derived from data (bottom left of Figure).
Suppose a new cardiovascular risk score is created that includes possible extensions to the original model. For example, it could be that risk factors should not be added but instead should be multiplied or divided, or perhaps a particularly important risk factor should square the entire score if it is present. Moreover, if it is not known in advance which variables will be important, but thousands of individual measurements have been collected, how should a good model be identified from among the infinite possibilities? This is precisely what a machine learning algorithm attempts to do. As humans impose fewer assumptions on the algorithm, it moves further up the machine learning spectrum. However, there is never a specific threshold wherein a model suddenly becomes “machine learning”; rather, all of these approaches exist along a continuum, determined by how many human assumptions are placed onto the algorithm. An example of an approach high on the machine learning spectrum has recently emerged in the form of so-called deep learning models. Deep learning models are stunningly complex networks of artificial neurons that were designed expressly to create accurate models directly from raw data. Researchers recently demonstrated a deep learning algorithm capable of detecting diabetic retinopathy (#4 in the Figure, top center) from retinal photographs at a sensitivity equal to or greater than that of ophthalmologists.1 This model learned the diagnosis procedure directly from the raw pixels of the images with no human intervention outside of a team of ophthalmologists who annotated each image with the correct diagnosis. Because they are able to learn the task with little human instruction or prior assumptions, these deep learning algorithms rank very high on the machine learning spectrum (Figure, light blue circles). Though they require less human guidance, deep learning algorithms for image recognition require enormous amounts of data to capture the full complexity, variety, and nuance inherent to real-world images. Consequently, these algorithms often require hundreds of thousands of examples to extract the salient image features that are correlated with the outcome of interest. Higher placement on the machine learning spectrum does not imply superiority, because different tasks require different levels of human involvement. While algorithms high on the spectrum are often very flexible and can learn many tasks, they are often uninterpretable.
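
To make the spectrum concrete, here is a small, self-contained contrast between its two ends: a fully human-specified point score (in the spirit of Framingham, with made-up weights) and a flexible model that learns structure from data. The weights, thresholds, and synthetic data are all illustrative assumptions, not figures from the article:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def point_score(age: float, systolic_bp: float, smoker: bool) -> int:
    """Low end of the spectrum: every variable, cutoff, and weight
    chosen by humans (weights here are made up, not Framingham's)."""
    return 2 * (age >= 55) + 1 * (systolic_bp >= 140) + 2 * smoker

# High end: a model that chooses its own splits and interactions
# from data. Synthetic features stand in for a real cohort.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(size=500) > 0).astype(int)
learned = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(point_score(60, 150, True))          # fully human-guided: 5 points
print(learned.predict_proba(X[:1])[0, 1])  # risk estimate learned from data
```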

828 citations


Journal ArticleDOI
16 Jan 2018-JAMA
TL;DR: In the Swiss Multicenter Bypass or Sleeve Study (SM-BOSS), a 2-group randomized trial conducted from January 2007 until November 2011 (last follow-up in March 2017), excess BMI loss at 5 years did not differ significantly between laparoscopic sleeve gastrectomy and laparoscopic Roux-en-Y gastric bypass.
Abstract: Importance Sleeve gastrectomy is increasingly used in the treatment of morbid obesity, but its long-term outcome vs the standard Roux-en-Y gastric bypass procedure is unknown. Objective To determine whether there are differences between sleeve gastrectomy and Roux-en-Y gastric bypass in terms of weight loss, changes in comorbidities, increase in quality of life, and adverse events. Design, Setting, and Participants The Swiss Multicenter Bypass or Sleeve Study (SM-BOSS), a 2-group randomized trial, was conducted from January 2007 until November 2011 (last follow-up in March 2017). Of 3971 morbidly obese patients evaluated for bariatric surgery at 4 Swiss bariatric centers, 217 patients were enrolled and randomly assigned to sleeve gastrectomy or Roux-en-Y gastric bypass with a 5-year follow-up period. Interventions Patients were randomly assigned to undergo laparoscopic sleeve gastrectomy (n = 107) or laparoscopic Roux-en-Y gastric bypass (n = 110). Main Outcomes and Measures The primary end point was weight loss, expressed as percentage excess body mass index (BMI) loss. Exploratory end points were changes in comorbidities and adverse events. Results Among the 217 patients (mean age, 45.5 years; 72% women; mean BMI, 43.9), 205 (94.5%) completed the trial. Excess BMI loss was not significantly different at 5 years: for sleeve gastrectomy, 61.1%, vs Roux-en-Y gastric bypass, 68.3% (absolute difference, −7.18%; 95% CI, −14.30% to −0.06%; P = .22 after adjustment for multiple comparisons). Gastric reflux remission was observed more frequently after Roux-en-Y gastric bypass (60.4%) than after sleeve gastrectomy (25.0%). Gastric reflux worsened (more symptoms or increase in therapy) more often after sleeve gastrectomy (31.8%) than after Roux-en-Y gastric bypass (6.3%). The number of patients with reoperations or interventions was 16/101 (15.8%) after sleeve gastrectomy and 23/104 (22.1%) after Roux-en-Y gastric bypass. Conclusions and Relevance Among patients with morbid obesity, there was no significant difference in excess BMI loss between laparoscopic sleeve gastrectomy and laparoscopic Roux-en-Y gastric bypass at 5 years of follow-up after surgery. Trial Registration clinicaltrials.gov Identifier: NCT00356213
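
The primary end point, percentage excess BMI loss, is commonly computed against a reference BMI of 25; the abstract does not state the trial's exact formula, so the reference value below is an assumption. A minimal sketch:

```python
def percent_excess_bmi_loss(baseline_bmi: float, current_bmi: float,
                            reference_bmi: float = 25.0) -> float:
    """%EBMIL: weight lost as a share of 'excess' BMI above the
    reference. A reference of 25 is a common convention and an
    assumption here; the abstract does not state the formula."""
    return 100 * (baseline_bmi - current_bmi) / (baseline_bmi - reference_bmi)


# A patient at the cohort mean BMI of 43.9 who reaches BMI 32.4 has
# lost about 61% of excess BMI, near the sleeve-gastrectomy mean.
print(f"{percent_excess_bmi_loss(43.9, 32.4):.1f}%")  # ~60.8%
```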

Journal ArticleDOI
21 Aug 2018-JAMA
TL;DR: The USPSTF concludes with high certainty that the benefits of screening every 3 years with cytology alone in women aged 21 to 29 years substantially outweigh the harms and screening women younger than 21 years does not provide significant benefit.
Abstract: Importance The number of deaths from cervical cancer in the United States has decreased substantially since the implementation of widespread cervical cancer screening and has declined from 2.8 to 2.3 deaths per 100 000 women from 2000 to 2015. Objective To update the US Preventive Services Task Force (USPSTF) 2012 recommendation on screening for cervical cancer. Evidence Review The USPSTF reviewed the evidence on screening for cervical cancer, with a focus on clinical trials and cohort studies that evaluated screening with high-risk human papillomavirus (hrHPV) testing alone or hrHPV and cytology together (cotesting) compared with cervical cytology alone. The USPSTF also commissioned a decision analysis model to evaluate the age at which to begin and end screening, the optimal interval for screening, the effectiveness of different screening strategies, and related benefits and harms of different screening strategies. Findings Screening with cervical cytology alone, primary hrHPV testing alone, or cotesting can detect high-grade precancerous cervical lesions and cervical cancer. Screening women aged 21 to 65 years substantially reduces cervical cancer incidence and mortality. The harms of screening for cervical cancer in women aged 30 to 65 years are moderate. The USPSTF concludes with high certainty that the benefits of screening every 3 years with cytology alone in women aged 21 to 29 years substantially outweigh the harms. The USPSTF concludes with high certainty that the benefits of screening every 3 years with cytology alone, every 5 years with hrHPV testing alone, or every 5 years with both tests (cotesting) in women aged 30 to 65 years outweigh the harms. Screening women older than 65 years who have had adequate prior screening and women younger than 21 years does not provide significant benefit. Screening women who have had a hysterectomy with removal of the cervix for indications other than a high-grade precancerous lesion or cervical cancer provides no benefit. The USPSTF concludes with moderate to high certainty that screening women older than 65 years who have had adequate prior screening and are not otherwise at high risk for cervical cancer, screening women younger than 21 years, and screening women who have had a hysterectomy with removal of the cervix for indications other than a high-grade precancerous lesion or cervical cancer does not result in a positive net benefit. Conclusions and Recommendation The USPSTF recommends screening for cervical cancer every 3 years with cervical cytology alone in women aged 21 to 29 years. (A recommendation) The USPSTF recommends screening every 3 years with cervical cytology alone, every 5 years with hrHPV testing alone, or every 5 years with hrHPV testing in combination with cytology (cotesting) in women aged 30 to 65 years. (A recommendation) The USPSTF recommends against screening for cervical cancer in women younger than 21 years. (D recommendation) The USPSTF recommends against screening for cervical cancer in women older than 65 years who have had adequate prior screening and are not otherwise at high risk for cervical cancer. (D recommendation) The USPSTF recommends against screening for cervical cancer in women who have had a hysterectomy with removal of the cervix and do not have a history of a high-grade precancerous lesion or cervical cancer. (D recommendation)
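
The age-banded recommendations above amount to a small decision table. A sketch of that logic, assuming average-risk women as the abstract specifies (function name and return strings are illustrative):

```python
def uspstf_cervical_screening(age: int, adequate_prior_screening: bool,
                              hysterectomy_no_cervix: bool) -> str:
    """Decision logic as summarized in the abstract, for average-risk
    women only (no history of high-grade lesion or cervical cancer)."""
    if hysterectomy_no_cervix or age < 21:
        return "do not screen (D recommendation)"
    if age <= 29:
        return "cytology alone every 3 years (A recommendation)"
    if age <= 65:
        return ("cytology every 3 years, hrHPV alone every 5 years, or "
                "cotesting every 5 years (A recommendation)")
    if adequate_prior_screening:
        return "do not screen (D recommendation)"
    return "not covered by the summarized recommendations"


print(uspstf_cervical_screening(45, True, False))
```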

Journal ArticleDOI
06 Mar 2018-JAMA
TL;DR: Treatment with opioids was not superior to treatment with nonopioid medications for improving pain-related function over 12 months and results do not support initiation of opioid therapy for moderate to severe chronic back pain or hip or knee osteoarthritis pain.
Abstract: Importance Limited evidence is available regarding long-term outcomes of opioids compared with nonopioid medications for chronic pain. Objective To compare opioid vs nonopioid medications over 12 months on pain-related function, pain intensity, and adverse effects. Design, Setting, and Participants Pragmatic, 12-month, randomized trial with masked outcome assessment. Patients were recruited from Veterans Affairs primary care clinics from June 2013 through December 2015; follow-up was completed December 2016. Eligible patients had moderate to severe chronic back pain or hip or knee osteoarthritis pain despite analgesic use. Of 265 patients enrolled, 25 withdrew prior to randomization and 240 were randomized. Interventions Both interventions (opioid and nonopioid medication therapy) followed a treat-to-target strategy aiming for improved pain and function. Each intervention had its own prescribing strategy that included multiple medication options in 3 steps. In the opioid group, the first step was immediate-release morphine, oxycodone, or hydrocodone/acetaminophen. For the nonopioid group, the first step was acetaminophen (paracetamol) or a nonsteroidal anti-inflammatory drug. Medications were changed, added, or adjusted within the assigned treatment group according to individual patient response. Main Outcomes and Measures The primary outcome was pain-related function (Brief Pain Inventory [BPI] interference scale) over 12 months and the main secondary outcome was pain intensity (BPI severity scale). For both BPI scales (range, 0-10; higher scores = worse function or pain intensity), a 1-point improvement was clinically important. The primary adverse outcome was medication-related symptoms (patient-reported checklist; range, 0-19). Results Among 240 randomized patients (mean age, 58.3 years; women, 32 [13.0%]), 234 (97.5%) completed the trial. Groups did not significantly differ on pain-related function over 12 months (overall P = .58); mean 12-month BPI interference was 3.4 for the opioid group and 3.3 for the nonopioid group (difference, 0.1 [95% CI, −0.5 to 0.7]). Pain intensity was significantly better in the nonopioid group over 12 months (overall P = .03); mean 12-month BPI severity was 4.0 for the opioid group and 3.5 for the nonopioid group (difference, 0.5 [95% CI, 0.0 to 1.0]). Adverse medication-related symptoms were significantly more common in the opioid group over 12 months (overall P = .03); mean medication-related symptoms at 12 months were 1.8 in the opioid group and 0.9 in the nonopioid group (difference, 0.9 [95% CI, 0.3 to 1.5]). Conclusions and Relevance Treatment with opioids was not superior to treatment with nonopioid medications for improving pain-related function over 12 months. Results do not support initiation of opioid therapy for moderate to severe chronic back pain or hip or knee osteoarthritis pain. Trial Registration clinicaltrials.gov Identifier: NCT01583985

Journal ArticleDOI
16 Jan 2018-JAMA
TL;DR: Although gastric bypass compared with sleeve gastrectomy was associated with greater percentage excess weight loss at 5 years, the difference was not statistically significant, based on the prespecified equivalence margins.
Abstract: Importance Laparoscopic sleeve gastrectomy for treatment of morbid obesity has increased substantially despite the lack of long-term results compared with laparoscopic Roux-en-Y gastric bypass. Objective To determine whether laparoscopic sleeve gastrectomy and laparoscopic Roux-en-Y gastric bypass are equivalent for weight loss at 5 years in patients with morbid obesity. Design, Setting, and Participants The Sleeve vs Bypass (SLEEVEPASS) multicenter, multisurgeon, open-label, randomized clinical equivalence trial was conducted from March 2008 until June 2010 in Finland. The trial enrolled 240 morbidly obese patients aged 18 to 60 years, who were randomly assigned to sleeve gastrectomy or gastric bypass with a 5-year follow-up period (last follow-up, October 14, 2015). Interventions Laparoscopic sleeve gastrectomy (n = 121) or laparoscopic Roux-en-Y gastric bypass (n = 119). Main Outcomes and Measures The primary end point was weight loss evaluated by percentage excess weight loss. Prespecified equivalence margins for the clinical significance of weight loss differences between gastric bypass and sleeve gastrectomy were −9% to +9% excess weight loss. Secondary end points included resolution of comorbidities, improvement of quality of life (QOL), all adverse events (overall morbidity), and mortality. Results Among 240 patients randomized (mean age, 48 [SD, 9] years; mean baseline body mass index, 45.9 [SD, 6.0]; 69.6% women), 80.4% completed the 5-year follow-up. At baseline, 42.1% had type 2 diabetes, 34.6% dyslipidemia, and 70.8% hypertension. The estimated mean percentage excess weight loss at 5 years was 49% (95% CI, 45%-52%) after sleeve gastrectomy and 57% (95% CI, 53%-61%) after gastric bypass (difference, 8.2 percentage units [95% CI, 3.2%-13.2%], higher in the gastric bypass group) and did not meet criteria for equivalence. Complete or partial remission of type 2 diabetes was seen in 37% (n = 15/41) after sleeve gastrectomy and in 45% (n = 18/40) after gastric bypass (P > .99). Medication for dyslipidemia was discontinued in 47% (n = 14/30) after sleeve gastrectomy and 60% (n = 24/40) after gastric bypass (P = .15) and for hypertension in 29% (n = 20/68) and 51% (n = 37/73) (P = .02), respectively. There was no statistically significant difference in QOL between groups (P = .85) and no treatment-related mortality. At 5 years the overall morbidity rate was 19% (n = 23) for sleeve gastrectomy and 26% (n = 31) for gastric bypass (P = .19). Conclusions and Relevance Among patients with morbid obesity, use of laparoscopic sleeve gastrectomy compared with use of laparoscopic Roux-en-Y gastric bypass did not meet criteria for equivalence in terms of percentage excess weight loss at 5 years. Although gastric bypass compared with sleeve gastrectomy was associated with greater percentage excess weight loss at 5 years, the difference was not statistically significant, based on the prespecified equivalence margins. Trial Registration clinicaltrials.gov Identifier: NCT00793143
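
The equivalence call follows mechanically from the margins: equivalence required the entire 95% CI of the between-group difference to fall within ±9 percentage units. A one-line check using the numbers above:

```python
# SLEEVEPASS prespecified equivalence: the 95% CI of the difference
# in percentage excess weight loss must lie entirely within ±9.
margin_lo, margin_hi = -9.0, 9.0
ci_lo, ci_hi = 3.2, 13.2   # percentage units, bypass minus sleeve

equivalent = margin_lo <= ci_lo and ci_hi <= margin_hi
print(equivalent)  # False: 13.2 > 9, so equivalence was not met.
```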

Journal ArticleDOI
24 Jul 2018-JAMA
TL;DR: Advances in HIV prevention and treatment with antiretroviral drugs continue to improve clinical management and outcomes for individuals at risk for and living with HIV.
Abstract: Importance Antiretroviral therapy (ART) is the cornerstone of prevention and management of HIV infection. Objective To evaluate new data and treatments and incorporate this information into updated recommendations for initiating therapy, monitoring individuals starting therapy, changing regimens, and preventing HIV infection for individuals at risk. Evidence Review New evidence collected since the International Antiviral Society–USA 2016 recommendations via monthly PubMed and EMBASE literature searches up to April 2018; data presented at peer-reviewed scientific conferences. A volunteer panel of experts in HIV research and patient care considered these data and updated previous recommendations. Findings ART is recommended for virtually all HIV-infected individuals, as soon as possible after HIV diagnosis. Immediate initiation (eg, rapid start), if clinically appropriate, requires adequate staffing, specialized services, and careful selection of medical therapy. An integrase strand transfer inhibitor (InSTI) plus 2 nucleoside reverse transcriptase inhibitors (NRTIs) is generally recommended for initial therapy, with unique patient circumstances (eg, concomitant diseases and conditions, potential for pregnancy, cost) guiding the treatment choice. CD4 cell count, HIV RNA level, genotype, and other laboratory tests for general health and co-infections are recommended at specified points before and during ART. If a regimen switch is indicated, treatment history, tolerability, adherence, and drug resistance history should first be assessed; 2 or 3 active drugs are recommended for a new regimen. HIV testing is recommended at least once for anyone who has ever been sexually active and more often for individuals at ongoing risk for infection. Preexposure prophylaxis with tenofovir disoproxil fumarate/emtricitabine and appropriate monitoring is recommended for individuals at risk for HIV. Conclusions and Relevance Advances in HIV prevention and treatment with antiretroviral drugs continue to improve clinical management and outcomes for individuals at risk for and living with HIV.

Journal ArticleDOI
02 Jan 2018-JAMA
TL;DR: Although there is a paucity of clinical trial evidence to support specific postdischarge rehabilitation treatment, experts recommend referral to physical therapy to improve exercise capacity, strength, and independent completion of activities of daily living in the months after hospital discharge for sepsis.
Abstract: Importance Survival from sepsis has improved in recent years, resulting in an increasing number of patients who have survived sepsis treatment. Current sepsis guidelines do not provide guidance on posthospital care or recovery. Observations Each year, more than 19 million individuals develop sepsis, defined as a life-threatening acute organ dysfunction secondary to infection. Approximately 14 million survive to hospital discharge and their prognosis varies. Half of patients recover, one-third die during the following year, and one-sixth have severe persistent impairments. Impairments include development of an average of 1 to 2 new functional limitations (eg, inability to bathe or dress independently), a 3-fold increase in prevalence of moderate to severe cognitive impairment (from 6.1% before hospitalization to 16.7% after hospitalization), and a high prevalence of mental health problems, including anxiety (32% of patients who survive), depression (29%), or posttraumatic stress disorder (44%). About 40% of patients are rehospitalized within 90 days of discharge, often for conditions that are potentially treatable in the outpatient setting, such as infection (11.9%) and exacerbation of heart failure (5.5%). Compared with matched patients hospitalized for other diagnoses, those who survive sepsis are at increased risk of recurrent infection (11.9% vs 8.0%). Conclusions and Relevance In the months after hospital discharge for sepsis, management should focus on (1) identifying new physical, mental, and cognitive problems and referring for appropriate treatment, (2) reviewing and adjusting long-term medications, and (3) evaluating for treatable conditions that commonly result in hospitalization, such as infection, heart failure, renal failure, and aspiration. For patients with poor or declining health prior to sepsis who experience further deterioration after sepsis, it may be appropriate to focus on palliation of symptoms.

Journal ArticleDOI
10 Apr 2018-JAMA
TL;DR: P values and accompanying methods of statistical significance testing are creating challenges in biomedical science and other disciplines because they are misinterpreted, overtrusted, and misused and these misconceptions affect researchers, journals, readers, and users of research articles, and even the public who consume scientific information.
Abstract: P values and accompanying methods of statistical significance testing are creating challenges in biomedical science and other disciplines. The vast majority (96%) of articles that report P values in the abstract, full text, or both include some values of .05 or less.1 However, many of the claims that these reports highlight are likely false.2 Recognizing the major importance of the statistical significance conundrum, the American Statistical Association (ASA) published3 a statement on P values in 2016. The status quo is widely believed to be problematic, but how exactly to fix the problem is far more contentious. The contributors to the ASA statement also wrote 20 independent, accompanying commentaries focusing on different aspects and prioritizing different solutions. Another large coalition of 72 methodologists recently proposed4 a specific, simple move: lowering the routine P value threshold for claiming statistical significance from .05 to .005 for new discoveries. The proposal met with strong endorsement in some circles and concerns in others. P values are misinterpreted, overtrusted, and misused. The language of the ASA statement enables the dissection of these 3 problems. Multiple misinterpretations of P values exist, but the most common one is that they represent the “probability that the studied hypothesis is true.”3 A P value of .02 (2%) is wrongly considered to mean that the null hypothesis (eg, the drug is as effective as placebo) is 2% likely to be true and the alternative (eg, the drug is more effective than placebo) is 98% likely to be correct. Overtrust ensues when it is forgotten that “proper inference requires full reporting and transparency.”3 Better-looking (smaller) P values alone do not guarantee full reporting and transparency. In fact, smaller P values may hint to selective reporting and nontransparency. The most common misuse of the P value is to make “scientific conclusions and business or policy decisions” based on “whether a P value passes a specific threshold” even though “a P value, or statistical significance, does not measure the size of an effect or the importance of a result,” and “by itself, a P value does not provide a good measure of evidence.”3 These 3 major problems mean that passing a statistical significance threshold (traditionally P = .05) is wrongly equated with a finding or an outcome (eg, an association or a treatment effect) being true, valid, and worth acting on. These misconceptions affect researchers, journals, readers, and users of research articles, and even media and the public who consume scientific information. Most claims supported with P values slightly below .05 are probably false (ie, the claimed associations and treatment effects do not exist). Even among those claims that are true, few are worth acting on in medicine and health care. Lowering the threshold for claiming statistical significance is an old idea. Several scientific fields have carefully considered how low a P value should be for a research finding to have a sufficiently high chance of being true. For example, adoption of genome-wide significance thresholds (P < 5 × 10−8) in population genomics has made discovered associations highly replicable and these associations also appear consistently when tested in new populations. The human genome is very complex, but the extent of multiplicity of significance testing involved is known, the analyses are systematic and transparent, and a requirement for P < 5 × 10−8 can be cogently arrived at. 
However, for most other types of biomedical research, the multiplicity involved is unclear and the analyses are nonsystematic and nontransparent. For most observational exploratory research that lacks preregistered protocols and analysis plans, it is unclear how many analyses were performed and what various analytic paths were explored. Hidden multiplicity, nonsystematic exploration, and selective reporting may affect even experimental research and randomized trials. Even though it is now more common to have a preexisting protocol and statistical analysis plan and preregistration of the trial posted on a public database, there are still substantial degrees of freedom regarding how to analyze data and outcomes and what exactly to present. In addition, many studies in contemporary clinical investigation focus on smaller benefits or risks; therefore, the risk of various biases affecting the results increases. Moving the P value threshold from .05 to .005 will shift about one-third of the statistically significant results of past biomedical literature to the category of just “suggestive.”1 This shift is essential for those who believe (perhaps crudely) in black and white, significant or nonsignificant categorizations. For the vast majority of past observational research, this recategorization would be welcome. For example, mendelian randomization studies show that only a few past claims from observational studies with P < .05 represent causal relationships.5 Thus, the proposed reduction in the level for declaring statistical significance may dismiss mostly noise with relatively little loss of valuable information. For randomized trials, the proportion of true effects that emerge with P values in the window from .005 to .05 will be higher, perhaps the majority in several fields. However, most findings would not represent treatment effects that are large enough for outcomes that are serious enough to make them worthy of further action. Thus, the reduction in the P value threshold may largely do more good than harm, despite also removing an occasional true and useful treatment effect from the coveted significance zone. Regardless, the need for also focusing on the magnitude of all treatment effects and their uncertainty (such as with confidence intervals) cannot be overstated. Lowering the threshold of statistical significance is a temporizing measure. It would work as a dam that could …
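To make the misinterpretation concrete, the sketch below applies Bayes' rule to show why a result just significant at P = .02 is far from "98% likely to be correct." The prior probability of a true effect (10%) and the statistical power (80%) are illustrative assumptions, not figures from the article.

```python
# A minimal sketch (assumed prior and power; not from the article) of why
# P = .02 does not mean "2% probability the null hypothesis is true."
# We compute the false positive risk among significant results with Bayes' rule.

def prob_null_given_significant(prior_true, power, alpha):
    """P(null is true | result significant at level alpha)."""
    true_pos = power * prior_true            # real effects that reach significance
    false_pos = alpha * (1.0 - prior_true)   # null effects that reach significance
    return false_pos / (true_pos + false_pos)

# Assumed scenario: 1 in 10 tested hypotheses is actually true, power = 0.80.
for alpha in (0.05, 0.02, 0.005):
    p_null = prob_null_given_significant(prior_true=0.10, power=0.80, alpha=alpha)
    print(f"alpha = {alpha:<6} -> P(null | significant) ~ {p_null:.0%}")

# The genome-wide threshold follows the same logic via Bonferroni correction
# over roughly 1 million independent common variants (an approximation):
print(f"genome-wide threshold ~ 0.05 / 1e6 = {0.05 / 1e6:.0e}")
```

Under these assumptions, roughly 18% of results just significant at P = .02 are false positives; tightening the threshold to .005 cuts that to about 5%, which is the intuition behind the proposal.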

Journal ArticleDOI
16 Jan 2018-JAMA
TL;DR: Among patients with morbid obesity, there was no significant difference in excess BMI loss between laparoscopic sleeve gastrectomy and laparoscopic Roux-en-Y gastric bypass at 5 years of follow-up after surgery.
Abstract: Importance Sleeve gastrectomy is increasingly used in the treatment of morbid obesity, but its long-term outcome vs the standard Roux-en-Y gastric bypass procedure is unknown. Objective To determine whether there are differences between sleeve gastrectomy and Roux-en-Y gastric bypass in terms of weight loss, changes in comorbidities, increase in quality of life, and adverse events. Design, Setting, and Participants The Swiss Multicenter Bypass or Sleeve Study (SM-BOSS), a 2-group randomized trial, was conducted from January 2007 until November 2011 (last follow-up in March 2017). Of 3971 morbidly obese patients evaluated for bariatric surgery at 4 Swiss bariatric centers, 217 patients were enrolled and randomly assigned to sleeve gastrectomy or Roux-en-Y gastric bypass with a 5-year follow-up period. Interventions Patients were randomly assigned to undergo laparoscopic sleeve gastrectomy (n = 107) or laparoscopic Roux-en-Y gastric bypass (n = 110). Main Outcomes and Measures The primary end point was weight loss, expressed as percentage excess body mass index (BMI) loss. Exploratory end points were changes in comorbidities and adverse events. Results Among the 217 patients (mean age, 45.5 years; 72% women; mean BMI, 43.9), 205 (94.5%) completed the trial. Excess BMI loss was not significantly different at 5 years: for sleeve gastrectomy, 61.1%, vs Roux-en-Y gastric bypass, 68.3% (absolute difference, −7.18%; 95% CI, −14.30% to −0.06%; P = .22 after adjustment for multiple comparisons). Gastric reflux remission was observed more frequently after Roux-en-Y gastric bypass (60.4%) than after sleeve gastrectomy (25.0%). Gastric reflux worsened (more symptoms or increase in therapy) more often after sleeve gastrectomy (31.8%) than after Roux-en-Y gastric bypass (6.3%). The number of patients with reoperations or interventions was 16/101 (15.8%) after sleeve gastrectomy and 23/104 (22.1%) after Roux-en-Y gastric bypass. Conclusions and Relevance Among patients with morbid obesity, there was no significant difference in excess BMI loss between laparoscopic sleeve gastrectomy and laparoscopic Roux-en-Y gastric bypass at 5 years of follow-up after surgery. Trial Registration clinicaltrials.gov Identifier: NCT00356213
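For readers unfamiliar with the primary end point, percentage excess BMI loss is conventionally computed against a reference BMI of 25; the helper below is a minimal sketch of that convention (the trial's exact formula is an assumption, not quoted from the paper).

```python
# A small sketch of the conventional "percentage excess BMI loss" (%EBMIL)
# calculation, assuming the usual reference BMI of 25.

def excess_bmi_loss_pct(baseline_bmi: float, current_bmi: float,
                        reference_bmi: float = 25.0) -> float:
    """%EBMIL = (baseline - current) / (baseline - reference) * 100."""
    return (baseline_bmi - current_bmi) / (baseline_bmi - reference_bmi) * 100.0

# Illustrative patient: baseline BMI 43.9 (the trial mean) falling to 32.4
print(f"{excess_bmi_loss_pct(43.9, 32.4):.1f}% excess BMI loss")  # ~60.8%
```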

Journal ArticleDOI
11 Sep 2018-JAMA
TL;DR: Among patients with E coli or K pneumoniae bloodstream infection and ceftriaxone resistance, definitive treatment with piperacillin-tazobactam compared with meropenem did not result in a noninferior 30-day mortality, and the findings do not support use of piperacillin-tazobactam in this setting.
Abstract: Importance Extended-spectrum β-lactamases mediate resistance to third-generation cephalosporins (eg, ceftriaxone) in Escherichia coli and Klebsiella pneumoniae. Significant infections caused by these strains are usually treated with carbapenems, potentially selecting for carbapenem resistance. Piperacillin-tazobactam may be an effective “carbapenem-sparing” option to treat extended-spectrum β-lactamase producers. Objectives To determine whether definitive therapy with piperacillin-tazobactam is noninferior to meropenem (a carbapenem) in patients with bloodstream infection caused by ceftriaxone-nonsusceptible E coli or K pneumoniae. Design, Setting, and Participants Noninferiority, parallel-group, randomized clinical trial that included hospitalized patients enrolled from 26 sites in 9 countries from February 2014 to July 2017. Adult patients were eligible if they had at least 1 positive blood culture with E coli or Klebsiella spp testing nonsusceptible to ceftriaxone but susceptible to piperacillin-tazobactam. Of 1646 patients screened, 391 were included in the study. Interventions Patients were randomly assigned 1:1 to intravenous piperacillin-tazobactam, 4.5 g, every 6 hours (n = 188 participants) or meropenem, 1 g, every 8 hours (n = 191 participants) for a minimum of 4 days, up to a maximum of 14 days, with the total duration determined by the treating clinician. Main Outcomes and Measures The primary outcome was all-cause mortality at 30 days after randomization. A noninferiority margin of 5% was used. Results Among 379 patients (mean age, 66.5 years; 47.8% women) who were randomized appropriately, received at least 1 dose of study drug, and were included in the primary analysis population, 378 (99.7%) completed the trial and were assessed for the primary outcome. A total of 23 of 187 patients (12.3%) randomized to piperacillin-tazobactam met the primary outcome of mortality at 30 days compared with 7 of 191 (3.7%) randomized to meropenem (risk difference, 8.6% [1-sided 97.5% CI, −∞ to 14.5%]; P = .90 for noninferiority). Effects were consistent in an analysis of the per-protocol population. Nonfatal serious adverse events occurred in 5 of 188 patients (2.7%) in the piperacillin-tazobactam group and 3 of 191 (1.6%) in the meropenem group. Conclusions and Relevance Among patients with E coli or K pneumoniae bloodstream infection and ceftriaxone resistance, definitive treatment with piperacillin-tazobactam compared with meropenem did not result in a noninferior 30-day mortality. These findings do not support use of piperacillin-tazobactam in this setting. Trial Registration anzctr.org.au Identifiers: ACTRN12613000532707 and ACTRN12615000403538; ClinicalTrials.gov Identifier: NCT02176122
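A minimal sketch of the noninferiority logic follows, using a simple Wald normal approximation (an assumption; the trial's published 14.5% upper bound comes from its own CI method, so the sketch reproduces the comparison only approximately).

```python
# Sketch (normal approximation; not the trial's exact method) of the
# noninferiority comparison: the upper bound of the 1-sided 97.5% CI for
# the mortality risk difference is compared against the 5% margin.
from math import sqrt

def risk_difference_upper_bound(x1, n1, x2, n2, z=1.96):
    """Risk difference p1 - p2 and the upper limit of its 1-sided 97.5% CI (Wald)."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2), (p1 - p2) + z * se

# Trial counts: 23/187 deaths with piperacillin-tazobactam vs 7/191 with meropenem
rd, upper = risk_difference_upper_bound(23, 187, 7, 191)
margin = 0.05
print(f"risk difference = {rd:.1%}, upper CI bound ~ {upper:.1%}")
print("noninferior" if upper < margin else "noninferiority NOT shown")
```

The upper bound (about 14%) far exceeds the 5% margin, which is why noninferiority was not demonstrated.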

Journal ArticleDOI
06 Feb 2018-JAMA
TL;DR: Estimated prevalence of fetal alcohol spectrum disorders among first-graders in 4 US communities ranged from 1.1% to 5.0% using a conservative approach, which may represent more accurate US prevalence estimates than previous studies but may not be generalizable to all communities.
Abstract: Importance Fetal alcohol spectrum disorders are costly, life-long disabilities. Older data suggested the prevalence of the disorder in the United States was 10 per 1000 children; however, there are few current estimates based on larger, diverse US population samples. Objective To estimate the prevalence of fetal alcohol spectrum disorders, including fetal alcohol syndrome, partial fetal alcohol syndrome, and alcohol-related neurodevelopmental disorder, in 4 regions of the United States. Design, Setting, and Participants Active case ascertainment methods using a cross-sectional design were used to assess children for fetal alcohol spectrum disorders between 2010 and 2016. Children were systematically assessed in the 4 domains that contribute to the fetal alcohol spectrum disorder continuum: dysmorphic features, physical growth, neurobehavioral development, and prenatal alcohol exposure. The settings were 4 communities in the Rocky Mountain, Midwestern, Southeastern, and Pacific Southwestern regions of the United States. First-grade children and their parents or guardians were enrolled. Exposures Alcohol consumption during pregnancy. Main Outcomes and Measures Prevalence of fetal alcohol spectrum disorders in the 4 communities was the main outcome. Conservative estimates for the prevalence of the disorder and 95% CIs were calculated using the eligible first-grade population as the denominator. Weighted prevalences and 95% CIs were also estimated, accounting for the sampling schemes and using data restricted to children who received a full evaluation. Results A total of 6639 children were selected for participation from a population of 13 146 first-graders (boys, 51.9%; mean age, 6.7 years [SD, 0.41]; white maternal race, 79.3%). A total of 222 cases of fetal alcohol spectrum disorders were identified. The conservative prevalence estimates for fetal alcohol spectrum disorders ranged from 11.3 (95% CI, 7.8-15.8) to 50.0 (95% CI, 39.9-61.7) per 1000 children. The weighted prevalence estimates for fetal alcohol spectrum disorders ranged from 31.1 (95% CI, 16.1-54.0) to 98.5 (95% CI, 57.5-139.5) per 1000 children. Conclusions and Relevance Estimated prevalence of fetal alcohol spectrum disorders among first-graders in 4 US communities ranged from 1.1% to 5.0% using a conservative approach. These findings may represent more accurate US prevalence estimates than previous studies but may not be generalizable to all communities.
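The conservative estimates are, at heart, case counts over an eligible-population denominator with a binomial CI. The sketch below uses a Wilson score interval and illustrative counts chosen to mimic the low-end estimate; neither the interval method nor the counts are taken from the study.

```python
# Sketch (assumption: Wilson score interval, not the study's exact method)
# of converting case counts to a prevalence per 1000 children with a 95% CI.
from math import sqrt

def prevalence_per_1000(cases, population, z=1.96):
    p = cases / population
    denom = 1 + z**2 / population
    centre = (p + z**2 / (2 * population)) / denom
    half = z * sqrt(p * (1 - p) / population + z**2 / (4 * population**2)) / denom
    return 1000 * p, 1000 * (centre - half), 1000 * (centre + half)

# Illustrative: 25 identified cases among an eligible population of 2200 first-graders
est, lo, hi = prevalence_per_1000(25, 2200)
print(f"{est:.1f} per 1000 (95% CI, {lo:.1f}-{hi:.1f})")  # ~11.4 (7.7-16.7)
```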

Journal ArticleDOI
18 Dec 2018-JAMA
TL;DR: The ubiquitous social media landscape has created an information ecosystem populated by a cacophony of opinion, true and false information, and an unprecedented quantity of data on many topics, which can have adverse effects on public health.
Abstract: The ubiquitous social media landscape has created an information ecosystem populated by a cacophony of opinion, true and false information, and an unprecedented quantity of data on many topics. Policy makers and the social media industry grapple with the challenge of curbing fake news, disinformation, and hate speech; and the field of medicine is similarly confronted with the spread of false, inaccurate, or incomplete health information.1 From the discourse on the latest tobacco products, alcohol, and alternative therapies to skepticism about medical guidelines, misinformation on social media can have adverse effects on public health. For example, the social media rumors circulating during the Ebola outbreak in 2014 created hostility toward health workers, posing challenges to efforts to control the epidemic.2 Another example is the increasingly prevalent antivaccine social media posts that seemingly legitimize debate about vaccine safety and could be contributing to reductions in vaccination rates and increases in vaccine-preventable disease.3 The spread of health-related misinformation is exacerbated by information silos and echo chamber effects. Social media feeds are personally curated and tailored to individual beliefs, partisan bias, and identity. Consequently, information silos are created in which the likelihood for exchange of differing viewpoints …

Journal ArticleDOI
20 Feb 2018-JAMA
TL;DR: There was no significant difference in weight change between a healthy low-fat diet vs ahealthy low-carbohydrate diet, and neither genotype pattern nor baseline insulin secretion was associated with the dietary effects on weight loss in this 12-month weight loss study.
Abstract: Importance Dietary modification remains key to successful weight loss. Yet, no one dietary strategy is consistently superior to others for the general population. Previous research suggests genotype or insulin-glucose dynamics may modify the effects of diets. Objective To determine the effect of a healthy low-fat (HLF) diet vs a healthy low-carbohydrate (HLC) diet on weight change and if genotype pattern or insulin secretion are related to the dietary effects on weight loss. Design, Setting, and Participants The Diet Intervention Examining The Factors Interacting with Treatment Success (DIETFITS) randomized clinical trial included 609 adults aged 18 to 50 years without diabetes with a body mass index between 28 and 40. The trial enrollment was from January 29, 2013, through April 14, 2015; the date of final follow-up was May 16, 2016. Participants were randomized to the 12-month HLF or HLC diet. The study also tested whether 3 single-nucleotide polymorphism multilocus genotype responsiveness patterns or insulin secretion (INS-30; blood concentration of insulin 30 minutes after a glucose challenge) were associated with weight loss. Interventions Health educators delivered the behavior modification intervention to HLF (n = 305) and HLC (n = 304) participants via 22 diet-specific small group sessions administered over 12 months. The sessions focused on ways to achieve the lowest fat or carbohydrate intake that could be maintained long-term and emphasized diet quality. Main Outcomes and Measures Primary outcome was 12-month weight change and determination of whether there were significant interactions among diet type and genotype pattern, diet and insulin secretion, and diet and weight loss. Results Among 609 participants randomized (mean age, 40 [SD, 7] years; 57% women; mean body mass index, 33 [SD, 3]; 244 [40%] had a low-fat genotype; 180 [30%] had a low-carbohydrate genotype; mean baseline INS-30, 93 μIU/mL), 481 (79%) completed the trial. In the HLF vs HLC diets, respectively, the mean 12-month macronutrient distributions were 48% vs 30% for carbohydrates, 29% vs 45% for fat, and 21% vs 23% for protein. Weight change at 12 months was −5.3 kg for the HLF diet vs −6.0 kg for the HLC diet (mean between-group difference, 0.7 kg [95% CI, −0.2 to 1.6 kg]). There was no significant diet-genotype pattern interaction (P = .20) or diet-insulin secretion (INS-30) interaction (P = .47) with 12-month weight loss. There were 18 adverse events or serious adverse events that were evenly distributed across the 2 diet groups. Conclusions and Relevance In this 12-month weight loss diet study, there was no significant difference in weight change between a healthy low-fat diet vs a healthy low-carbohydrate diet, and neither genotype pattern nor baseline insulin secretion was associated with the dietary effects on weight loss. In the context of these 2 common weight loss diet approaches, neither of the 2 hypothesized predisposing factors was helpful in identifying which diet was better for whom. Trial Registration clinicaltrials.gov Identifier: NCT01826591
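The headline comparison is a between-group difference in mean weight change with a 95% CI. The sketch below shows the arithmetic with a simple normal approximation and an assumed common SD of 5.5 kg; the trial's actual mixed-model analysis and raw SDs are not reproduced here.

```python
# Sketch (normal-approximation CI with assumed SDs; the trial's model differs)
# of the between-group difference in mean 12-month weight change, HLF vs HLC.
from math import sqrt

def diff_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.96):
    diff = mean1 - mean2
    se = sqrt(sd1**2 / n1 + sd2**2 / n2)
    return diff, diff - z * se, diff + z * se

# Illustrative inputs: group means from the abstract, SDs assumed ~5.5 kg
diff, lo, hi = diff_ci(-5.3, 5.5, 305, -6.0, 5.5, 304)
print(f"difference = {diff:.1f} kg (95% CI, {lo:.1f} to {hi:.1f})")
```

With these assumed SDs the sketch happens to reproduce the reported interval (0.7 kg; −0.2 to 1.6 kg), and the key point is visible directly: the CI spans zero, so no significant difference.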

Journal ArticleDOI
06 Feb 2018-JAMA
TL;DR: The SPIRIT-PRO guidelines provide recommendations for items that should be addressed and included in clinical trial protocols in which PROs are a primary or key secondary outcome and improved design of clinical trials including PROs could help ensure high-quality data that may inform patient-centered care.
Abstract: Importance Patient-reported outcome (PRO) data from clinical trials can provide valuable evidence to inform shared decision making, labeling claims, clinical guidelines, and health policy; however, the PRO content of clinical trial protocols is often suboptimal. The SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) statement was published in 2013 and aims to improve the completeness of trial protocols by providing evidence-based recommendations for the minimum set of items to be addressed, but it does not provide PRO-specific guidance. Objective To develop international, consensus-based, PRO-specific protocol guidance (the SPIRIT-PRO Extension). Design, Setting, and Participants The SPIRIT-PRO Extension was developed following the Enhancing Quality and Transparency of Health Research (EQUATOR) Network’s methodological framework for guideline development. This included (1) a systematic review of existing PRO-specific protocol guidance to generate a list of potential PRO-specific protocol items (published in 2014); (2) refinements to the list and removal of duplicate items by the International Society for Quality of Life Research (ISOQOL) Protocol Checklist Taskforce; (3) an international stakeholder survey of clinical trial research personnel, PRO methodologists, health economists, psychometricians, patient advocates, funders, industry representatives, journal editors, policy makers, ethicists, and researchers responsible for evidence synthesis (distributed by 38 international partner organizations in October 2016); (4) an international Delphi exercise (n = 137 invited; October 2016 to February 2017); and (5) a consensus meeting (n = 30 invited; May 2017). Prior to voting, consensus meeting participants were informed of the results of the Delphi exercise and given data from structured reviews evaluating the PRO protocol content of 3 defined samples of trial protocols. Results The systematic review identified 162 PRO-specific protocol recommendations from 54 sources. The ISOQOL Taskforce (n = 21) reduced this to 56 items, which were considered by 138 international stakeholder survey participants and 99 Delphi panelists. The final wording of the SPIRIT-PRO Extension was agreed on at a consensus meeting (n = 29 participants) and reviewed by an external group of experts during a consultation period. Eleven extensions and 5 elaborations to the SPIRIT 2013 checklist were recommended for inclusion in clinical trial protocols in which PROs are a primary or key secondary outcome. Extension items focused on PRO-specific issues relating to the trial rationale, objectives, eligibility criteria, concepts used to evaluate the intervention, time points for assessment, PRO instrument selection and measurement properties, data collection plan, translation to other languages, proxy completion, strategies to minimize missing data, and whether PRO data will be monitored during the study to inform clinical care. Conclusions and Relevance The SPIRIT-PRO guidelines provide recommendations for items that should be addressed and included in clinical trial protocols in which PROs are a primary or key secondary outcome. Improved design of clinical trials including PROs could help ensure high-quality data that may inform patient-centered care.

Journal ArticleDOI
01 May 2018-JAMA
TL;DR: Antiviral treatment with either pegylated interferon or a nucleos(t)ide analogue (lamivudine, adefovir, entecavir, tenofovir disoproxil, or tenofovir alafenamide) should be offered to patients with chronic HBV infection and liver inflammation in an effort to reduce progression of liver disease.
Abstract: Importance More than 240 million individuals worldwide are infected with chronic hepatitis B virus (HBV). Among individuals with chronic HBV infection who are untreated, 15% to 40% progress to cirrhosis, which may lead to liver failure and liver cancer. Observations Pegylated interferon and nucleos(t)ide analogues (lamivudine, adefovir, entecavir, tenofovir disoproxil, and tenofovir alafenamide) suppress HBV DNA replication and improve liver inflammation and fibrosis. Long-term viral suppression is associated with regression of liver fibrosis and reduced risk of hepatocellular carcinoma in cohort studies. The cure (defined as hepatitis B surface antigen loss with undetectable HBV DNA) rates after treatment remain low (3%-7% with pegylated interferon and 1%-12% with nucleos[t]ide analogue therapy). Pegylated interferon therapy can be completed in 48 weeks and is not associated with the development of resistance; however, its use is limited by poor tolerability and adverse effects such as bone marrow suppression and exacerbation of existing neuropsychiatric symptoms such as depression. Newer agents (entecavir, tenofovir disoproxil, and tenofovir alafenamide) may be associated with a significantly reduced risk of drug resistance compared with older agents (lamivudine and adefovir) and should be considered as the first-line treatment. Conclusions and Relevance Antiviral treatment with either pegylated interferon or a nucleos(t)ide analogue (lamivudine, adefovir, entecavir, tenofovir disoproxil, or tenofovir alafenamide) should be offered to patients with chronic HBV infection and liver inflammation in an effort to reduce progression of liver disease. Nucleos(t)ide analogues should be considered as first-line therapy. Because cure rates are low, most patients will require therapy indefinitely.

Journal ArticleDOI
18 Sep 2018-JAMA
TL;DR: The purpose of this Viewpoint is to give health care professionals an intuitive understanding of the technology underlying deep learning, used on billions of digital devices for complex tasks such as speech recognition, image interpretation, and language translation.
Abstract: Widespread application of artificial intelligence in health care has been anticipated for half a century. For most of that time, the dominant approach to artificial intelligence was inspired by logic: researchers assumed that the essence of intelligence was manipulating symbolic expressions, using rules of inference. This approach produced expert systems and graphical models that attempted to automate the reasoning processes of experts. In the last decade, however, a radically different approach to artificial intelligence, called deep learning, has produced major breakthroughs and is now used on billions of digital devices for complex tasks such as speech recognition, image interpretation, and language translation. The purpose of this Viewpoint is to give health care professionals an intuitive understanding of the technology underlying deep learning. In an accompanying Viewpoint, Naylor1 outlines some of the factors propelling adoption of this technology in medicine and health care.
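As a concrete illustration of what "deep learning" means mechanically, the toy sketch below trains a two-layer network by gradient descent to fit XOR, a function no single linear rule can express. It is illustrative only; it is not taken from the Viewpoint and has no clinical content.

```python
# Toy sketch of the core deep-learning computation: layers of weighted sums
# passed through nonlinearities, with weights adjusted by gradient descent
# rather than hand-written rules of inference.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)       # hidden layer: learned feature detectors
    out = sigmoid(h @ W2 + b2)     # output layer: probability of class 1
    grad_out = out - y             # gradient of cross-entropy loss at the output
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * (1 - h**2)   # backpropagate through tanh
    W2 -= 0.1 * grad_W2; b2 -= 0.1 * grad_out.sum(0)
    W1 -= 0.1 * (X.T @ grad_h); b1 -= 0.1 * grad_h.sum(0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```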

Journal ArticleDOI
18 Dec 2018-JAMA
TL;DR: Evidence from high-quality studies showed that opioid use was associated with statistically significant but small improvements in pain and physical functioning, and increased risk of vomiting compared with placebo, and Comparisons of opioids with nonopioid alternatives suggested that the benefit for pain and functioning may be similar.
Abstract: Importance Harms and benefits of opioids for chronic noncancer pain remain unclear. Objective To systematically review randomized clinical trials (RCTs) of opioids for chronic noncancer pain. Data Sources and Study Selection The databases of CENTRAL, CINAHL, EMBASE, MEDLINE, AMED, and PsycINFO were searched from inception to April 2018 for RCTs of opioids for chronic noncancer pain vs any nonopioid control. Data Extraction and Synthesis Paired reviewers independently extracted data. The analyses used random-effects models and the Grading of Recommendations Assessment, Development and Evaluation to rate the quality of the evidence. Main Outcomes and Measures The primary outcomes were pain intensity (score range, 0-10 cm on a visual analog scale for pain; lower is better and the minimally important difference [MID] is 1 cm), physical functioning (score range, 0-100 points on the 36-item Short Form physical component score [SF-36 PCS]; higher is better and the MID is 5 points), and incidence of vomiting. Results Ninety-six RCTs including 26 169 participants (61% female; median age, 58 years [interquartile range, 51-61 years]) were included. Of the included studies, there were 25 trials of neuropathic pain, 32 trials of nociceptive pain, 33 trials of central sensitization (pain present in the absence of tissue damage), and 6 trials of mixed types of pain. Compared with placebo, opioid use was associated with reduced pain (weighted mean difference [WMD], −0.69 cm [95% CI, −0.82 to −0.56 cm] on a 10-cm visual analog scale for pain; modeled risk difference for achieving the MID, 11.9% [95% CI, 9.7% to 14.1%]), improved physical functioning (WMD, 2.04 points [95% CI, 1.41 to 2.68 points] on the 100-point SF-36 PCS; modeled risk difference for achieving the MID, 8.5% [95% CI, 5.9% to 11.2%]), and increased vomiting (5.9% with opioids vs 2.3% with placebo for trials that excluded patients with adverse events during a run-in period). Low- to moderate-quality evidence suggested similar associations of opioids with improvements in pain and physical functioning compared with nonsteroidal anti-inflammatory drugs (pain: WMD, −0.60 cm [95% CI, −1.54 to 0.34 cm]; physical functioning: WMD, −0.90 points [95% CI, −2.69 to 0.89 points]), tricyclic antidepressants (pain: WMD, −0.13 cm [95% CI, −0.99 to 0.74 cm]; physical functioning: WMD, −5.31 points [95% CI, −13.77 to 3.14 points]), and anticonvulsants (pain: WMD, −0.90 cm [95% CI, −1.65 to −0.14 cm]; physical functioning: WMD, 0.45 points [95% CI, −5.77 to 6.66 points]). Conclusions and Relevance In this meta-analysis of RCTs of patients with chronic noncancer pain, evidence from high-quality studies showed that opioid use was associated with statistically significant but small improvements in pain and physical functioning, and increased risk of vomiting compared with placebo. Comparisons of opioids with nonopioid alternatives suggested that the benefit for pain and functioning may be similar, although the evidence was from studies of only low to moderate quality.
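The pooled weighted mean differences come from random-effects models. The sketch below shows one standard variant (DerSimonian-Laird) run on made-up trial-level effects; the review's exact model settings and data are not reproduced here.

```python
# Sketch of DerSimonian-Laird random-effects pooling (an assumption about
# the variant used): trial-level mean differences and standard errors are
# combined into a weighted mean difference with a 95% CI.
from math import sqrt

def random_effects_wmd(effects, ses):
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed)**2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-trial variance
    w_re = [1 / (se**2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se_pooled = sqrt(1 / sum(w_re))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Illustrative trial-level pain differences (cm on a 10-cm VAS), invented for the sketch
effects = [-0.8, -0.5, -0.9, -0.4, -0.7]
ses = [0.20, 0.15, 0.25, 0.18, 0.22]
wmd, lo, hi = random_effects_wmd(effects, ses)
print(f"WMD = {wmd:.2f} cm (95% CI, {lo:.2f} to {hi:.2f})")
```

Judging the pooled WMD against the stated minimally important difference (1 cm on the pain scale) is what lets the review call the improvements statistically significant but small.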

Journal ArticleDOI
06 Mar 2018-JAMA
TL;DR: A diagnostic approach that uses ultrasound and, when indicated, fine-needle aspiration biopsy and molecular testing, facilitates a personalized, risk-based protocol that promotes high-quality care and minimizes cost and unnecessary testing.
Abstract: Importance Thyroid nodules are common, being detected in up to 65% of the general population. This is likely due to the increased use of diagnostic imaging for purposes unrelated to the thyroid. Most thyroid nodules are benign, clinically insignificant, and safely managed with a surveillance program. The main goal of initial and long-term follow-up is identification of the small subgroup of nodules that harbor a clinically significant cancer (≈10%), cause compressive symptoms (≈5%), or progress to functional disease (≈5%). Observations Thyroid function testing and ultrasonographic characteristics guide the initial management of thyroid nodules. Certain ultrasound features, such as a cystic or spongiform appearance, suggest a benign process that does not require additional testing. Suspicious sonographic patterns including solid composition, hypoechogenicity, irregular margins, and microcalcifications should prompt cytological evaluation. Additional diagnostic procedures, such as molecular testing, are indicated only in selected cases, such as indeterminate cytology (≈20%-30% of all biopsies). The initial risk estimate, derived from ultrasound and, if performed, cytology report, should determine the need for treatment and the type, frequency, and length of subsequent follow-up. Management includes simple observation, local treatments, and surgery and should be based on the estimated risk of malignancy and the presence and severity of compressive symptoms. Conclusions and Relevance Most thyroid nodules are benign. A diagnostic approach that uses ultrasound and, when indicated, fine-needle aspiration biopsy and molecular testing, facilitates a personalized, risk-based protocol that promotes high-quality care and minimizes cost and unnecessary testing.
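The review's diagnostic approach is essentially a triage rule: benign sonographic patterns exit to surveillance, while suspicious patterns above a size threshold proceed to biopsy. The sketch below encodes that flow with illustrative thresholds, not the article's exact criteria.

```python
# Schematic sketch (simplified; thresholds are illustrative assumptions) of
# risk-based triage: sonographic pattern plus nodule size guide the decision.

def triage(nodule):
    if nodule["composition"] in ("cystic", "spongiform"):
        return "benign pattern: no biopsy, routine surveillance"
    suspicious = (
        nodule["composition"] == "solid"
        and (nodule["hypoechoic"] or nodule["irregular_margins"]
             or nodule["microcalcifications"])
    )
    if suspicious and nodule["size_cm"] >= 1.0:
        return "suspicious pattern: fine-needle aspiration biopsy"
    return "intermediate pattern: follow-up ultrasound"

print(triage({"composition": "solid", "hypoechoic": True,
              "irregular_margins": False, "microcalcifications": True,
              "size_cm": 1.4}))
```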

Journal ArticleDOI
19 Jun 2018-JAMA
TL;DR: A case-control study of 3030 patients diagnosed as having pancreatic cancer and enrolled in a Mayo Clinic registry between October 12, 2000, and March 31, 2016, examined whether inherited germline mutations in cancer predisposition genes are associated with increased risks of pancreatic cancer.
Abstract: Importance Individuals genetically predisposed to pancreatic cancer may benefit from early detection. Genes that predispose to pancreatic cancer and the risks of pancreatic cancer associated with mutations in these genes are not well defined. Objective To determine whether inherited germline mutations in cancer predisposition genes are associated with increased risks of pancreatic cancer. Design, Setting, and Participants Case-control analysis to identify pancreatic cancer predisposition genes; longitudinal analysis of patients with pancreatic cancer for prognosis. The study included 3030 adults diagnosed as having pancreatic cancer and enrolled in a Mayo Clinic registry between October 12, 2000, and March 31, 2016, with last follow-up on June 22, 2017. Reference controls were 123 136 individuals with exome sequence data in the public Genome Aggregation Database and 53 105 in the Exome Aggregation Consortium database. Exposures Individuals were classified based on carrying a deleterious mutation in cancer predisposition genes and having a personal or family history of cancer. Main Outcomes and Measures Germline mutations in coding regions of 21 cancer predisposition genes were identified by sequencing of products from a custom multiplex polymerase chain reaction–based panel; associations of genes with pancreatic cancer were assessed by comparing frequency of mutations in genes of pancreatic cancer patients with those of reference controls. Results Comparing 3030 case patients with pancreatic cancer (43.2% female; 95.6% non-Hispanic white; mean age at diagnosis, 65.3 [SD, 10.7] years) with reference controls, significant associations were observed between pancreatic cancer and mutations in CDKN2A (0.3% of cases and 0.02% of controls; odds ratio [OR], 12.33; 95% CI, 5.43-25.61); TP53 (0.2% of cases and 0.02% of controls; OR, 6.70; 95% CI, 2.52-14.95); MLH1 (0.13% of cases and 0.02% of controls; OR, 6.66; 95% CI, 1.94-17.53); BRCA2 (1.9% of cases and 0.3% of controls; OR, 6.20; 95% CI, 4.62-8.17); ATM (2.3% of cases and 0.37% of controls; OR, 5.71; 95% CI, 4.38-7.33); and BRCA1 (0.6% of cases and 0.2% of controls; OR, 2.58; 95% CI, 1.54-4.05). Conclusions and Relevance In this case-control study, mutations in 6 genes associated with pancreatic cancer were found in 5.5% of all pancreatic cancer patients, including 7.9% of patients with a family history of pancreatic cancer and 5.2% of patients without a family history of pancreatic cancer. Further research is needed for replication in other populations.
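The reported associations are case-control odds ratios. The sketch below reconstructs one (BRCA2) from approximate counts implied by the reported percentages, with a standard log-scale CI; the counts are illustrative, which is why the result (about 6.5) only approximates the published OR of 6.20.

```python
# Worked sketch of a case-control odds ratio with a 95% CI (Woolf/log method).
# Counts are illustrative approximations from the reported percentages,
# not the study's exact tallies.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = carriers/non-carriers among cases; c/d = same among controls."""
    or_ = (a / b) / (c / d)
    se = sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# ~1.9% BRCA2 carriers among 3030 cases vs ~0.3% among ~123 000 controls
a, b = 58, 3030 - 58
c, d = 369, 123_000 - 369
or_, lo, hi = odds_ratio_ci(a, b, c, d)
print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```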


Journal ArticleDOI
28 Aug 2018-JAMA
TL;DR: Naltrexone, which can be given once daily, reduces the likelihood of a return to any drinking by 5% and binge-drinking risk by 10%.
Abstract: Importance Alcohol consumption is associated with 88 000 US deaths annually. Although routine screening for heavy alcohol use can identify patients with alcohol use disorder (AUD) and has been recommended, only 1 in 6 US adults report ever having been asked by a health professional about their drinking behavior. Alcohol use disorder, a problematic pattern of alcohol use accompanied by clinically significant impairment or distress, is present in up to 14% of US adults during a 1-year period, although only about 8% of affected individuals are treated in an alcohol treatment facility. Observations Four medications are approved by the US Food and Drug Administration to treat AUD: disulfiram, naltrexone (oral and long-acting injectable formulations), and acamprosate. However, patients with AUD most commonly receive counseling. Medications are prescribed to less than 9% of patients who are likely to benefit from them, given evidence that they exert clinically meaningful effects and their inclusion in clinical practice guidelines as first-line treatments for moderate to severe AUD. Naltrexone, which can be given once daily, reduces the likelihood of a return to any drinking by 5% and binge-drinking risk by 10%. Randomized clinical trials also show that some medications approved for other indications, including seizure disorder (eg, topiramate), are efficacious in treating AUD. Currently, there is not sufficient evidence to support the use of pharmacogenetics to personalize AUD treatments. Conclusions and Relevance Alcohol consumption is associated with a high rate of morbidity and mortality, and heavy alcohol use is the major risk factor for AUD. Simple, valid screening methods can be used to identify patients with heavy alcohol use, who can then be evaluated for the presence of an AUD. Patients receiving a diagnosis of the disorder should be given brief counseling and prescribed a first-line medication (eg, naltrexone) or referred for a more intensive psychosocial intervention.
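Reading the naltrexone figures as absolute risk reductions (an assumption about how the percentages are meant), they translate directly into numbers needed to treat:

```python
# Tiny sketch: treating the reported 5% and 10% naltrexone effects as
# absolute risk reductions (an assumption) and converting them to NNTs.
for outcome, arr in [("return to any drinking", 0.05),
                     ("return to binge drinking", 0.10)]:
    print(f"{outcome}: NNT ~ {1 / arr:.0f}")  # ~20 and ~10 patients
```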