
Showing papers by "University of California, Irvine" published in 2018


Journal ArticleDOI
Gregory A. Roth1, Gregory A. Roth2, Degu Abate3, Kalkidan Hassen Abate4  +1025 moreInstitutions (333)
TL;DR: Non-communicable diseases comprised the greatest fraction of deaths, contributing to 73·4% (95% uncertainty interval [UI] 72·5–74·1) of total deaths in 2017, while communicable, maternal, neonatal, and nutritional causes accounted for 18·6% (17·9–19·6), and injuries 8·0% (7·7–8·2).

5,211 citations


Journal ArticleDOI
TL;DR: In this paper, the authors assess the burden of 29 cancer groups over time to provide a framework for policy discussion, resource allocation, and research focus, and evaluate cancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs) for 195 countries and territories by age and sex using the Global Burden of Disease study estimation methods.
Abstract: Importance The increasing burden due to cancer and other noncommunicable diseases poses a threat to human development, which has resulted in global political commitments reflected in the Sustainable Development Goals as well as the World Health Organization (WHO) Global Action Plan on Non-Communicable Diseases. To determine if these commitments have resulted in improved cancer control, quantitative assessments of the cancer burden are required. Objective To assess the burden for 29 cancer groups over time to provide a framework for policy discussion, resource allocation, and research focus. Evidence Review Cancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs) were evaluated for 195 countries and territories by age and sex using the Global Burden of Disease study estimation methods. Levels and trends were analyzed over time, as well as by the Sociodemographic Index (SDI). Changes in incident cases were categorized by changes due to epidemiological vs demographic transition. Findings In 2016, there were 17.2 million cancer cases worldwide and 8.9 million deaths. Cancer cases increased by 28% between 2006 and 2016. The smallest increase was seen in high SDI countries. Globally, population aging contributed 17%; population growth, 12%; and changes in age-specific rates, −1% to this change. The most common incident cancer globally for men was prostate cancer (1.4 million cases). The leading cause of cancer deaths and DALYs was tracheal, bronchus, and lung cancer (1.2 million deaths and 25.4 million DALYs). For women, the most common incident cancer and the leading cause of cancer deaths and DALYs was breast cancer (1.7 million incident cases, 535 000 deaths, and 14.9 million DALYs). In 2016, cancer caused 213.2 million DALYs globally for both sexes combined. Between 2006 and 2016, the average annual age-standardized incidence rates for all cancers combined increased in 130 of 195 countries or territories, and the average annual age-standardized death rates decreased within that timeframe in 143 of 195 countries or territories. Conclusions and Relevance Large disparities exist between countries in cancer incidence, deaths, and associated disability. Scaling up cancer prevention and ensuring universal access to cancer care are required for health equity and to fulfill the global commitments for noncommunicable disease and cancer control.

4,621 citations


Journal ArticleDOI
Jeffrey D. Stanaway1, Ashkan Afshin1, Emmanuela Gakidou1, Stephen S Lim1  +1050 moreInstitutions (346)
TL;DR: This study estimated levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs) by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017 and explored the relationship between development and risk exposure.

2,910 citations


Journal ArticleDOI
Daniel J. Benjamin1, James O. Berger2, Magnus Johannesson1, Magnus Johannesson3, Brian A. Nosek4, Brian A. Nosek5, Eric-Jan Wagenmakers6, Richard A. Berk7, Kenneth A. Bollen8, Björn Brembs9, Lawrence D. Brown7, Colin F. Camerer10, David Cesarini11, David Cesarini12, Christopher D. Chambers13, Merlise A. Clyde2, Thomas D. Cook14, Thomas D. Cook15, Paul De Boeck16, Zoltan Dienes17, Anna Dreber3, Kenny Easwaran18, Charles Efferson19, Ernst Fehr20, Fiona Fidler21, Andy P. Field17, Malcolm R. Forster22, Edward I. George7, Richard Gonzalez23, Steven N. Goodman24, Edwin J. Green25, Donald P. Green26, Anthony G. Greenwald27, Jarrod D. Hadfield28, Larry V. Hedges15, Leonhard Held20, Teck-Hua Ho29, Herbert Hoijtink30, Daniel J. Hruschka31, Kosuke Imai32, Guido W. Imbens24, John P. A. Ioannidis24, Minjeong Jeon33, James Holland Jones34, Michael Kirchler35, David Laibson36, John A. List37, Roderick J. A. Little23, Arthur Lupia23, Edouard Machery38, Scott E. Maxwell39, Michael A. McCarthy21, Don A. Moore40, Stephen L. Morgan41, Marcus R. Munafò42, Shinichi Nakagawa43, Brendan Nyhan44, Timothy H. Parker45, Luis R. Pericchi46, Marco Perugini47, Jeffrey N. Rouder48, Judith Rousseau49, Victoria Savalei50, Felix D. Schönbrodt51, Thomas Sellke52, Betsy Sinclair53, Dustin Tingley36, Trisha Van Zandt16, Simine Vazire54, Duncan J. Watts55, Christopher Winship36, Robert L. Wolpert2, Yu Xie32, Cristobal Young24, Jonathan Zinman44, Valen E. Johnson1, Valen E. Johnson18 
University of Southern California1, Duke University2, Stockholm School of Economics3, Center for Open Science4, University of Virginia5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, New York University11, Research Institute of Industrial Economics12, Cardiff University13, Mathematica Policy Research14, Northwestern University15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false-positive findings.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.

1,586 citations
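A minimal simulation sketch (mine, not from the paper) of what the proposed threshold change does in practice: under a true null, roughly 5% of tests fall below 0.05 but only about 0.5% fall below 0.005. The sample sizes and the two-sample t-test setup are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_per_group = 10_000, 50

# Two-sample t-tests in which the null hypothesis is true (no group difference).
p_null = np.array([
    stats.ttest_ind(rng.normal(0, 1, n_per_group),
                    rng.normal(0, 1, n_per_group)).pvalue
    for _ in range(n_tests)
])

for alpha in (0.05, 0.005):
    print(f"alpha = {alpha}: fraction of null tests declared 'significant' = "
          f"{(p_null < alpha).mean():.4f}")
```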


Proceedings Article
25 Apr 2018
TL;DR: This work introduces a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions, and proposes an algorithm to efficiently compute these explanations for any black-box model with high probability guarantees.
Abstract: We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions. We propose an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees. We demonstrate the flexibility of anchors by explaining a myriad of different models for different domains and tasks. In a user study, we show that anchors enable users to predict how a model would behave on unseen instances with less effort and higher precision, as compared to existing linear explanations or no explanations.

1,450 citations
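A toy sketch of the core idea behind anchors (my own illustration, not the authors' code or the anchor-exp/alibi implementations): estimate the precision of a candidate rule by resampling the unanchored features from background data while holding the anchored features at the instance's values, then check how often the black-box prediction is unchanged. The model, data, and rules below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X_background = rng.normal(size=(1000, 4))                    # data to perturb from
black_box = lambda X: (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in "complex" model

def anchor_precision(x, anchored, n_samples=5000):
    """Fraction of perturbed samples (anchored features held at x's values)
    on which the model prediction matches its prediction for x."""
    idx = rng.integers(0, len(X_background), n_samples)
    perturbed = X_background[idx].copy()
    perturbed[:, anchored] = x[anchored]                     # hold the anchor fixed
    return (black_box(perturbed) == black_box(x[None])[0]).mean()

x = np.array([2.0, 1.5, -0.3, 0.7])
print(anchor_precision(x, anchored=[0, 1]))   # high precision: features 0 and 1 suffice
print(anchor_precision(x, anchored=[2]))      # low precision: feature 2 explains little
```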


Journal ArticleDOI
22 Jun 2018-Science
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.

1,357 citations


Journal ArticleDOI
Mary F. Feitosa1, Aldi T. Kraja1, Daniel I. Chasman2, Yun J. Sung1  +296 moreInstitutions (86)
18 Jun 2018-PLOS ONE
TL;DR: To provide insights into the role of alcohol consumption in the genetic architecture of hypertension, a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions is conducted.
Abstract: Heavy alcohol consumption is an established risk factor for hypertension; the mechanism by which alcohol consumption impacts blood pressure (BP) regulation remains unknown. We hypothesized that a genome-wide association study accounting for gene-alcohol consumption interaction for BP might identify additional BP loci and contribute to the understanding of alcohol-related BP regulation. We conducted a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions. In Stage 1, genome-wide discovery meta-analyses in ≈131K individuals across several ancestry groups yielded 3,514 SNVs (245 loci) with suggestive evidence of association (P < 1.0 × 10^-5). In Stage 2, these SNVs were tested for independent external replication in ≈440K individuals across multiple ancestries. We identified and replicated (at Bonferroni correction threshold) five novel BP loci (380 SNVs in 21 genes) and 49 previously reported BP loci (2,159 SNVs in 109 genes) in European ancestry, and in multi-ancestry meta-analyses (P < 5.0 × 10^-8). For African ancestry samples, we detected 18 potentially novel BP loci (P < 5.0 × 10^-8) in Stage 1 that warrant further replication. Additionally, correlated meta-analysis identified eight novel BP loci (11 genes). Several genes in these loci (e.g., PINX1, GATA4, BLK, FTO and GABBR2) have been previously reported to be associated with alcohol consumption. These findings provide insights into the role of alcohol consumption in the genetic architecture of hypertension.

1,218 citations
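A hedged, single-variant sketch of the kind of model underlying the study's joint test: regress blood pressure on genotype, alcohol intake, and their interaction, then jointly test the SNV main effect and the SNV-alcohol interaction (2 degrees of freedom). Variable names and the simulated data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "snp": rng.binomial(2, 0.3, n),        # additive genotype coding: 0/1/2 alleles
    "alcohol": rng.gamma(2.0, 1.0, n),     # illustrative alcohol intake measure
})
df["sbp"] = (120 + 0.5 * df.snp + 1.0 * df.alcohol
             + 0.8 * df.snp * df.alcohol + rng.normal(0, 10, n))

model = smf.ols("sbp ~ snp * alcohol", data=df).fit()
# Joint 2-df test of the SNV main effect and the SNV x alcohol interaction.
print(model.f_test("snp = 0, snp:alcohol = 0"))
```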


Book
27 Sep 2018
TL;DR: A broad range of illustrations is embedded throughout, including classical and modern results for covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, machine learning, compressed sensing, and sparse regression.
Abstract: High-dimensional probability offers insight into the behavior of random vectors, random matrices, random subspaces, and objects used to quantify uncertainty in high dimensions. Drawing on ideas from probability, analysis, and geometry, it lends itself to applications in mathematics, statistics, theoretical computer science, signal processing, optimization, and more. It is the first to integrate theory, key tools, and modern applications of high-dimensional probability. Concentration inequalities form the core, and it covers both classical results such as Hoeffding's and Chernoff's inequalities and modern developments such as the matrix Bernstein's inequality. It then introduces the powerful methods based on stochastic processes, including such tools as Slepian's, Sudakov's, and Dudley's inequalities, as well as generic chaining and bounds based on VC dimension. A broad range of illustrations is embedded throughout, including classical and modern results for covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, machine learning, compressed sensing, and sparse regression.

1,190 citations
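A quick numerical check (my own sketch, not from the book) of one classical result the blurb mentions: Hoeffding's inequality, P(|X̄ − μ| ≥ t) ≤ 2·exp(−2nt²) for i.i.d. samples bounded in [0, 1], compared against an empirical tail probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, trials = 200, 0.1, 20_000

# Empirical tail: fraction of sample means (n Bernoulli(0.5) draws each)
# deviating from the true mean 0.5 by at least t.
means = rng.binomial(1, 0.5, size=(trials, n)).mean(axis=1)
empirical = (np.abs(means - 0.5) >= t).mean()

bound = 2 * np.exp(-2 * n * t**2)
print(f"empirical tail  ~ {empirical:.4f}")
print(f"Hoeffding bound = {bound:.4f}")   # the bound should dominate the empirical tail
```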


Posted ContentDOI
Spyridon Bakas1, Mauricio Reyes, Andras Jakab2, Stefan Bauer3  +435 moreInstitutions (111)
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.

1,165 citations
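BraTS-style evaluation compares predicted and reference tumor sub-region masks with overlap metrics such as the Dice coefficient; below is a minimal sketch of that metric (my own, not the challenge's evaluation code), applied to toy 3D masks.

```python
import numpy as np

def dice(pred, ref, eps=1e-7):
    """Dice overlap between two binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

rng = np.random.default_rng(0)
ref = rng.random((64, 64, 64)) > 0.7            # toy reference sub-region mask
pred = ref ^ (rng.random(ref.shape) > 0.95)     # reference with some voxels flipped
print(f"Dice = {dice(pred, ref):.3f}")
```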


Journal ArticleDOI
TL;DR: Part II of this series introduces JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems.
Abstract: Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.

1,031 citations
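JASP itself is point-and-click, but the flavor of a Bayesian two-sample test can be sketched in code. The snippet below uses the rough BIC-based approximation to the Bayes factor (Wagenmakers-style), not the JZS default prior that JASP and the BayesFactor package actually use; the data are simulated.

```python
import numpy as np

def bf10_bic_approx(x, y):
    """Approximate Bayes factor for H1 (different means) vs H0 (equal means),
    via BF10 ~ exp((BIC0 - BIC1) / 2) with Gaussian likelihoods."""
    data = np.concatenate([x, y])
    n = len(data)
    rss0 = np.sum((data - data.mean()) ** 2)                          # H0: common mean
    rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)  # H1: two means
    bic0 = n * np.log(rss0 / n) + 2 * np.log(n)   # parameters: mean, variance
    bic1 = n * np.log(rss1 / n) + 3 * np.log(n)   # parameters: two means, variance
    return np.exp((bic0 - bic1) / 2)

rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 1, 40), rng.normal(0.5, 1, 40)
print(f"BF10 ~ {bf10_bic_approx(x, y):.2f}")      # values > 1 favor a group difference
```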


Journal ArticleDOI
TL;DR: In this paper, the authors present a comprehensive review of the data sources and estimation methods of 30 currently available global precipitation data sets, including gauge-based, satellite-related, and reanalysis data sets.
Abstract: In this paper, we present a comprehensive review of the data sources and estimation methods of 30 currently available global precipitation data sets, including gauge-based, satellite-related, and reanalysis data sets. We analyzed the discrepancies between the data sets from daily to annual timescales and found large differences in both the magnitude and the variability of precipitation estimates. The magnitude of annual precipitation estimates over global land deviated by as much as 300 mm/yr among the products. Reanalysis data sets had a larger degree of variability than the other types of data sets. The degree of variability in precipitation estimates also varied by region. Large differences in annual and seasonal estimates were found in tropical oceans, complex mountain areas, northern Africa, and some high-latitude regions. Overall, the variability associated with extreme precipitation estimates was slightly greater at lower latitudes than at higher latitudes. The reliability of precipitation data sets is mainly limited by the number and spatial coverage of surface stations, the satellite algorithms, and the data assimilation models. The inconsistencies described limit the capability of the products for climate monitoring, attribution, and model validation.
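A small sketch of the kind of product intercomparison the review describes: given annual precipitation series from several data sets (fabricated placeholders here), compute the across-product spread, analogous to the up-to-300 mm/yr deviations reported. Product names and values are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = pd.Index(range(2001, 2016), name="year")

# Placeholder global-land annual precipitation (mm/yr) for three hypothetical products.
annual = pd.DataFrame({
    "gauge_based": 800 + rng.normal(0, 20, len(years)),
    "satellite":   850 + rng.normal(0, 30, len(years)),
    "reanalysis":  900 + rng.normal(0, 40, len(years)),
}, index=years)

spread = annual.max(axis=1) - annual.min(axis=1)   # across-product range per year
print(annual.mean().round(1))                      # product climatologies
print(f"largest across-product spread: {spread.max():.0f} mm/yr")
```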

Journal ArticleDOI
TL;DR: Genomic profiling may enhance the predictive utility of PD-L1 expression and tumor mutation burden and facilitate establishment of personalized combination immunotherapy approaches for genomically defined LUAC subsets.
Abstract: KRAS is the most common oncogenic driver in lung adenocarcinoma (LUAC). We previously reported that STK11/LKB1 (KL) or TP53 (KP) comutations define distinct subgroups of KRAS-mutant LUAC. Here, we examine the efficacy of PD-1 inhibitors in these subgroups. Objective response rates to PD-1 blockade differed significantly among KL (7.4%), KP (35.7%), and K-only (28.6%) subgroups (P < 0.001) in the Stand Up To Cancer (SU2C) cohort (174 patients) with KRAS-mutant LUAC and in patients treated with nivolumab in the CheckMate-057 phase III trial (0% vs. 57.1% vs. 18.2%; P = 0.047). In the SU2C cohort, KL LUAC exhibited shorter progression-free (P < 0.001) and overall (P = 0.0015) survival compared with KRAS-mutant, STK11/LKB1 wild-type LUAC. Among 924 LUACs, STK11/LKB1 alterations were the only marker significantly associated with PD-L1 negativity in TMB-intermediate/high LUAC. The impact of STK11/LKB1 alterations on clinical outcomes with PD-1/PD-L1 inhibitors extended to PD-L1-positive non-small cell lung cancer. In Kras-mutant murine LUAC models, Stk11/Lkb1 loss promoted PD-1/PD-L1 inhibitor resistance, suggesting a causal role. Our results identify STK11/LKB1 alterations as a major driver of primary resistance to PD-1 blockade in KRAS-mutant LUAC. Significance: This work identifies STK11/LKB1 alterations as the most prevalent genomic driver of primary resistance to PD-1 axis inhibitors in KRAS-mutant lung adenocarcinoma. Genomic profiling may enhance the predictive utility of PD-L1 expression and tumor mutation burden and facilitate establishment of personalized combination immunotherapy approaches for genomically defined LUAC subsets. Cancer Discov; 8(7); 822-35. ©2018 AACR. See related commentary by Etxeberria et al., p. 794. This article is highlighted in the In This Issue feature, p. 781.

Journal ArticleDOI
Bela Abolfathi1, D. S. Aguado2, Gabriela Aguilar3, Carlos Allende Prieto2  +361 moreInstitutions (94)
TL;DR: SDSS-IV is the fourth generation of the Sloan Digital Sky Survey and has been in operation since 2014 July; as discussed by the authors, this paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014-2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.
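For readers who want to touch DR14 programmatically, here is a hedged sketch using astroquery's SDSS module; the data_release keyword and the exact SQL are assumptions about the installed astroquery version and standard SDSS CAS table names, so adjust as needed.

```python
# Sketch only: requires `pip install astroquery` and network access to the SDSS servers.
from astroquery.sdss import SDSS

query = """
SELECT TOP 10 s.plate, s.mjd, s.fiberID, s.z, s.class
FROM SpecObj AS s
WHERE s.class = 'GALAXY' AND s.z BETWEEN 0.1 AND 0.2
"""
# data_release=14 targets DR14 (keyword assumed available in recent astroquery releases).
table = SDSS.query_sql(query, data_release=14)
print(table)
```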

Journal ArticleDOI
Ali H. Mokdad1, Katherine Ballestros1, Michelle Echko1, Scott D Glenn1, Helen E Olsen1, Erin C Mullany1, Alexander Lee1, Abdur Rahman Khan2, Alireza Ahmadi3, Alireza Ahmadi4, Alize J. Ferrari5, Alize J. Ferrari6, Alize J. Ferrari1, Amir Kasaeian7, Andrea Werdecker, Austin Carter1, Ben Zipkin1, Benn Sartorius8, Benn Sartorius9, Berrin Serdar10, Bryan L. Sykes11, Christopher Troeger1, Christina Fitzmaurice12, Christina Fitzmaurice1, Colin D. Rehm13, Damian Santomauro5, Damian Santomauro6, Damian Santomauro1, Daniel Kim14, Danny V. Colombara1, David C. Schwebel15, Derrick Tsoi1, Dhaval Kolte16, Elaine O. Nsoesie1, Emma Nichols1, Eyal Oren17, Fiona J Charlson6, Fiona J Charlson1, Fiona J Charlson5, George C Patton18, Gregory A. Roth1, H. Dean Hosgood19, Harvey Whiteford5, Harvey Whiteford6, Harvey Whiteford1, Hmwe H Kyu1, Holly E. Erskine5, Holly E. Erskine6, Holly E. Erskine1, Hsiang Huang20, Ira Martopullo1, Jasvinder A. Singh15, Jean B. Nachega21, Jean B. Nachega22, Jean B. Nachega23, Juan Sanabria24, Juan Sanabria25, Kaja Abbas26, Kanyin Ong1, Karen M. Tabb27, Kristopher J. Krohn1, Leslie Cornaby1, Louisa Degenhardt1, Louisa Degenhardt28, Mark Moses1, Maryam S. Farvid29, Max Griswold1, Michael H. Criqui30, Michelle L. Bell31, Minh Nguyen1, Mitch T Wallin32, Mitch T Wallin33, Mojde Mirarefin1, Mostafa Qorbani, Mustafa Z. Younis34, Nancy Fullman1, Patrick Liu1, Paul S Briant1, Philimon Gona35, Rasmus Havmoller3, Ricky Leung36, Ruth W Kimokoti37, Shahrzad Bazargan-Hejazi38, Shahrzad Bazargan-Hejazi39, Simon I. Hay40, Simon I. Hay1, Simon Yadgir1, Stan Biryukov1, Stein Emil Vollset41, Stein Emil Vollset1, Tahiya Alam1, Tahvi Frank1, Talha Farid2, Ted R. Miller42, Ted R. Miller43, Theo Vos1, Till Bärnighausen44, Till Bärnighausen29, Tsegaye Telwelde Gebrehiwot45, Yuichiro Yano46, Ziyad Al-Aly47, Alem Mehari48, Alexis J. Handal49, Amit Kandel50, Ben Anderson51, Brian J. Biroscak31, Brian J. Biroscak52, Dariush Mozaffarian53, E. Ray Dorsey54, Eric L. Ding29, Eun-Kee Park55, Gregory R. Wagner29, Guoqing Hu56, Honglei Chen57, Jacob E. Sunshine51, Jagdish Khubchandani58, Janet L Leasher59, Janni Leung6, Janni Leung51, Joshua A. Salomon29, Jürgen Unützer51, Leah E. Cahill60, Leah E. Cahill29, Leslie T. Cooper61, Masako Horino, Michael Brauer1, Michael Brauer62, Nicholas J K Breitborde63, Peter J. Hotez64, Roman Topor-Madry65, Roman Topor-Madry66, Samir Soneji67, Saverio Stranges68, Spencer L. James1, Stephen M. Amrock69, Sudha Jayaraman70, Tejas V. Patel, Tomi Akinyemiju15, Vegard Skirbekk41, Vegard Skirbekk71, Yohannes Kinfu72, Zulfiqar A Bhutta73, Jost B. Jonas44, Christopher J L Murray1 
Institute for Health Metrics and Evaluation1, University of Louisville2, Karolinska Institutet3, Kermanshah University of Medical Sciences4, Centre for Mental Health5, University of Queensland6, Tehran University of Medical Sciences7, South African Medical Research Council8, University of KwaZulu-Natal9, University of Colorado Boulder10, University of California, Irvine11, Fred Hutchinson Cancer Research Center12, Montefiore Medical Center13, Northeastern University14, University of Alabama at Birmingham15, Brown University16, San Diego State University17, University of Melbourne18, Albert Einstein College of Medicine19, Cambridge Health Alliance20, Johns Hopkins University21, University of Cape Town22, University of Pittsburgh23, Marshall University24, Case Western Reserve University25, University of London26, University of Illinois at Urbana–Champaign27, National Drug and Alcohol Research Centre28, Harvard University29, University of California, San Diego30, Yale University31, Veterans Health Administration32, Georgetown University33, Jackson State University34, University of Massachusetts Boston35, State University of New York System36, Simmons College37, University of California, Los Angeles38, Charles R. Drew University of Medicine and Science39, University of Oxford40, Norwegian Institute of Public Health41, Curtin University42, Pacific Institute43, Heidelberg University44, Jimma University45, Northwestern University46, Washington University in St. Louis47, Howard University48, University of New Mexico49, University at Buffalo50, University of Washington51, University of South Florida52, Tufts University53, University of Rochester Medical Center54, Kosin University55, Central South University56, Michigan State University57, Ball State University58, Nova Southeastern University59, Dalhousie University60, Mayo Clinic61, University of British Columbia62, Ohio State University63, Baylor University64, Wrocław Medical University65, Jagiellonian University Medical College66, Dartmouth College67, University of Western Ontario68, Oregon Health & Science University69, Virginia Commonwealth University70, Columbia University71, University of Canberra72, Aga Khan University73
10 Apr 2018-JAMA
TL;DR: There are wide differences in the burden of disease at the state level, and specific diseases and risk factors, such as drug use disorders, high BMI, poor diet, high fasting plasma glucose level, and alcohol use disorders, are increasing and warrant increased attention.
Abstract: Introduction Several studies have measured health outcomes in the United States, but none have provided a comprehensive assessment of patterns of health by state. Objective To use the results of the Global Burden of Disease Study (GBD) to report trends in the burden of diseases, injuries, and risk factors at the state level from 1990 to 2016. Design and Setting A systematic analysis of published studies and available data sources estimates the burden of disease by age, sex, geography, and year. Main Outcomes and Measures Prevalence, incidence, mortality, life expectancy, healthy life expectancy (HALE), years of life lost (YLLs) due to premature mortality, years lived with disability (YLDs), and disability-adjusted life-years (DALYs) for 333 causes and 84 risk factors with 95% uncertainty intervals (UIs) were computed. Results Between 1990 and 2016, overall death rates in the United States declined from 745.2 (95% UI, 740.6 to 749.8) per 100 000 persons to 578.0 (95% UI, 569.4 to 587.1) per 100 000 persons. The probability of death among adults aged 20 to 55 years declined in 31 states and Washington, DC from 1990 to 2016. In 2016, Hawaii had the highest life expectancy at birth (81.3 years) and Mississippi had the lowest (74.7 years), a 6.6-year difference. Minnesota had the highest HALE at birth (70.3 years), and West Virginia had the lowest (63.8 years), a 6.5-year difference. The leading causes of DALYs in the United States for 1990 and 2016 were ischemic heart disease and lung cancer, while the third leading cause in 1990 was low back pain, and the third leading cause in 2016 was chronic obstructive pulmonary disease. Opioid use disorders moved from the 11th leading cause of DALYs in 1990 to the 7th leading cause in 2016, representing a 74.5% (95% UI, 42.8% to 93.9%) change. In 2016, each of the following 6 risks individually accounted for more than 5% of risk-attributable DALYs: tobacco consumption, high body mass index (BMI), poor diet, alcohol and drug use, high fasting plasma glucose, and high blood pressure. Across all US states, the top risk factors in terms of attributable DALYs were due to 1 of the 3 following causes: tobacco consumption (32 states), high BMI (10 states), or alcohol and drug use (8 states). Conclusions and Relevance There are wide differences in the burden of disease at the state level. Specific diseases and risk factors, such as drug use disorders, high BMI, poor diet, high fasting plasma glucose level, and alcohol use disorders are increasing and warrant increased attention. These data can be used to inform national health priorities for research, clinical care, and policy.

Journal ArticleDOI
TL;DR: In this article, a better understanding of compound events may improve projections of potential high-impact events, and can provide a bridge between climate scientists, engineers, social scientists, impact modellers and decision-makers.
Abstract: Floods, wildfires, heatwaves and droughts often result from a combination of interacting physical processes across multiple spatial and temporal scales. The combination of processes (climate drivers and hazards) leading to a significant impact is referred to as a ‘compound event’. Traditional risk assessment methods typically only consider one driver and/or hazard at a time, potentially leading to underestimation of risk, as the processes that cause extreme events often interact and are spatially and/or temporally dependent. Here we show how a better understanding of compound events may improve projections of potential high-impact events, and can provide a bridge between climate scientists, engineers, social scientists, impact modellers and decision-makers, who need to work closely together to understand these complex events.

Journal ArticleDOI
29 Jun 2018-Science
TL;DR: In this paper, the authors examine barriers and opportunities associated with these difficult-to-decarbonize services and processes, including possible technological solutions and research and development priorities, and examine the use of existing technologies to meet future demands for these services without net addition of CO2 to the atmosphere.
Abstract: Some energy services and industrial processes-such as long-distance freight transport, air travel, highly reliable electricity, and steel and cement manufacturing-are particularly difficult to provide without adding carbon dioxide (CO2) to the atmosphere. Rapidly growing demand for these services, combined with long lead times for technology development and long lifetimes of energy infrastructure, make decarbonization of these services both essential and urgent. We examine barriers and opportunities associated with these difficult-to-decarbonize services and processes, including possible technological solutions and research and development priorities. A range of existing technologies could meet future demands for these services and processes without net addition of CO2 to the atmosphere, but their use may depend on a combination of cost reductions via research and innovation, as well as coordinated deployment and integration of operations across currently discrete energy industries.



Journal ArticleDOI
TL;DR: Inotersen improved the course of neurologic disease and quality of life in patients with hereditary transthyretin amyloidosis and improvements were independent of disease stage, mutation type, or the presence of cardiomyopathy.
Abstract: Background Hereditary transthyretin amyloidosis is caused by pathogenic single-nucleotide variants in the gene encoding transthyretin (TTR) that induce transthyretin misfolding and systemi...

Journal ArticleDOI
TL;DR: Both patterns are unlikely to be the result of ecological drift, but are inevitable emergent properties of open microbial systems resulting mainly from biotic interactions and environmental and spatial processes.
Abstract: Microbial communities often exhibit incredible taxonomic diversity, raising questions regarding the mechanisms enabling species coexistence and the role of this diversity in community functioning. On the one hand, many coexisting but taxonomically distinct microorganisms can encode the same energy-yielding metabolic functions, and this functional redundancy contrasts with the expectation that species should occupy distinct metabolic niches. On the other hand, the identity of taxa encoding each function can vary substantially across space or time with little effect on the function, and this taxonomic variability is frequently thought to result from ecological drift between equivalent organisms. Here, we synthesize the powerful paradigm emerging from these two patterns, connecting the roles of function, functional redundancy and taxonomy in microbial systems. We conclude that both patterns are unlikely to be the result of ecological drift, but are inevitable emergent properties of open microbial systems resulting mainly from biotic interactions and environmental and spatial processes.

Journal ArticleDOI
TL;DR: The Feedback In Realistic Environments (FIRE) project explores feedback in cosmological galaxy formation simulations; this paper introduces FIRE-2, an updated numerical implementation of FIRE physics for the GIZMO code, motivated by more accurate numerics and the exploration of new physics (e.g. magnetic fields).
Abstract: The Feedback In Realistic Environments (FIRE) project explores feedback in cosmological galaxy formation simulations. Previous FIRE simulations used an identical source code (“FIRE-1”) for consistency. Motivated by the development of more accurate numerics – including hydrodynamic solvers, gravitational softening, and supernova coupling algorithms – and exploration of new physics (e.g. magnetic fields), we introduce “FIRE-2”, an updated numerical implementation of FIRE physics for the GIZMO code. We run a suite of simulations and compare against FIRE-1: overall, FIRE-2 improvements do not qualitatively change galaxy-scale properties. We pursue an extensive study of numerics versus physics. Details of the star-formation algorithm, cooling physics, and chemistry have weak effects, provided that we include metal-line cooling and star formation occurs at higher-than-mean densities. We present new resolution criteria for high-resolution galaxy simulations. Most galaxy-scale properties are robust to numerics we test, provided: (1) Toomre masses are resolved; (2) feedback coupling ensures conservation, and (3) individual supernovae are time-resolved. Stellar masses and profiles are most robust to resolution, followed by metal abundances and morphologies, followed by properties of winds and circum-galactic media (CGM). Central (∼kpc) mass concentrations in massive (>L*) galaxies are sensitive to numerics (via trapping/recycling of winds in hot halos). Multiple feedback mechanisms play key roles: supernovae regulate stellar masses/winds; stellar mass-loss fuels late star formation; radiative feedback suppresses accretion onto dwarfs and instantaneous star formation in disks. We provide all initial conditions and numerical algorithms used.

Journal ArticleDOI
Andrew Shepherd1, Erik R. Ivins2, Eric Rignot3, Ben Smith4, Michiel R. van den Broeke, Isabella Velicogna3, Pippa L. Whitehouse5, Kate Briggs1, Ian Joughin4, Gerhard Krinner6, Sophie Nowicki7, Tony Payne8, Ted Scambos9, Nicole Schlegel2, Geruo A3, Cécile Agosta, Andreas P. Ahlstrøm10, Greg Babonis11, Valentina R. Barletta12, Alejandro Blazquez, Jennifer Bonin13, Beata Csatho11, Richard I. Cullather7, Denis Felikson14, Xavier Fettweis, René Forsberg12, Hubert Gallée6, Alex S. Gardner2, Lin Gilbert15, Andreas Groh16, Brian Gunter17, Edward Hanna18, Christopher Harig19, Veit Helm20, Alexander Horvath21, Martin Horwath16, Shfaqat Abbas Khan12, Kristian K. Kjeldsen10, Hannes Konrad1, Peter L. Langen22, Benoit S. Lecavalier23, Bryant D. Loomis7, Scott B. Luthcke7, Malcolm McMillan1, Daniele Melini24, Sebastian H. Mernild25, Sebastian H. Mernild26, Sebastian H. Mernild27, Yara Mohajerani3, Philip Moore28, Jeremie Mouginot3, Jeremie Mouginot6, Gorka Moyano, Alan Muir15, Thomas Nagler, Grace A. Nield5, Johan Nilsson2, Brice Noël, Ines Otosaka1, Mark E. Pattle, W. Richard Peltier29, Nadege Pie14, Roelof Rietbroek30, Helmut Rott, Louise Sandberg-Sørensen12, Ingo Sasgen20, Himanshu Save14, Bernd Scheuchl3, Ernst Schrama31, Ludwig Schröder16, Ki-Weon Seo32, Sebastian B. Simonsen12, Thomas Slater1, Giorgio Spada33, T. C. Sutterley3, Matthieu Talpe9, Lev Tarasov23, Willem Jan van de Berg, Wouter van der Wal31, Melchior van Wessem, Bramha Dutt Vishwakarma34, David N. Wiese2, Bert Wouters 
14 Jun 2018-Nature
TL;DR: This work combines satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that the Antarctic Ice Sheet lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, which corresponds to an increase in mean sea level of 7.6 ± 3.9 millimetres.
Abstract: The Antarctic Ice Sheet is an important indicator of climate change and driver of sea-level rise. Here we combine satellite observations of its changing volume, flow and gravitational attraction with modelling of its surface mass balance to show that it lost 2,720 ± 1,390 billion tonnes of ice between 1992 and 2017, which corresponds to an increase in mean sea level of 7.6 ± 3.9 millimetres (errors are one standard deviation). Over this period, ocean-driven melting has caused rates of ice loss from West Antarctica to increase from 53 ± 29 billion to 159 ± 26 billion tonnes per year; ice-shelf collapse has increased the rate of ice loss from the Antarctic Peninsula from 7 ± 13 billion to 33 ± 16 billion tonnes per year. We find large variations in and among model estimates of surface mass balance and glacial isostatic adjustment for East Antarctica, with its average rate of mass gain over the period 1992–2017 (5 ± 46 billion tonnes per year) being the least certain.
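A back-of-the-envelope consistency check (mine), using the standard conversion of roughly 360 Gt of ice per millimetre of global mean sea-level rise, that the quoted mass loss and sea-level numbers agree.

```python
mass_loss_gt = 2720      # Gt of ice lost, 1992-2017 (value quoted in the abstract)
gt_per_mm_sle = 361.8    # ~361.8 Gt of ice ~ 1 mm of global mean sea-level rise

sea_level_mm = mass_loss_gt / gt_per_mm_sle
print(f"implied sea-level contribution ~ {sea_level_mm:.1f} mm")  # ~7.5 mm vs 7.6 ± 3.9 reported
```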

Journal ArticleDOI
TL;DR: With a deeper understanding of the fundamental challenges faced by wearable sensors and of the state of the art in wearable sensor technology, the roadmap becomes clearer for creating the next generation of innovations and breakthroughs.
Abstract: Wearable sensors have recently seen a large increase in both research and commercialization. However, success in wearable sensors has been a mix of both progress and setbacks. Most of commercial progress has been in smart adaptation of existing mechanical, electrical and optical methods of measuring the body. This adaptation has involved innovations in how to miniaturize sensing technologies, how to make them conformal and flexible, and in the development of companion software that increases the value of the measured data. However, chemical sensing modalities have experienced greater challenges in commercial adoption, especially for non-invasive chemical sensors. There have also been significant challenges in making significant fundamental improvements to existing mechanical, electrical, and optical sensing modalities, especially in improving their specificity of detection. Many of these challenges can be understood by appreciating the body's surface (skin) as more of an information barrier than as an information source. With a deeper understanding of the fundamental challenges faced for wearable sensors and of the state-of-the-art for wearable sensor technology, the roadmap becomes clearer for creating the next generation of innovations and breakthroughs.

Journal ArticleDOI
Douglas M. Ruderfer1, Stephan Ripke2, Stephan Ripke3, Stephan Ripke4  +628 moreInstitutions (156)
14 Jun 2018-Cell
TL;DR: For the first time, specific loci that distinguish between BD and SCZ are discovered and polygenic components underlying multiple symptom dimensions are identified that point to the utility of genetics to inform symptomology and potential treatment.

Journal ArticleDOI
TL;DR: In this paper, a deep neural network is used to represent all atmospheric subgrid processes in a climate model by learning from a multiscale model in which convection is treated explicitly.
Abstract: The representation of nonlinear subgrid processes, especially clouds, has been a major source of uncertainty in climate models for decades. Cloud-resolving models better represent many of these processes and can now be run globally but only for short-term simulations of at most a few years because of computational limitations. Here we demonstrate that deep learning can be used to capture many advantages of cloud-resolving modeling at a fraction of the computational cost. We train a deep neural network to represent all atmospheric subgrid processes in a climate model by learning from a multiscale model in which convection is treated explicitly. The trained neural network then replaces the traditional subgrid parameterizations in a global general circulation model in which it freely interacts with the resolved dynamics and the surface-flux scheme. The prognostic multiyear simulations are stable and closely reproduce not only the mean climate of the cloud-resolving simulation but also key aspects of variability, including precipitation extremes and the equatorial wave spectrum. Furthermore, the neural network approximately conserves energy despite not being explicitly instructed to. Finally, we show that the neural network parameterization generalizes to new surface forcing patterns but struggles to cope with temperatures far outside its training manifold. Our results show the feasibility of using deep learning for climate model parameterization. In a broader context, we anticipate that data-driven Earth system model development could play a key role in reducing climate prediction uncertainty in the coming decade.
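A heavily simplified sketch of the approach (not the authors' network, data, or GCM coupling): learn a mapping from the coarse-grained column state to subgrid tendencies using training pairs that would normally come from the multiscale model, here faked with random data and a small scikit-learn MLP. Input/output sizes and the architecture are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_state, n_tend = 5000, 64, 60   # column state variables -> subgrid tendencies

# Stand-in for (coarse-grained state, cloud-resolving tendency) training pairs.
X = rng.normal(size=(n_samples, n_state))
true_map = rng.normal(scale=0.1, size=(n_state, n_tend))
Y = np.tanh(X @ true_map) + 0.01 * rng.normal(size=(n_samples, n_tend))

nn = MLPRegressor(hidden_layer_sizes=(256, 256), activation="relu",
                  max_iter=200, random_state=0)
nn.fit(X[:4000], Y[:4000])
print("held-out R^2:", round(nn.score(X[4000:], Y[4000:]), 3))

# In a GCM, nn.predict(column_state) would stand in for the conventional
# subgrid parameterization at each grid column and time step.
```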

Journal ArticleDOI
TL;DR: Overall and intracranial antitumour activity of lorlatinib in patients with ALK-positive, advanced non-small-cell lung cancer and safety data for all treated patients (EXP1-6) are presented.
Abstract: Summary Background Lorlatinib is a potent, brain-penetrant, third-generation inhibitor of ALK and ROS1 tyrosine kinases with broad coverage of ALK mutations. In a phase 1 study, activity was seen in patients with ALK-positive non-small-cell lung cancer, most of whom had CNS metastases and progression after ALK-directed therapy. We aimed to analyse the overall and intracranial antitumour activity of lorlatinib in patients with ALK-positive, advanced non-small-cell lung cancer. Methods In this phase 2 study, patients with histologically or cytologically ALK-positive or ROS1-positive, advanced, non-small-cell lung cancer, with or without CNS metastases, with an Eastern Cooperative Oncology Group performance status of 0, 1, or 2, and adequate end-organ function were eligible. Patients were enrolled into six different expansion cohorts (EXP1–6) on the basis of ALK and ROS1 status and previous therapy, and were given lorlatinib 100 mg orally once daily continuously in 21-day cycles. The primary endpoint was overall and intracranial tumour response by independent central review, assessed in pooled subgroups of ALK-positive patients. Analyses of activity and safety were based on the safety analysis set (ie, all patients who received at least one dose of lorlatinib) as assessed by independent central review. Patients with measurable CNS metastases at baseline by independent central review were included in the intracranial activity analyses. In this report, we present lorlatinib activity data for the ALK-positive patients (EXP1–5 only), and safety data for all treated patients (EXP1–6). This study is ongoing and is registered with ClinicalTrials.gov, number NCT01970865. Findings Between Sept 15, 2015, and Oct 3, 2016, 276 patients were enrolled: 30 who were ALK positive and treatment naive (EXP1); 59 who were ALK positive and received previous crizotinib without (n=27; EXP2) or with (n=32; EXP3A) previous chemotherapy; 28 who were ALK positive and received one previous non-crizotinib ALK tyrosine kinase inhibitor, with or without chemotherapy (EXP3B); 112 who were ALK positive with two (n=66; EXP4) or three (n=46; EXP5) previous ALK tyrosine kinase inhibitors with or without chemotherapy; and 47 who were ROS1 positive with any previous treatment (EXP6). One patient in EXP4 died before receiving lorlatinib and was excluded from the safety analysis set. In treatment-naive patients (EXP1), an objective response was achieved in 27 (90·0%; 95% CI 73·5–97·9) of 30 patients. Three patients in EXP1 had measurable baseline CNS lesions per independent central review, and objective intracranial responses were observed in two (66·7%; 95% CI 9·4–99·2). In ALK-positive patients with at least one previous ALK tyrosine kinase inhibitor (EXP2–5), objective responses were achieved in 93 (47·0%; 39·9–54·2) of 198 patients and objective intracranial response in those with measurable baseline CNS lesions in 51 (63·0%; 51·5–73·4) of 81 patients. Objective response was achieved in 41 (69·5%; 95% CI 56·1–80·8) of 59 patients who had only received previous crizotinib (EXP2–3A), nine (32·1%; 15·9–52·4) of 28 patients with one previous non-crizotinib ALK tyrosine kinase inhibitor (EXP3B), and 43 (38·7%; 29·6–48·5) of 111 patients with two or more previous ALK tyrosine kinase inhibitors (EXP4–5). 
Objective intracranial response was achieved in 20 (87·0%; 95% CI 66·4–97·2) of 23 patients with measurable baseline CNS lesions in EXP2–3A, five (55·6%; 21·2–86·3) of nine patients in EXP3B, and 26 (53·1%; 38·3–67·5) of 49 patients in EXP4–5. The most common treatment-related adverse events across all patients were hypercholesterolaemia (224 [81%] of 275 patients overall and 43 [16%] grade 3–4) and hypertriglyceridaemia (166 [60%] overall and 43 [16%] grade 3–4). Serious treatment-related adverse events occurred in 19 (7%) of 275 patients and seven patients (3%) permanently discontinued treatment because of treatment-related adverse events. No treatment-related deaths were reported. Interpretation Consistent with its broad ALK mutational coverage and CNS penetration, lorlatinib showed substantial overall and intracranial activity both in treatment-naive patients with ALK-positive non-small-cell lung cancer, and in those who had progressed on crizotinib, second-generation ALK tyrosine kinase inhibitors, or after up to three previous ALK tyrosine kinase inhibitors. Thus, lorlatinib could represent an effective treatment option for patients with ALK-positive non-small-cell lung cancer in first-line or subsequent therapy. Funding Pfizer.
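The response rates above carry exact binomial confidence intervals; a small check (mine, not the trial's statistical code) reproducing the treatment-naive figure of 27/30 responders with a Clopper-Pearson interval:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

k, n = 27, 30   # objective responses among treatment-naive patients (EXP1)
lo, hi = clopper_pearson(k, n)
print(f"{k/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")   # should be close to 90.0% (73.5-97.9)
```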

Journal ArticleDOI
TL;DR: An overview of the molecular mechanisms of TGF-β/Smad signaling pathway in renal, hepatic, pulmonary and cardiac fibrosis is presented and particular challenges are presented and placed within the context of future applications against tissue fibrosis.

Journal ArticleDOI
TL;DR: Treatments that target calcitonin gene-related peptide (CGRP) and its receptor are proving effective for migraine treatment, and the hypothesis that CGRP has a major role in migraine pathophysiology is strongly supported.
Abstract: Treatment of migraine is on the cusp of a new era with the development of drugs that target the trigeminal sensory neuropeptide calcitonin gene-related peptide (CGRP) or its receptor. Several of these drugs are expected to receive approval for use in migraine headache in 2018 and 2019. CGRP-related therapies offer considerable improvements over existing drugs as they are the first to be designed specifically to act on the trigeminal pain system, they are more specific and they seem to have few or no adverse effects. CGRP receptor antagonists such as ubrogepant are effective for acute relief of migraine headache, whereas monoclonal antibodies against CGRP (eptinezumab, fremanezumab and galcanezumab) or the CGRP receptor (erenumab) effectively prevent migraine attacks. As these drugs come into clinical use, we provide an overview of knowledge that has led to successful development of these drugs. We describe the biology of CGRP signalling, summarize key clinical evidence for the role of CGRP in migraine headache, including the efficacy of CGRP-targeted treatment, and synthesize what is known about the role of CGRP in the trigeminovascular system. Finally, we consider how the latest findings provide new insight into the central role of the trigeminal ganglion in the pathophysiology of migraine.

Journal ArticleDOI
TL;DR: The findings indicate that the ENIGMA meta-analysis approach can achieve robust findings in clinical neuroscience studies; also, medication effects should be taken into account in future genetic association studies of cortical thickness in schizophrenia.

Journal ArticleDOI
TL;DR: In this paper, the authors update the scaling relations between galaxy-integrated molecular gas masses, stellar masses, and star formation rates (SFRs), in the framework of the star formation main sequence (MS), with the main goal of testing for possible systematic effects.
Abstract: This paper provides an update of our previous scaling relations between galaxy-integrated molecular gas masses, stellar masses, and star formation rates (SFRs), in the framework of the star formation main sequence (MS), with the main goal of testing for possible systematic effects. For this purpose our new study combines three independent methods of determining molecular gas masses from CO line fluxes, far-infrared dust spectral energy distributions, and ∼1 mm dust photometry, in a large sample of 1444 star-forming galaxies between z = 0 and 4. The sample covers the stellar mass range log(M_*/M_⊙) = 9.0-11.8, and SFRs relative to that on the MS, δMS = SFR/SFR(MS), from 10^-1.3 to 10^2.2. Our most important finding is that all data sets, despite the different techniques and analysis methods used, follow the same scaling trends, once method-to-method zero-point offsets are minimized and uncertainties are properly taken into account. The molecular gas depletion time t_depl, defined as the ratio of molecular gas mass to SFR, scales as (1+z)^-0.6 × (δMS)^-0.44 and is only weakly dependent on stellar mass. The ratio of molecular to stellar mass μ_gas depends on (1+z)^2.5 × (δMS)^0.52 × (M_*)^-0.36, which tracks the evolution of the specific SFR. The redshift dependence of μ_gas requires a curvature term, as may the mass dependences of t_depl and μ_gas. We find no or only weak correlations of t_depl and μ_gas with optical size R or surface density once one removes the above scalings, but we caution that optical sizes may not be appropriate for the high gas and dust columns at high z.
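A small helper (my own, with the zero-point left as an explicit assumption) encoding the depletion-time scaling quoted in the abstract, t_depl ∝ (1+z)^-0.6 × (δMS)^-0.44:

```python
def t_depl_gyr(z, delta_ms, t0_gyr=1.0):
    """Molecular gas depletion time scaling quoted in the abstract:
    t_depl ~ t0 * (1+z)**-0.6 * (delta_MS)**-0.44.
    t0_gyr is an assumed normalization (roughly the z=0, on-main-sequence value)."""
    return t0_gyr * (1 + z) ** -0.6 * delta_ms ** -0.44

# Example: a z = 2 galaxy lying a factor of 4 above the main sequence.
print(f"t_depl ~ {t_depl_gyr(z=2.0, delta_ms=4.0):.2f} Gyr")
```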