
Showing papers by "University of Washington" published in 2019


Journal ArticleDOI
TL;DR: SciPy as discussed by the authors is an open source scientific computing library for the Python programming language; it includes functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics.
Abstract: SciPy is an open source scientific computing library for the Python programming language. SciPy 1.0 was released in late 2017, about 16 years after the original version 0.1 release. SciPy has become a de facto standard for leveraging scientific algorithms in the Python programming language, with more than 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories, and millions of downloads per year. This includes usage of SciPy in almost half of all machine learning projects on GitHub, and usage by high profile projects including LIGO gravitational wave analysis and creation of the first-ever image of a black hole (M87). The library includes functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics. In this work, we provide an overview of the capabilities and development practices of the SciPy library and highlight some recent technical developments.
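To make the breadth of that list concrete, here is a minimal usage sketch (illustrative only, not taken from the paper) touching three of the subpackages named above: integration, minimization, and sparse linear algebra.

```python
# Minimal sketch of a few SciPy subpackages mentioned above.
import numpy as np
from scipy import integrate, optimize, sparse
from scipy.sparse.linalg import spsolve

# Numerical integration: quad() evaluates a definite integral.
area, err = integrate.quad(np.sin, 0.0, np.pi)            # ~2.0

# Minimization: find the minimum of a simple scalar function.
res = optimize.minimize_scalar(lambda x: (x - 3.0) ** 2)  # x* ~ 3.0

# Sparse linear algebra: solve a small tridiagonal system.
A = sparse.diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(5, 5), format="csc")
b = np.ones(5)
x = spsolve(A, b)

print(area, res.x, x)
```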

12,774 citations


Posted Content
TL;DR: PyTorch as discussed by the authors is a machine learning library that provides an imperative and Pythonic programming style that makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs.
Abstract: Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.
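As an illustration of the "imperative and Pythonic" style the abstract refers to, here is a minimal sketch (not taken from the paper) in which the model is ordinary Python code, debuggable with print statements, and movable to a GPU when one is available.

```python
import torch
import torch.nn as nn

# An ordinary Python class is the model; control flow is plain Python.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 1)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        return self.fc2(h)

device = "cuda" if torch.cuda.is_available() else "cpu"  # hardware accelerator if present
model = TinyNet().to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(64, 10, device=device)
y = torch.randn(64, 1, device=device)

for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()   # eager autograd: gradients are computed as the code runs
    opt.step()
```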

12,767 citations


Proceedings Article
01 Jan 2019
TL;DR: This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.
Abstract: Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it was designed from first principles to support an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several commonly used benchmarks.

10,045 citations


Journal ArticleDOI
Evan Bolyen1, Jai Ram Rideout1, Matthew R. Dillon1, Nicholas A. Bokulich1, Christian C. Abnet2, Gabriel A. Al-Ghalith3, Harriet Alexander4, Harriet Alexander5, Eric J. Alm6, Manimozhiyan Arumugam7, Francesco Asnicar8, Yang Bai9, Jordan E. Bisanz10, Kyle Bittinger11, Asker Daniel Brejnrod7, Colin J. Brislawn12, C. Titus Brown4, Benjamin J. Callahan13, Andrés Mauricio Caraballo-Rodríguez14, John Chase1, Emily K. Cope1, Ricardo Silva14, Christian Diener15, Pieter C. Dorrestein14, Gavin M. Douglas16, Daniel M. Durall17, Claire Duvallet6, Christian F. Edwardson, Madeleine Ernst14, Madeleine Ernst18, Mehrbod Estaki17, Jennifer Fouquier19, Julia M. Gauglitz14, Sean M. Gibbons20, Sean M. Gibbons15, Deanna L. Gibson17, Antonio Gonzalez14, Kestrel Gorlick1, Jiarong Guo21, Benjamin Hillmann3, Susan Holmes22, Hannes Holste14, Curtis Huttenhower23, Curtis Huttenhower24, Gavin A. Huttley25, Stefan Janssen26, Alan K. Jarmusch14, Lingjing Jiang14, Benjamin D. Kaehler27, Benjamin D. Kaehler25, Kyo Bin Kang28, Kyo Bin Kang14, Christopher R. Keefe1, Paul Keim1, Scott T. Kelley29, Dan Knights3, Irina Koester14, Tomasz Kosciolek14, Jorden Kreps1, Morgan G. I. Langille16, Joslynn S. Lee30, Ruth E. Ley31, Ruth E. Ley32, Yong-Xin Liu, Erikka Loftfield2, Catherine A. Lozupone19, Massoud Maher14, Clarisse Marotz14, Bryan D Martin20, Daniel McDonald14, Lauren J. McIver24, Lauren J. McIver23, Alexey V. Melnik14, Jessica L. Metcalf33, Sydney C. Morgan17, Jamie Morton14, Ahmad Turan Naimey1, Jose A. Navas-Molina14, Jose A. Navas-Molina34, Louis-Félix Nothias14, Stephanie B. Orchanian, Talima Pearson1, Samuel L. Peoples20, Samuel L. Peoples35, Daniel Petras14, Mary L. Preuss36, Elmar Pruesse19, Lasse Buur Rasmussen7, Adam R. Rivers37, Michael S. Robeson38, Patrick Rosenthal36, Nicola Segata8, Michael Shaffer19, Arron Shiffer1, Rashmi Sinha2, Se Jin Song14, John R. Spear39, Austin D. Swafford, Luke R. Thompson40, Luke R. Thompson41, Pedro J. Torres29, Pauline Trinh20, Anupriya Tripathi14, Peter J. Turnbaugh10, Sabah Ul-Hasan42, Justin J. J. van der Hooft43, Fernando Vargas, Yoshiki Vázquez-Baeza14, Emily Vogtmann2, Max von Hippel44, William A. Walters32, Yunhu Wan2, Mingxun Wang14, Jonathan Warren45, Kyle C. Weber46, Kyle C. Weber37, Charles H. D. Williamson1, Amy D. Willis20, Zhenjiang Zech Xu14, Jesse R. Zaneveld20, Yilong Zhang47, Qiyun Zhu14, Rob Knight14, J. Gregory Caporaso1 
TL;DR: QIIME 2 development was primarily funded by NSF Awards 1565100 to J.G.C. and 1565057 to R.K.; partial support was also provided by grants including NIH U54CA143925 and U54MD012388.
Abstract: QIIME 2 development was primarily funded by NSF Awards 1565100 to J.G.C. and 1565057 to R.K. Partial support was also provided by the following: grants NIH U54CA143925 (J.G.C. and T.P.) and U54MD012388 (J.G.C. and T.P.); grants from the Alfred P. Sloan Foundation (J.G.C. and R.K.); ERCSTG project MetaPG (N.S.); the Strategic Priority Research Program of the Chinese Academy of Sciences QYZDB-SSW-SMC021 (Y.B.); the Australian National Health and Medical Research Council APP1085372 (G.A.H., J.G.C., Von Bing Yap and R.K.); the Natural Sciences and Engineering Research Council (NSERC) to D.L.G.; and the State of Arizona Technology and Research Initiative Fund (TRIF), administered by the Arizona Board of Regents, through Northern Arizona University. All NCI coauthors were supported by the Intramural Research Program of the National Cancer Institute. S.M.G. and C. Diener were supported by the Washington Research Foundation Distinguished Investigator Award.

8,821 citations


Journal ArticleDOI
TL;DR: Among patients with severe aortic stenosis who were at low surgical risk, the rate of the composite of death, stroke, or rehospitalization at 1 year was significantly lower with TAVR than with surgery.
Abstract: Background Among patients with aortic stenosis who are at intermediate or high risk for death with surgery, major outcomes are similar with transcatheter aortic-valve replacement (TAVR) an...

2,917 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: DeepSDF as mentioned in this paper represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape.
Abstract: Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape. Hence, our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shape's interior or not. While classical SDFs, in either analytical or discretized voxel form, typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work.
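To make the sign convention concrete, here is a minimal sketch of a classical analytic SDF (a sphere), not the learned network itself: negative values lie inside the shape, positive values outside, and the surface is the zero level set.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside, zero on the surface."""
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # center  -> -1.0 (inside)
                [1.0, 0.0, 0.0],   # surface ->  0.0
                [2.0, 0.0, 0.0]])  # outside -> +1.0
print(sphere_sdf(pts))

# DeepSDF replaces this analytic function with a learned network f(latent_code, point) -> sdf,
# so a single model can represent a whole class of shapes rather than one fixed shape.
```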

2,247 citations


Journal ArticleDOI
TL;DR: The latest updates to CADD are reviewed, including the most recent version, 1.4, which supports the human genome build GRCh38; updates to the website are also presented, including simplified variant lookup, extended documentation, an Application Program Interface and improved mechanisms for integrating CADD scores into other tools or applications.
Abstract: Combined Annotation-Dependent Depletion (CADD) is a widely used measure of variant deleteriousness that can effectively prioritize causal variants in genetic analyses, particularly highly penetrant contributors to severe Mendelian disorders. CADD is an integrative annotation built from more than 60 genomic features, and can score human single nucleotide variants and short insertions and deletions anywhere in the reference assembly. CADD uses a machine learning model trained on a binary distinction between simulated de novo variants and variants that have arisen and become fixed in human populations since the split between humans and chimpanzees; the former are free of selective pressure and may thus include both neutral and deleterious alleles, while the latter are overwhelmingly neutral (or, at most, weakly deleterious) by virtue of having survived millions of years of purifying selection. Here we review the latest updates to CADD, including the most recent version, 1.4, which supports the human genome build GRCh38. We also present updates to our website that include simplified variant lookup, extended documentation, an Application Program Interface and improved mechanisms for integrating CADD scores into other tools or applications. CADD scores, software and documentation are available at https://cadd.gs.washington.edu.

2,091 citations


Journal ArticleDOI
01 Feb 2019-Nature
TL;DR: A cell atlas of mouse organogenesis provides a global view of developmental processes occurring during this critical period, including focused analyses of the apical ectodermal ridge, limb mesenchyme and skeletal muscle.
Abstract: Mammalian organogenesis is a remarkable process. Within a short timeframe, the cells of the three germ layers transform into an embryo that includes most of the major internal and external organs. Here we investigate the transcriptional dynamics of mouse organogenesis at single-cell resolution. Using single-cell combinatorial indexing, we profiled the transcriptomes of around 2 million cells derived from 61 embryos staged between 9.5 and 13.5 days of gestation, in a single experiment. The resulting ‘mouse organogenesis cell atlas’ (MOCA) provides a global view of developmental processes during this critical window. We use Monocle 3 to identify hundreds of cell types and 56 trajectories, many of which are detected only because of the depth of cellular coverage, and collectively define thousands of corresponding marker genes. We explore the dynamics of gene expression within cell types and trajectories over time, including focused analyses of the apical ectodermal ridge, limb mesenchyme and skeletal muscle. Data from single-cell combinatorial-indexing RNA-sequencing analysis of 2 million cells from mouse embryos between embryonic days 9.5 and 13.5 are compiled in a cell atlas of mouse organogenesis, which provides a global view of developmental processes occurring during this critical period.

1,865 citations


Proceedings ArticleDOI
01 Nov 2019
TL;DR: SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks and demonstrates statistically significant improvements over BERT.
Abstract: Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et. al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.
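The released checkpoints are drop-in BERT replacements; the sketch below assumes the Hugging Face Transformers library and the commonly used `allenai/scibert_scivocab_uncased` checkpoint name from the linked repository, and simply extracts contextual embeddings for a scientific sentence.

```python
# Sketch: encoding a scientific sentence with SciBERT via Hugging Face Transformers.
# Assumes the `allenai/scibert_scivocab_uncased` checkpoint; see the GitHub link above.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

sentence = "The corpus callosum connects the two cerebral hemispheres."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual token embeddings, usable as features for tagging, classification or parsing.
token_embeddings = outputs.last_hidden_state  # shape: (1, num_tokens, hidden_size)
print(token_embeddings.shape)
```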

1,864 citations


Journal ArticleDOI
TL;DR: This article summarizes the ATTD consensus recommendations for relevant aspects of CGM data utilization and reporting among the various diabetes populations.
Abstract: Improvements in sensor accuracy, greater convenience and ease of use, and expanding reimbursement have led to growing adoption of continuous glucose monitoring (CGM). However, successful utilization of CGM technology in routine clinical practice remains relatively low. This may be due in part to the lack of clear and agreed-upon glycemic targets that both diabetes teams and people with diabetes can work toward. Although unified recommendations for use of key CGM metrics have been established in three separate peer-reviewed articles, formal adoption by diabetes professional organizations and guidance in the practical application of these metrics in clinical practice have been lacking. In February 2019, the Advanced Technologies & Treatments for Diabetes (ATTD) Congress convened an international panel of physicians, researchers, and individuals with diabetes who are expert in CGM technologies to address this issue. This article summarizes the ATTD consensus recommendations for relevant aspects of CGM data utilization and reporting among the various diabetes populations.
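As a concrete illustration of one such metric, the sketch below computes time in range for a series of glucose readings using the commonly cited 70-180 mg/dL target range; it is a simplified illustration, not the consensus document's full specification, and the data are synthetic.

```python
import numpy as np

def time_in_ranges(glucose_mg_dl, low=70, high=180):
    """Fraction of CGM readings below, within, and above a target range.

    Simplified illustration of one consensus-style metric (time in range);
    the full ATTD recommendations define additional metrics and
    population-specific targets not shown here.
    """
    g = np.asarray(glucose_mg_dl, dtype=float)
    below = np.mean(g < low)
    in_range = np.mean((g >= low) & (g <= high))
    above = np.mean(g > high)
    return below, in_range, above

# Example: one day of 5-minute CGM readings (288 samples), synthetic data.
readings = np.random.default_rng(0).normal(140, 40, size=288)
print(time_in_ranges(readings))
```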

1,776 citations


Journal ArticleDOI
TL;DR: Liu et al. as mentioned in this paper discuss crucial conditions needed to achieve a specific energy higher than 350 Wh kg−1, up to 500 Wh kg−1, for rechargeable Li metal batteries using high-nickel-content lithium nickel manganese cobalt oxides as cathode materials.
Abstract: State-of-the-art lithium (Li)-ion batteries are approaching their specific energy limits yet are challenged by the ever-increasing demand of today’s energy storage and power applications, especially for electric vehicles. Li metal is considered an ultimate anode material for future high-energy rechargeable batteries when combined with existing or emerging high-capacity cathode materials. However, much current research focuses on the battery materials level, and there have been very few accounts of cell design principles. Here we discuss crucial conditions needed to achieve a specific energy higher than 350 Wh kg−1, up to 500 Wh kg−1, for rechargeable Li metal batteries using high-nickel-content lithium nickel manganese cobalt oxides as cathode materials. We also provide an analysis of key factors such as cathode loading, electrolyte amount and Li foil thickness that impact the cell-level cycle life. Furthermore, we identify several important strategies to reduce electrolyte-Li reaction, protect Li surfaces and stabilize anode architectures for long-cycling high-specific-energy cells. Jun Liu and Battery500 Consortium colleagues contemplate the way forward towards high-energy and long-cycling practical batteries.
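The cell-level accounting the authors emphasize can be illustrated with back-of-the-envelope arithmetic: specific energy is total energy divided by total cell mass, so inactive components (Li foil excess, electrolyte, packaging) dilute it. All numbers in the sketch below are hypothetical placeholders, not values from the paper.

```python
# Illustrative cell-level specific-energy arithmetic (all numbers are hypothetical,
# not taken from the paper): specific energy = total energy / total cell mass.

cell_voltage_v = 3.8            # average discharge voltage (assumed)
cathode_capacity_mah_g = 200.0  # high-Ni NMC active-material capacity (assumed)
cathode_mass_g = 10.0           # active cathode mass per cell (assumed)

# Energy delivered, approximated from the cathode active material.
energy_wh = cell_voltage_v * cathode_capacity_mah_g * cathode_mass_g / 1000.0

# Everything else adds mass but no capacity; these assumed values mimic the
# factors the paper highlights (Li foil thickness, electrolyte amount,
# current collectors, separator, packaging).
inactive_mass_g = {
    "Li_foil": 1.5,
    "electrolyte": 6.0,
    "collectors_separator_packaging": 7.0,
}
total_mass_g = cathode_mass_g + sum(inactive_mass_g.values())

specific_energy_wh_kg = energy_wh / (total_mass_g / 1000.0)
print(f"{specific_energy_wh_kg:.0f} Wh/kg")  # ~310 Wh/kg with these assumptions
```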

Journal ArticleDOI
TL;DR: Although some recommendations remain unchanged from the 2007 guideline, the availability of results from new therapeutic trials and epidemiological investigations led to revised recommendations for empiric treatment strategies and additional management decisions.
Abstract: Background: This document provides evidence-based clinical practice guidelines on the management of adult patients with community-acquired pneumonia.Methods: A multidisciplinary panel conducted pra...

Posted Content
TL;DR: This paper showed that decoding strategies alone can dramatically affect the quality of machine text, even when generated from exactly the same neural language model, and proposed Nucleus Sampling, a simple but effective method to draw the best out of neural generation.
Abstract: Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators. The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, using likelihood as a decoding objective leads to text that is bland and strangely repetitive. In this paper, we reveal surprising distributional differences between human text and machine text. In addition, we find that decoding strategies alone can dramatically affect the quality of machine text, even when generated from exactly the same neural language model. Our findings motivate Nucleus Sampling, a simple but effective method to draw the best out of neural generation. By sampling text from the dynamic nucleus of the probability distribution, which allows for diversity while effectively truncating the less reliable tail of the distribution, the resulting text better demonstrates the quality of human text, yielding enhanced diversity without sacrificing fluency and coherence.
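The method lends itself to a compact implementation; the sketch below (not the authors' code) selects the dynamic nucleus of a toy next-token distribution, renormalizes it, and samples from it.

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Nucleus (top-p) sampling: sample from the smallest set of tokens whose
    cumulative probability exceeds p, truncating the unreliable tail and
    renormalizing the remaining probabilities."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]              # tokens from most to least likely
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # smallest prefix with mass > p
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=nucleus_probs)

# Toy next-token distribution over a 6-token vocabulary.
probs = np.array([0.42, 0.25, 0.15, 0.10, 0.05, 0.03])
print([int(nucleus_sample(probs, p=0.9)) for _ in range(10)])
```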

Journal ArticleDOI
TL;DR: Efforts to reverse global trends in freshwater degradation now depend on bridging an immense gap between the aspirations of conservation biologists and the accelerating rate of species endangerment.
Abstract: In the 12 years since Dudgeon et al. (2006) reviewed major pressures on freshwater ecosystems, the biodiversity crisis in the world’s lakes, reservoirs, rivers, streams and wetlands has deepened. While lakes, reservoirs and rivers cover only 2.3% of the Earth’s surface, these ecosystems host at least 9.5% of the Earth’s described animal species. Furthermore, using the World Wide Fund for Nature’s Living Planet Index, freshwater population declines (83% between 1970 and 2014) continue to outpace contemporaneous declines in marine or terrestrial systems. The Anthropocene has brought multiple new and varied threats that disproportionately impact freshwater systems. We document 12 emerging threats to freshwater biodiversity that are either entirely new since 2006 or have since intensified: (i) changing climates; (ii) e-commerce and invasions; (iii) infectious diseases; (iv) harmful algal blooms; (v) expanding hydropower; (vi) emerging contaminants; (vii) engineered nanomaterials; (viii) microplastic pollution; (ix) light and noise; (x) freshwater salinisation; (xi) declining calcium; and (xii) cumulative stressors. Effects are evidenced for amphibians, fishes, invertebrates, microbes, plants, turtles and waterbirds, with potential for ecosystem-level changes through bottom-up and top-down processes. In our highly uncertain future, the net effects of these threats raise serious concerns for freshwater ecosystems. However, we also highlight opportunities for conservation gains as a result of novel management tools (e.g. environmental flows, environmental DNA) and specific conservation-oriented actions (e.g. dam removal, habitat protection policies, managed relocation of species) that have been met with varying levels of success. Moving forward, we advocate hybrid approaches that manage fresh waters as crucial ecosystems for human life support as well as essential hotspots of biodiversity and ecological function. Efforts to reverse global trends in freshwater degradation now depend on bridging an immense gap between the aspirations of conservation biologists and the accelerating rate of species endangerment.

Journal ArticleDOI
08 Aug 2019-Cell
TL;DR: It is found that malignant cells in glioblastoma exist in four main cellular states that recapitulate distinct neural cell types, are influenced by the tumor microenvironment, and exhibit plasticity.


Posted ContentDOI
Konrad J. Karczewski1, Konrad J. Karczewski2, Laurent C. Francioli1, Laurent C. Francioli2, Grace Tiao1, Grace Tiao2, Beryl B. Cummings2, Beryl B. Cummings1, Jessica Alföldi1, Jessica Alföldi2, Qingbo Wang1, Qingbo Wang2, Ryan L. Collins1, Ryan L. Collins2, Kristen M. Laricchia1, Kristen M. Laricchia2, Andrea Ganna1, Andrea Ganna3, Andrea Ganna2, Daniel P. Birnbaum2, Laura D. Gauthier2, Harrison Brand2, Harrison Brand1, Matthew Solomonson1, Matthew Solomonson2, Nicholas A. Watts1, Nicholas A. Watts2, Daniel R. Rhodes4, Moriel Singer-Berk2, Eleanor G. Seaby2, Eleanor G. Seaby1, Jack A. Kosmicki1, Jack A. Kosmicki2, Raymond K. Walters1, Raymond K. Walters2, Katherine Tashman2, Katherine Tashman1, Yossi Farjoun2, Eric Banks2, Timothy Poterba2, Timothy Poterba1, Arcturus Wang1, Arcturus Wang2, Cotton Seed1, Cotton Seed2, Nicola Whiffin5, Nicola Whiffin2, Jessica X. Chong6, Kaitlin E. Samocha7, Emma Pierce-Hoffman2, Zachary Zappala2, Zachary Zappala8, Anne H. O’Donnell-Luria9, Anne H. O’Donnell-Luria1, Anne H. O’Donnell-Luria2, Eric Vallabh Minikel2, Ben Weisburd2, Monkol Lek10, Monkol Lek2, James S. Ware2, James S. Ware5, Christopher Vittal2, Christopher Vittal1, Irina M. Armean2, Irina M. Armean1, Irina M. Armean11, Louis Bergelson2, Kristian Cibulskis2, Kristen M. Connolly2, Miguel Covarrubias2, Stacey Donnelly2, Steven Ferriera2, Stacey Gabriel2, Jeff Gentry2, Namrata Gupta2, Thibault Jeandet2, Diane Kaplan2, Christopher Llanwarne2, Ruchi Munshi2, Sam Novod2, Nikelle Petrillo2, David Roazen2, Valentin Ruano-Rubio2, Andrea Saltzman2, Molly Schleicher2, Jose Soto2, Kathleen Tibbetts2, Charlotte Tolonen2, Gordon Wade2, Michael E. Talkowski1, Michael E. Talkowski2, Benjamin M. Neale2, Benjamin M. Neale1, Mark J. Daly2, Daniel G. MacArthur2, Daniel G. MacArthur1 
30 Jan 2019-bioRxiv
TL;DR: Using an improved human mutation rate model, human protein-coding genes are classified along a spectrum representing tolerance to inactivation, validate this classification using data from model organisms and engineered human cells, and show that it can be used to improve gene discovery power for both common and rare diseases.
Abstract: Summary Genetic variants that inactivate protein-coding genes are a powerful source of information about the phenotypic consequences of gene disruption: genes critical for an organism’s function will be depleted for such variants in natural populations, while non-essential genes will tolerate their accumulation. However, predicted loss-of-function (pLoF) variants are enriched for annotation errors, and tend to be found at extremely low frequencies, so their analysis requires careful variant annotation and very large sample sizes. Here, we describe the aggregation of 125,748 exomes and 15,708 genomes from human sequencing studies into the Genome Aggregation Database (gnomAD). We identify 443,769 high-confidence pLoF variants in this cohort after filtering for sequencing and annotation artifacts. Using an improved model of human mutation, we classify human protein-coding genes along a spectrum representing intolerance to inactivation, validate this classification using data from model organisms and engineered human cells, and show that it can be used to improve gene discovery power for both common and rare diseases.
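The constraint idea can be illustrated with a toy observed-versus-expected calculation; the sketch below uses hypothetical gene names, counts, and cutoff, and a simplified point-estimate ratio, whereas the paper's metric relies on calibrated per-gene expectations from a mutation model and confidence bounds.

```python
# Toy illustration of loss-of-function constraint as an observed/expected ratio.
# The genes, counts, and the 0.35 cutoff below are hypothetical placeholders;
# the paper's analysis uses calibrated expectations and confidence bounds.
genes = {
    # gene: (observed pLoF variants, expected pLoF variants under neutrality)
    "GENE_A": (2, 40.0),    # far fewer pLoFs than expected -> likely intolerant
    "GENE_B": (35, 38.0),   # roughly as many as expected   -> likely tolerant
}

for gene, (observed, expected) in genes.items():
    oe = observed / expected
    label = "constrained" if oe < 0.35 else "unconstrained"
    print(f"{gene}: obs/exp = {oe:.2f} ({label})")
```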

Journal ArticleDOI
TL;DR: An overview of machine learning for fluid mechanics can be found in this article, where the strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation.
Abstract: The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of past history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.
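One family of data-driven techniques covered by such overviews is modal decomposition of flow snapshots; the following sketch (synthetic data, not from the article) performs a proper orthogonal decomposition via the singular value decomposition.

```python
import numpy as np

# Minimal sketch of proper orthogonal decomposition (POD) via the SVD,
# one data-driven technique from the broader toolbox surveyed in the article.
# Synthetic "snapshot matrix": each column is a flattened flow-field sample.
rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 40
low_rank = rng.normal(size=(n_points, 3)) @ rng.normal(size=(3, n_snapshots))
snapshots = low_rank + 0.01 * rng.normal(size=(n_points, n_snapshots))

# Subtract the mean flow, then decompose fluctuations into orthogonal modes.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

energy = s**2 / np.sum(s**2)
print("energy captured by first 3 modes:", energy[:3].sum())  # ~1.0 for this toy data
```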

Journal ArticleDOI
01 May 2019-Nature
TL;DR: A comprehensive assessment of the world’s rivers and their connectivity shows that only 37 per cent of rivers longer than 1,000 kilometres remain free-flowing over their entire length.
Abstract: Free-flowing rivers (FFRs) support diverse, complex and dynamic ecosystems globally, providing important societal and economic services. Infrastructure development threatens the ecosystem processes, biodiversity and services that these rivers support. Here we assess the connectivity status of 12 million kilometres of rivers globally and identify those that remain free-flowing in their entire length. Only 37 per cent of rivers longer than 1,000 kilometres remain free-flowing over their entire length and 23 per cent flow uninterrupted to the ocean. Very long FFRs are largely restricted to remote regions of the Arctic and of the Amazon and Congo basins. In densely populated areas only few very long rivers remain free-flowing, such as the Irrawaddy and Salween. Dams and reservoirs and their up- and downstream propagation of fragmentation and flow regulation are the leading contributors to the loss of river connectivity. By applying a new method to quantify riverine connectivity and map FFRs, we provide a foundation for concerted global and national strategies to maintain or restore them. A comprehensive assessment of the world’s rivers and their connectivity shows that only 37 per cent of rivers longer than 1,000 kilometres remain free-flowing over their entire length.

Journal ArticleDOI
21 Aug 2019-Nature
TL;DR: RNA-sequencing analysis of cells in the human cortex enabled identification of diverse cell types, revealing well-conserved architecture and homologous cell types as well as extensive differences when compared with datasets covering the analogous region of the mouse brain.
Abstract: Elucidating the cellular architecture of the human cerebral cortex is central to understanding our cognitive abilities and susceptibility to disease. Here we used single-nucleus RNA-sequencing analysis to perform a comprehensive study of cell types in the middle temporal gyrus of human cortex. We identified a highly diverse set of excitatory and inhibitory neuron types that are mostly sparse, with excitatory types being less layer-restricted than expected. Comparison to similar mouse cortex single-cell RNA-sequencing datasets revealed a surprisingly well-conserved cellular architecture that enables matching of homologous types and predictions of properties of human cell types. Despite this general conservation, we also found extensive differences between homologous human and mouse cell types, including marked alterations in proportions, laminar distributions, gene expression and morphology. These species-specific features emphasize the importance of directly studying human brain.

Journal ArticleDOI
Eric C. Bellm1, Shrinivas R. Kulkarni2, Matthew J. Graham2, Richard Dekany2, Roger M. H. Smith2, Reed Riddle2, Frank J. Masci2, George Helou2, Thomas A. Prince2, Scott M. Adams2, Cristina Barbarino3, Tom A. Barlow2, James Bauer4, Ron Beck2, Justin Belicki2, Rahul Biswas3, Nadejda Blagorodnova2, Dennis Bodewits4, Bryce Bolin1, V. Brinnel5, Tim Brooke2, Brian D. Bue2, Mattia Bulla3, Rick Burruss2, S. Bradley Cenko6, S. Bradley Cenko4, Chan-Kao Chang7, Andrew J. Connolly1, Michael W. Coughlin2, John Cromer2, Virginia Cunningham4, Kaushik De2, Alex Delacroix2, Vandana Desai2, Dmitry A. Duev2, Gwendolyn Eadie1, Tony L. Farnham4, Michael Feeney2, Ulrich Feindt3, David Flynn2, Anna Franckowiak, Sara Frederick4, Christoffer Fremling2, Avishay Gal-Yam8, Suvi Gezari4, Matteo Giomi5, Daniel A. Goldstein2, V. Zach Golkhou1, Ariel Goobar3, Steven Groom2, Eugean Hacopians2, David Hale2, John Henning2, Anna Y. Q. Ho2, David Hover2, Justin Howell2, Tiara Hung4, Daniela Huppenkothen1, David Imel2, Wing-Huen Ip7, Wing-Huen Ip9, Željko Ivezić1, Edward Jackson2, Lynne Jones1, Mario Juric1, Mansi M. Kasliwal2, Shai Kaspi10, Stephen Kaye2, Michael S. P. Kelley4, Marek Kowalski5, Emily Kramer2, Thomas Kupfer11, Thomas Kupfer2, Walter Landry2, Russ R. Laher2, Chien De Lee7, Hsing Wen Lin12, Hsing Wen Lin7, Zhong-Yi Lin7, Ragnhild Lunnan3, Ashish Mahabal2, Peter H. Mao2, Adam A. Miller13, Adam A. Miller14, Serge Monkewitz2, Patrick J. Murphy2, Chow-Choong Ngeow7, Jakob Nordin5, Peter Nugent15, Peter Nugent16, Eran O. Ofek8, Maria T. Patterson1, Bryan E. Penprase17, Michael Porter2, L. Rauch, Umaa Rebbapragada2, Daniel J. Reiley2, Mickael Rigault18, Hector P. Rodriguez2, Jan van Roestel19, Ben Rusholme2, J. V. Santen, Steve Schulze8, David L. Shupe2, Leo Singer6, Leo Singer4, Maayane T. Soumagnac8, Robert Stein, Jason Surace2, Jesper Sollerman3, Paula Szkody1, Francesco Taddia3, Scott Terek2, Angela Van Sistine20, Sjoert van Velzen4, W. Thomas Vestrand21, Richard Walters2, Charlotte Ward4, Quanzhi Ye2, Po-Chieh Yu7, Lin Yan2, Jeffry Zolkower2 
TL;DR: The Zwicky Transient Facility (ZTF) as mentioned in this paper is a new optical time-domain survey that uses the Palomar 48 inch Schmidt telescope, which provides a 47 deg^2 field of view and 8 s readout time, yielding more than an order of magnitude improvement in survey speed relative to its predecessor survey.
Abstract: The Zwicky Transient Facility (ZTF) is a new optical time-domain survey that uses the Palomar 48 inch Schmidt telescope. A custom-built wide-field camera provides a 47 deg^2 field of view and 8 s readout time, yielding more than an order of magnitude improvement in survey speed relative to its predecessor survey, the Palomar Transient Factory. We describe the design and implementation of the camera and observing system. The ZTF data system at the Infrared Processing and Analysis Center provides near-real-time reduction to identify moving and varying objects. We outline the analysis pipelines, data products, and associated archive. Finally, we present on-sky performance analysis and first scientific results from commissioning and the early survey. ZTF's public alert stream will serve as a useful precursor for that of the Large Synoptic Survey Telescope.

Posted Content
TL;DR: BART as mentioned in this paper is a denoising autoencoder for pretraining sequence-to-sequence models, which is trained by corrupting text with an arbitrary noising function, and then learning a model to reconstruct the original text.
Abstract: We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.
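The in-filling noise is easy to sketch; the toy function below (not the authors' implementation, with simplified parameters and no handling of edge cases) replaces whole spans of tokens with a single mask token, which is the corruption the model learns to undo.

```python
import numpy as np

def text_infilling(tokens, mask_token="<mask>", mask_prob=0.15, lam=3, rng=None):
    """Toy version of BART-style text infilling: spans of tokens are replaced
    by a single mask token. Simplified sketch: the masking rate, span-length
    distribution and edge cases differ from the authors' implementation."""
    rng = rng or np.random.default_rng(0)
    out, i = [], 0
    while i < len(tokens):
        if rng.random() < mask_prob:
            span = max(1, rng.poisson(lam))  # length of the span to hide
            out.append(mask_token)           # the whole span becomes ONE mask token
            i += span
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = "the quick brown fox jumps over the lazy dog".split()
print(text_infilling(tokens))
# A seq2seq model is then trained to reconstruct the original token sequence.
```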

Journal ArticleDOI
01 Aug 2019
TL;DR: The results of new trials continue to help us understand the role of these novel agents and which patients are more likely to benefit; ICIs are now part of the first-line NSCLC treatment armamentarium as monotherapy, combined with chemotherapy, or after definitive chemoradiotherapy in patients with stage III unresectable NSCLC.
Abstract: Lung cancer remains the leading cause of cancer deaths in the United States. In the past decade, significant advances have been made in the science of non-small cell lung cancer (NSCLC). Screening has been introduced with the goal of early detection. The National Lung Screening Trial found a lung cancer mortality benefit of 20% and a 6.7% decrease in all-cause mortality with the use of low-dose chest computed tomography in high-risk individuals. The treatment of lung cancer has also evolved with the introduction of several lines of tyrosine kinase inhibitors in patients with EGFR, ALK, ROS1, and NTRK mutations. Similarly, immune checkpoint inhibitors (ICIs) have dramatically changed the landscape of NSCLC treatment. Furthermore, the results of new trials continue to help us understand the role of these novel agents and which patients are more likely to benefit; ICIs are now part of the first-line NSCLC treatment armamentarium as monotherapy, combined with chemotherapy, or after definitive chemoradiotherapy in patients with stage III unresectable NSCLC. Expression of programmed cell death protein-ligand 1 in malignant cells has been studied as a potential biomarker for response to ICIs. However, important drawbacks exist that limit its discriminatory potential. Identification of accurate predictive biomarkers beyond programmed cell death protein-ligand 1 expression remains essential to select the most appropriate candidates for ICI therapy. Many questions remain unanswered regarding the proper sequence and combinations of these new agents; however, the field is moving rapidly, and the overall direction is optimistic.

Journal ArticleDOI
Željko Ivezić1, Steven M. Kahn2, J. Anthony Tyson3, Bob Abel4  +332 moreInstitutions (55)
TL;DR: The Large Synoptic Survey Telescope (LSST) as discussed by the authors is a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachon in northern Chile.
Abstract: We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the solar system, exploring the transient optical sky, and mapping the Milky Way. LSST will be a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachon in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg^2 field of view, a 3.2-gigapixel camera, and six filters (ugrizy) covering the wavelength range 320–1050 nm. The project is in the construction phase and will begin regular survey operations by 2022. About 90% of the observing time will be devoted to a deep-wide-fast survey mode that will uniformly observe an 18,000 deg^2 region about 800 times (summed over all six bands) during the anticipated 10 yr of operations and will yield a co-added map to r ~ 27.5. These data will result in databases including about 32 trillion observations of 20 billion galaxies and a similar number of stars, and they will serve the majority of the primary science programs. The remaining 10% of the observing time will be allocated to special projects such as Very Deep and Very Fast time domain surveys, whose details are currently under discussion. We illustrate how the LSST science drivers led to these choices of system parameters, and we describe the expected data products and their characteristics.

Journal ArticleDOI
TL;DR: The data show independent associations between short-term exposure to PM10 and PM2.5 and daily all-cause, cardiovascular, and respiratory mortality in more than 600 cities across the globe, and reinforce the evidence of a link between mortality and PM concentration established in regional and local studies.
Abstract: BACKGROUND: The systematic evaluation of the results of time-series studies of air pollution is challenged by differences in model specification and publication bias.METHODS: We evaluated the assoc ...

Journal ArticleDOI
25 Feb 2019-Nature
TL;DR: Results suggest that the origin of the observed effects is interlayer excitons trapped in a smooth moiré potential with inherited valley-contrasting physics, presenting opportunities to control two-dimensional moiré optics through variation of the twist angle.
Abstract: The formation of moiré patterns in crystalline solids can be used to manipulate their electronic properties, which are fundamentally influenced by periodic potential landscapes. In two-dimensional materials, a moiré pattern with a superlattice potential can be formed by vertically stacking two layered materials with a twist and/or a difference in lattice constant. This approach has led to electronic phenomena including the fractal quantum Hall effect1–3, tunable Mott insulators4,5 and unconventional superconductivity6. In addition, theory predicts that notable effects on optical excitations could result from a moiré potential in two-dimensional valley semiconductors7–9, but these signatures have not been detected experimentally. Here we report experimental evidence of interlayer valley excitons trapped in a moiré potential in molybdenum diselenide (MoSe2)/tungsten diselenide (WSe2) heterobilayers. At low temperatures, we observe photoluminescence close to the free interlayer exciton energy but with linewidths over one hundred times narrower (around 100 microelectronvolts). The emitter g-factors are homogeneous across the same sample and take only two values, −15.9 and 6.7, in samples with approximate twist angles of 60 degrees and 0 degrees, respectively. The g-factors match those of the free interlayer exciton, which is determined by one of two possible valley-pairing configurations. At twist angles of approximately 20 degrees the emitters become two orders of magnitude dimmer; however, they possess the same g-factor as the heterobilayer at a twist angle of approximately 60 degrees. This is consistent with the umklapp recombination of interlayer excitons near the commensurate 21.8-degree twist angle7. The emitters exhibit strong circular polarization of the same helicity for a given twist angle, which suggests that the trapping potential retains three-fold rotational symmetry. Together with a characteristic dependence on power and excitation energy, these results suggest that the origin of the observed effects is interlayer excitons trapped in a smooth moiré potential with inherited valley-contrasting physics. This work presents opportunities to control two-dimensional moiré optics through variation of the twist angle. The trapping of interlayer valley excitons in a moiré potential formed by a molybdenum diselenide/tungsten diselenide heterobilayer with twist angle control is reported.

Journal ArticleDOI
TL;DR: Cleavage Under Targets and Tagmentation (CUT&Tag), an enzyme-tethering strategy that provides efficient high-resolution sequencing libraries for profiling diverse chromatin components, is described.
Abstract: Many chromatin features play critical roles in regulating gene expression. A complete understanding of gene regulation will require the mapping of specific chromatin features in small samples of cells at high resolution. Here we describe Cleavage Under Targets and Tagmentation (CUT&Tag), an enzyme-tethering strategy that provides efficient high-resolution sequencing libraries for profiling diverse chromatin components. In CUT&Tag, a chromatin protein is bound in situ by a specific antibody, which then tethers a protein A-Tn5 transposase fusion protein. Activation of the transposase efficiently generates fragment libraries with high resolution and exceptionally low background. All steps from live cells to sequencing-ready libraries can be performed in a single tube on the benchtop or a microwell in a high-throughput pipeline, and the entire procedure can be performed in one day. We demonstrate the utility of CUT&Tag by profiling histone modifications, RNA Polymerase II and transcription factors on low cell numbers and single cells.

Journal ArticleDOI
02 Apr 2019-JAMA
TL;DR: Among patients with AF, the strategy of catheter ablation, compared with medical therapy, did not significantly reduce the primary composite end point of death, disabling stroke, serious bleeding, or cardiac arrest; lower-than-expected event rates and treatment crossovers should be considered in interpreting the results of the trial.
Abstract: Importance Catheter ablation is effective in restoring sinus rhythm in atrial fibrillation (AF), but its effects on long-term mortality and stroke risk are uncertain. Objective To determine whether catheter ablation is more effective than conventional medical therapy for improving outcomes in AF. Design, Setting, and Participants The Catheter Ablation vs Antiarrhythmic Drug Therapy for Atrial Fibrillation trial is an investigator-initiated, open-label, multicenter, randomized trial involving 126 centers in 10 countries. A total of 2204 symptomatic patients with AF aged 65 years and older or younger than 65 years with 1 or more risk factors for stroke were enrolled from November 2009 to April 2016, with follow-up through December 31, 2017. Interventions The catheter ablation group (n = 1108) underwent pulmonary vein isolation, with additional ablative procedures at the discretion of site investigators. The drug therapy group (n = 1096) received standard rhythm and/or rate control drugs guided by contemporaneous guidelines. Main Outcomes and Measures The primary end point was a composite of death, disabling stroke, serious bleeding, or cardiac arrest. Among 13 prespecified secondary end points, 3 are included in this report: all-cause mortality; total mortality or cardiovascular hospitalization; and AF recurrence. Results Of the 2204 patients randomized (median age, 68 years; 37.2% female; 42.9% had paroxysmal AF and 57.1% had persistent AF), 89.3% completed the trial. Of the patients assigned to catheter ablation, 1006 (90.8%) underwent the procedure. Of the patients assigned to drug therapy, 301 (27.5%) ultimately received catheter ablation. In the intention-to-treat analysis, over a median follow-up of 48.5 months, the primary end point occurred in 8.0% (n = 89) of patients in the ablation group vs 9.2% (n = 101) of patients in the drug therapy group (hazard ratio [HR], 0.86 [95% CI, 0.65-1.15]; P = .30). Among the secondary end points, outcomes in the ablation group vs the drug therapy group, respectively, were 5.2% vs 6.1% for all-cause mortality (HR, 0.85 [95% CI, 0.60-1.21]; P = .38), 51.7% vs 58.1% for death or cardiovascular hospitalization (HR, 0.83 [95% CI, 0.74-0.93]; P = .001), and 49.9% vs 69.5% for AF recurrence (HR, 0.52 [95% CI, 0.45-0.60]). Conclusions and Relevance Among patients with AF, the strategy of catheter ablation, compared with medical therapy, did not significantly reduce the primary composite end point of death, disabling stroke, serious bleeding, or cardiac arrest. However, the estimated treatment effect of catheter ablation was affected by lower-than-expected event rates and treatment crossovers, which should be considered in interpreting the results of the trial. Trial Registration ClinicalTrials.gov Identifier: NCT00911508

Journal ArticleDOI
TL;DR: The 2019 report of The Lancet Countdown on health and climate change: ensuring that the health of a child born today is not defined by a changing climate.

Proceedings ArticleDOI
02 May 2019
TL;DR: This work proposes 18 generally applicable design guidelines for human-AI interaction that can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.
Abstract: Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.