
Showing papers by "University of Virginia" published in 2018


Journal ArticleDOI
Clotilde Théry1, Kenneth W. Witwer2, Elena Aikawa3, María José Alcaraz4 +414 more; Institutions (209)
TL;DR: The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities, and a checklist is provided with summaries of key points.
Abstract: The last decade has seen a sharp increase in the number of scientific publications describing physiological and pathological functions of extracellular vesicles (EVs), a collective term covering various subtypes of cell-released, membranous structures, called exosomes, microvesicles, microparticles, ectosomes, oncosomes, apoptotic bodies, and many other names. However, specific issues arise when working with these entities, whose size and amount often make them difficult to obtain as relatively pure preparations, and to characterize properly. The International Society for Extracellular Vesicles (ISEV) proposed Minimal Information for Studies of Extracellular Vesicles (“MISEV”) guidelines for the field in 2014. We now update these “MISEV2014” guidelines based on evolution of the collective knowledge in the last four years. An important point to consider is that ascribing a specific function to EVs in general, or to subtypes of EVs, requires reporting of specific information beyond mere description of function in a crude, potentially contaminated, and heterogeneous preparation. For example, claims that exosomes are endowed with exquisite and specific activities remain difficult to support experimentally, given our still limited knowledge of their specific molecular machineries of biogenesis and release, as compared with other biophysically similar EVs. The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities. Finally, a checklist is provided with summaries of key points.

5,988 citations


Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo sampler (The Joker) is used to perform a search for companions to 96,231 red-giant stars observed in the APOGEE survey (DR14) with ≥ 3 spectroscopic epochs.
Abstract: Multi-epoch radial velocity measurements of stars can be used to identify stellar, sub-stellar, and planetary-mass companions. Even a small number of observation epochs can be informative about companions, though there can be multiple qualitatively different orbital solutions that fit the data. We have custom-built a Monte Carlo sampler (The Joker) that delivers reliable (and often highly multi-modal) posterior samplings for companion orbital parameters given sparse radial-velocity data. Here we use The Joker to perform a search for companions to 96,231 red-giant stars observed in the APOGEE survey (DR14) with ≥ 3 spectroscopic epochs. We select stars with probable companions by making a cut on our posterior belief about the amplitude of the stellar radial-velocity variation induced by the orbit. We provide (1) a catalog of 320 companions for which the stellar companion properties can be confidently determined, (2) a catalog of 4,898 stars that likely have companions, but would require more observations to uniquely determine the orbital properties, and (3) posterior samplings for the full orbital parameters for all stars in the parent sample. We show the characteristics of systems with confidently determined companion properties and highlight interesting systems with candidate compact object companions.

2,564 citations
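The sampler described above works by brute force: draw many orbital-parameter samples from the prior, evaluate the likelihood of the sparse radial-velocity data under each, and keep samples by rejection, which naturally preserves multi-modal period solutions. A minimal Python sketch of that idea, using made-up data and a circular-orbit model (The Joker itself handles full Keplerian orbits and is far more careful about priors and numerics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse observations: 3 spectroscopic epochs (days) with velocities (km/s).
t = np.array([0.0, 41.3, 97.8])
rv = np.array([1.2, -3.4, 2.9])
err = np.array([0.5, 0.5, 0.5])

def rv_model(t, period, amp, phase, v0):
    """Circular-orbit radial-velocity curve (a simplification of the full Keplerian model)."""
    return v0 + amp * np.sin(2 * np.pi * t / period + phase)

# Prior samples: log-uniform periods, uniform amplitude and phase, Gaussian systemic velocity.
n = 200_000
period = 10 ** rng.uniform(0.5, 3.5, n)        # ~3 to ~3000 days
amp = rng.uniform(0.0, 20.0, n)                # km/s
phase = rng.uniform(0.0, 2 * np.pi, n)
v0 = rng.normal(0.0, 5.0, n)

# Gaussian log-likelihood of the sparse data under each prior sample.
pred = rv_model(t[None, :], period[:, None], amp[:, None], phase[:, None], v0[:, None])
loglike = -0.5 * np.sum(((rv - pred) / err) ** 2, axis=1)

# Rejection step: keep samples in proportion to their likelihood.
keep = rng.uniform(size=n) < np.exp(loglike - loglike.max())
post_period, post_amp = period[keep], amp[keep]

# With only 3 epochs the surviving periods are typically multi-modal, which is why
# sparse-RV posteriors need this kind of brute-force sampling; a cut on the inferred
# velocity amplitude then selects stars with probable companions.
print(f"{keep.sum()} surviving samples; fraction with amplitude > 1 km/s: {np.mean(post_amp > 1.0):.2f}")
```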


Journal ArticleDOI
TL;DR: Among patients with heart failure and moderate-to-severe or severe secondary mitral regurgitation who remained symptomatic despite the use of maximal doses of guideline-directed medical therapy, transcatheter mitral-valve repair resulted in a lower rate of hospitalization for heart failure and lower all-cause mortality within 24 months of follow-up than medical therapy alone.
Abstract: Background: Among patients with heart failure who have mitral regurgitation due to left ventricular dysfunction, the prognosis is poor. Transcatheter mitral-valve repair may improve their clinical outcomes. Methods: At 78 sites in the United States and Canada, we enrolled patients with heart failure and moderate-to-severe or severe secondary mitral regurgitation who remained symptomatic despite the use of maximal doses of guideline-directed medical therapy. Patients were randomly assigned to transcatheter mitral-valve repair plus medical therapy (device group) or medical therapy alone (control group). The primary effectiveness end point was all hospitalizations for heart failure within 24 months of follow-up. The primary safety end point was freedom from device-related complications at 12 months; the rate for this end point was compared with a prespecified objective performance goal of 88.0%. Results: Of the 614 patients who were enrolled in the trial, 302 were assigned to the device group and 312 to the control group.

1,758 citations


Journal ArticleDOI
David Capper1, David Capper2, David Capper3, David T.W. Jones2 +168 more; Institutions (54)
22 Mar 2018-Nature
TL;DR: This work presents a comprehensive approach for the DNA methylation-based classification of central nervous system tumours across all entities and age groups, and shows that the availability of this method may have a substantial impact on diagnostic precision compared to standard methods.
Abstract: Accurate pathological diagnosis is crucial for optimal management of patients with cancer. For the approximately 100 known tumour types of the central nervous system, standardization of the diagnostic process has been shown to be particularly challenging-with substantial inter-observer variability in the histopathological diagnosis of many tumour types. Here we present a comprehensive approach for the DNA methylation-based classification of central nervous system tumours across all entities and age groups, and demonstrate its application in a routine diagnostic setting. We show that the availability of this method may have a substantial impact on diagnostic precision compared to standard methods, resulting in a change of diagnosis in up to 12% of prospective cases. For broader accessibility, we have designed a free online classifier tool, the use of which does not require any additional onsite data processing. Our results provide a blueprint for the generation of machine-learning-based tumour classifiers across other cancer entities, with the potential to fundamentally transform tumour pathology.

1,620 citations
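The workflow described (train a classifier on genome-wide methylation profiles, then report a class only when the score is confident enough for diagnostic use) can be illustrated with a generic random-forest sketch; the published classifier is a random forest with an additional score-calibration step, but everything below, including the synthetic beta values and the 0.5 confidence cutoff, is illustrative rather than the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in data: methylation beta values (0..1) for 300 samples x 2000 CpG probes,
# with integer labels standing in for tumour classes (real inputs come from methylation arrays).
X = rng.beta(2, 5, size=(300, 2000))
y = rng.integers(0, 4, size=300)
X[y == 2, :50] = np.clip(X[y == 2, :50] + 0.3, 0, 1)   # give one class a weak signature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)

# Class probabilities can be thresholded so low-confidence cases are reported as
# "no match" instead of being forced into a class, mirroring a diagnostic setting.
proba = clf.predict_proba(X_te)
confident = proba.max(axis=1) >= 0.5
preds = clf.predict(X_te)
print(f"overall accuracy: {(preds == y_te).mean():.2f}; "
      f"confident calls (max probability >= 0.5): {confident.sum()}/{len(y_te)}")
```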


Journal ArticleDOI
Daniel J. Benjamin1, James O. Berger2, Magnus Johannesson3, Magnus Johannesson1, Brian A. Nosek4, Brian A. Nosek5, Eric-Jan Wagenmakers6, Richard A. Berk7, Kenneth A. Bollen8, Björn Brembs9, Lawrence D. Brown7, Colin F. Camerer10, David Cesarini11, David Cesarini12, Christopher D. Chambers13, Merlise A. Clyde2, Thomas D. Cook14, Thomas D. Cook15, Paul De Boeck16, Zoltan Dienes17, Anna Dreber3, Kenny Easwaran18, Charles Efferson19, Ernst Fehr20, Fiona Fidler21, Andy P. Field17, Malcolm R. Forster22, Edward I. George7, Richard Gonzalez23, Steven N. Goodman24, Edwin J. Green25, Donald P. Green26, Anthony G. Greenwald27, Jarrod D. Hadfield28, Larry V. Hedges14, Leonhard Held20, Teck-Hua Ho29, Herbert Hoijtink30, Daniel J. Hruschka31, Kosuke Imai32, Guido W. Imbens24, John P. A. Ioannidis24, Minjeong Jeon33, James Holland Jones34, Michael Kirchler35, David Laibson36, John A. List37, Roderick J. A. Little23, Arthur Lupia23, Edouard Machery38, Scott E. Maxwell39, Michael A. McCarthy21, Don A. Moore40, Stephen L. Morgan41, Marcus R. Munafò42, Shinichi Nakagawa43, Brendan Nyhan44, Timothy H. Parker45, Luis R. Pericchi46, Marco Perugini47, Jeffrey N. Rouder48, Judith Rousseau49, Victoria Savalei50, Felix D. Schönbrodt51, Thomas Sellke52, Betsy Sinclair53, Dustin Tingley36, Trisha Van Zandt16, Simine Vazire54, Duncan J. Watts55, Christopher Winship36, Robert L. Wolpert2, Yu Xie32, Cristobal Young24, Jonathan Zinman44, Valen E. Johnson18, Valen E. Johnson1 
University of Southern California1, Duke University2, Stockholm School of Economics3, Center for Open Science4, University of Virginia5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, New York University11, Research Institute of Industrial Economics12, Cardiff University13, Northwestern University14, Mathematica Policy Research15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false positives among claimed discoveries.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.

1,586 citations
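The proposal itself is a one-line change to common practice; for intuition, the two-sided standard-normal critical value moves from about 1.96 to about 2.81. A quick check (scipy assumed available):

```python
from scipy.stats import norm

for alpha in (0.05, 0.005):
    z = norm.ppf(1 - alpha / 2)   # two-sided critical value of the standard normal
    print(f"p < {alpha}: |z| must exceed {z:.2f}")

# Output:
# p < 0.05: |z| must exceed 1.96
# p < 0.005: |z| must exceed 2.81
```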


Journal ArticleDOI
TL;DR: It is found that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art.
Abstract: Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems-patient classification, fundamental biological processes and treatment of patients-and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.

1,491 citations


Journal ArticleDOI
Corinne Le Quéré1, Robbie M. Andrew, Pierre Friedlingstein2, Stephen Sitch2, Judith Hauck3, Julia Pongratz4, Julia Pongratz5, Penelope A. Pickers1, Jan Ivar Korsbakken, Glen P. Peters, Josep G. Canadell6, Almut Arneth7, Vivek K. Arora, Leticia Barbero8, Leticia Barbero9, Ana Bastos5, Laurent Bopp10, Frédéric Chevallier11, Louise Chini12, Philippe Ciais11, Scott C. Doney13, Thanos Gkritzalis14, Daniel S. Goll11, Ian Harris1, Vanessa Haverd6, Forrest M. Hoffman15, Mario Hoppema3, Richard A. Houghton16, George C. Hurtt12, Tatiana Ilyina4, Atul K. Jain17, Truls Johannessen18, Chris D. Jones19, Etsushi Kato, Ralph F. Keeling20, Kees Klein Goldewijk21, Kees Klein Goldewijk22, Peter Landschützer4, Nathalie Lefèvre23, Sebastian Lienert24, Zhu Liu25, Zhu Liu1, Danica Lombardozzi26, Nicolas Metzl23, David R. Munro27, Julia E. M. S. Nabel4, Shin-Ichiro Nakaoka28, Craig Neill29, Craig Neill30, Are Olsen18, T. Ono, Prabir K. Patra31, Anna Peregon11, Wouter Peters32, Wouter Peters33, Philippe Peylin11, Benjamin Pfeil18, Benjamin Pfeil34, Denis Pierrot8, Denis Pierrot9, Benjamin Poulter35, Gregor Rehder36, Laure Resplandy37, Eddy Robertson19, Matthias Rocher11, Christian Rödenbeck4, Ute Schuster2, Jörg Schwinger34, Roland Séférian11, Ingunn Skjelvan34, Tobias Steinhoff38, Adrienne J. Sutton39, Pieter P. Tans39, Hanqin Tian40, Bronte Tilbrook30, Bronte Tilbrook29, Francesco N. Tubiello41, Ingrid T. van der Laan-Luijkx33, Guido R. van der Werf42, Nicolas Viovy11, Anthony P. Walker15, Andy Wiltshire19, Rebecca Wright1, Sönke Zaehle4, Bo Zheng11 
University of East Anglia1, University of Exeter2, Alfred Wegener Institute for Polar and Marine Research3, Max Planck Society4, Ludwig Maximilian University of Munich5, Commonwealth Scientific and Industrial Research Organisation6, Karlsruhe Institute of Technology7, Cooperative Institute for Marine and Atmospheric Studies8, Atlantic Oceanographic and Meteorological Laboratory9, École Normale Supérieure10, Centre national de la recherche scientifique11, University of Maryland, College Park12, University of Virginia13, Flanders Marine Institute14, Oak Ridge National Laboratory15, Woods Hole Research Center16, University of Illinois at Urbana–Champaign17, Geophysical Institute, University of Bergen18, Met Office19, University of California, San Diego20, Utrecht University21, Netherlands Environmental Assessment Agency22, University of Paris23, Oeschger Centre for Climate Change Research24, Tsinghua University25, National Center for Atmospheric Research26, Institute of Arctic and Alpine Research27, National Institute for Environmental Studies28, Cooperative Research Centre29, Hobart Corporation30, Japan Agency for Marine-Earth Science and Technology31, University of Groningen32, Wageningen University and Research Centre33, Bjerknes Centre for Climate Research34, Goddard Space Flight Center35, Leibniz Institute for Baltic Sea Research36, Princeton University37, Leibniz Institute of Marine Sciences38, National Oceanic and Atmospheric Administration39, Auburn University40, Food and Agriculture Organization41, VU University Amsterdam42
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use and land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2008–2017), EFF was 9.4±0.5 GtC yr−1, ELUC 1.5±0.7 GtC yr−1, GATM 4.7±0.02 GtC yr−1, SOCEAN 2.4±0.5 GtC yr−1, and SLAND 3.2±0.8 GtC yr−1, with a budget imbalance BIM of 0.5 GtC yr−1 indicating overestimated emissions and/or underestimated sinks. For the year 2017 alone, the growth in EFF was about 1.6% and emissions increased to 9.9±0.5 GtC yr−1. Also for 2017, ELUC was 1.4±0.7 GtC yr−1, GATM was 4.6±0.2 GtC yr−1, SOCEAN was 2.5±0.5 GtC yr−1, and SLAND was 3.8±0.8 GtC yr−1, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 405.0±0.1 ppm averaged over 2017. For 2018, preliminary data for the first 6–9 months indicate a renewed growth in EFF of +2.7% (range of 1.8% to 3.7%) based on national emission projections for China, the US, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. The analysis presented here shows that the mean and trend in the five components of the global carbon budget are consistently estimated over the period of 1959–2017, but discrepancies of up to 1 GtC yr−1 persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations show (1) no consensus in the mean and trend in land-use change emissions, (2) a persistent low agreement among the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models, originating outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding the global carbon cycle compared with previous publications of this data set (Le Quere et al., 2018, 2016, 2015a, b, 2014, 2013). All results presented here can be downloaded from https://doi.org/10.18160/GCP-2018.

1,458 citations
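The five components quoted above are tied together by a simple budget identity: total emissions minus the atmospheric growth and the ocean and land sinks leaves the budget imbalance. A rounding check against the 2008–2017 decadal means from the abstract:

```python
# Decadal means for 2008-2017 quoted in the abstract (GtC per year).
E_FF, E_LUC = 9.4, 1.5                   # fossil and land-use-change emissions
G_ATM, S_OCEAN, S_LAND = 4.7, 2.4, 3.2   # atmospheric growth, ocean sink, land sink

# Budget imbalance: emissions not accounted for by the measured growth and modelled sinks.
B_IM = (E_FF + E_LUC) - (G_ATM + S_OCEAN + S_LAND)
print(f"B_IM ~ {B_IM:.1f} GtC/yr")
# Gives ~0.6 from these rounded components; the paper reports 0.5 using unrounded values.
```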


Journal ArticleDOI
22 Jun 2018-Science
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.

1,357 citations


Posted ContentDOI
Spyridon Bakas1, Mauricio Reyes, Andras Jakab2, Stefan Bauer3 +435 more; Institutions (111)
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.

1,165 citations
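Segmentation entries in BraTS are scored against expert annotations primarily with overlap metrics such as the Dice coefficient (boundary distances are also used). A minimal Dice computation over binary masks, with toy arrays standing in for one predicted and one reference tumour sub-region label:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1 = voxel in the sub-region)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2-D example standing in for one slice of, e.g., an enhancing-tumour label map.
truth = np.zeros((64, 64), dtype=int)
truth[20:40, 20:40] = 1
pred = np.zeros_like(truth)
pred[24:44, 22:42] = 1          # overlapping but shifted prediction

print(f"Dice = {dice(pred, truth):.3f}")   # ~0.72 for this toy overlap
```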


Journal ArticleDOI
TL;DR: This guideline update used an existing systematic evidence review of the CRC screening literature and microsimulation modeling analyses, including a new evaluation of the age to begin screening by race and sex and additional modeling that incorporates changes in US CRC incidence.
Abstract: In the United States, colorectal cancer (CRC) is the fourth most common cancer diagnosed among adults and the second leading cause of death from cancer. For this guideline update, the American Cancer Society (ACS) used an existing systematic evidence review of the CRC screening literature and microsimulation modeling analyses, including a new evaluation of the age to begin screening by race and sex and additional modeling that incorporates changes in US CRC incidence. Screening with any one of multiple options is associated with a significant reduction in CRC incidence through the detection and removal of adenomatous polyps and other precancerous lesions and with a reduction in mortality through incidence reduction and early detection of CRC. Results from modeling analyses identified efficient and model-recommendable strategies that started screening at age 45 years. The ACS Guideline Development Group applied the Grades of Recommendations, Assessment, Development, and Evaluation (GRADE) criteria in developing and rating the recommendations. The ACS recommends that adults aged 45 years and older with an average risk of CRC undergo regular screening with either a high-sensitivity stool-based test or a structural (visual) examination, depending on patient preference and test availability. As a part of the screening process, all positive results on noncolonoscopy screening tests should be followed up with timely colonoscopy. The recommendation to begin screening at age 45 years is a qualified recommendation. The recommendation for regular screening in adults aged 50 years and older is a strong recommendation. The ACS recommends (qualified recommendations) that: 1) average-risk adults in good health with a life expectancy of more than 10 years continue CRC screening through the age of 75 years; 2) clinicians individualize CRC screening decisions for individuals aged 76 through 85 years based on patient preferences, life expectancy, health status, and prior screening history; and 3) clinicians discourage individuals older than 85 years from continuing CRC screening. The options for CRC screening are: fecal immunochemical test annually; high-sensitivity, guaiac-based fecal occult blood test annually; multitarget stool DNA test every 3 years; colonoscopy every 10 years; computed tomography colonography every 5 years; and flexible sigmoidoscopy every 5 years. CA Cancer J Clin 2018;68:250-281. © 2018 American Cancer Society.

1,153 citations


Journal ArticleDOI
TL;DR: A brief history of gene-editing tools is presented and the wide range of CRISPR-based genome-targeting tools are described, to conclude with future directions and the broader impact ofCRISPR technologies.
Abstract: CRISPR is becoming an indispensable tool in biological research. Once known as the bacterial immune system against invading viruses, the programmable capacity of the Cas9 enzyme is now revolutionizing diverse fields of medical research, biotechnology, and agriculture. CRISPR-Cas9 is no longer just a gene-editing tool; the application areas of catalytically impaired inactive Cas9, including gene regulation, epigenetic editing, chromatin engineering, and imaging, now exceed the gene-editing functionality of WT Cas9. Here, we will present a brief history of gene-editing tools and describe the wide range of CRISPR-based genome-targeting tools. We will conclude with future directions and the broader impact of CRISPR technologies.

Journal ArticleDOI
TL;DR: Widespread adoption of preregistration will increase distinctiveness between hypothesis generation and hypothesis testing and will improve the credibility of research findings.
Abstract: Progress in science relies in part on generating hypotheses with existing observations and testing hypotheses with new observations. This distinction between postdiction and prediction is appreciated conceptually but is not respected in practice. Mistaking generation of postdictions with testing of predictions reduces the credibility of research findings. However, ordinary biases in human reasoning, such as hindsight bias, make it hard to avoid this mistake. An effective solution is to define the research questions and analysis plan before observing the research outcomes—a process called preregistration. Preregistration distinguishes analyses and outcomes that result from predictions from those that result from postdictions. A variety of practical strategies are available to make the best possible use of preregistration in circumstances that fall short of the ideal application, such as when the data are preexisting. Services are now available for preregistration across all disciplines, facilitating a rapid increase in the practice. Widespread adoption of preregistration will increase distinctiveness between hypothesis generation and hypothesis testing and will improve the credibility of research findings.

Proceedings ArticleDOI
01 Jan 2018
Abstract: Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
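The two squeezers named in the abstract are ordinary image transforms, and detection is just a comparison of model outputs before and after squeezing. A sketch under the assumption that `model(x)` returns a softmax probability vector for an image with pixel values in [0, 1]; the detection threshold would be tuned on held-out benign data rather than the placeholder used here:

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x: np.ndarray, bits: int) -> np.ndarray:
    """Squeeze pixel values in [0, 1] down to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def spatial_smooth(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Local median smoothing, the second squeezer described in the paper."""
    return median_filter(x, size=size)

def looks_adversarial(model, x: np.ndarray, threshold: float = 1.0, bits: int = 4) -> bool:
    """Flag x when predictions on the original and squeezed inputs differ too much (L1 distance)."""
    p_orig = model(x)
    scores = [np.abs(p_orig - model(reduce_bit_depth(x, bits))).sum(),
              np.abs(p_orig - model(spatial_smooth(x))).sum()]
    return max(scores) > threshold   # threshold is a placeholder, not a recommended value

# Dummy "model" for demonstration only: mean brightness turned into a 2-class distribution.
dummy_model = lambda img: np.array([img.mean(), 1 - img.mean()])
print(looks_adversarial(dummy_model, np.random.rand(28, 28)))
```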

Journal ArticleDOI
Bela Abolfathi1, D. S. Aguado2, Gabriela Aguilar3, Carlos Allende Prieto2 +361 more; Institutions (94)
TL;DR: SDSS-IV is the fourth generation of the Sloan Digital Sky Survey and has been in operation since 2014 July; as discussed by the authors, this paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014-2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.

Proceedings ArticleDOI
27 May 2018
TL;DR: DeepTest is a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes; it systematically explores different parts of the DNN logic by generating test inputs that maximize the number of activated neurons.
Abstract: Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors like camera, LiDAR, etc., can drive without any human intervention. Most major manufacturers including Tesla, GM, Ford, BMW, and Waymo/Google are working on building and testing different types of autonomous vehicles. The lawmakers of several US states including California, Texas, and New York have passed new legislation to fast-track the process of testing and deployment of autonomous vehicles on their roads. However, despite their spectacular progress, DNNs, just like traditional software, often demonstrate incorrect or unexpected corner-case behaviors that can lead to potentially fatal collisions. Several such real-world accidents involving autonomous cars have already happened including one which resulted in a fatality. Most existing testing techniques for DNN-driven vehicles are heavily dependent on the manual collection of test data under different driving conditions which become prohibitively expensive as the number of test conditions increases. In this paper, we design, implement, and evaluate DeepTest, a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes. First, our tool is designed to automatically generate test cases leveraging real-world changes in driving conditions like rain, fog, lighting conditions, etc. DeepTest systematically explores different parts of the DNN logic by generating test inputs that maximize the number of activated neurons. DeepTest found thousands of erroneous behaviors under different realistic driving conditions (e.g., blurring, rain, fog, etc.) many of which lead to potentially fatal crashes in three top performing DNNs in the Udacity self-driving car challenge.
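DeepTest's guidance signal is neuron coverage: the fraction of neurons whose activation has exceeded a threshold over all test inputs generated so far. A small bookkeeping sketch, assuming you can pull per-layer activations out of the driving model (the layer names, threshold, and random activations below are illustrative):

```python
import numpy as np

class NeuronCoverage:
    """Track which neurons have ever fired above `threshold` across generated test inputs."""

    def __init__(self, threshold: float = 0.2):
        self.threshold = threshold
        self.fired = {}                       # layer name -> boolean array of covered neurons

    def update(self, activations: dict) -> None:
        # `activations` maps layer name -> 1-D array of that layer's outputs for one input.
        for name, act in activations.items():
            hot = np.asarray(act) > self.threshold
            self.fired[name] = hot if name not in self.fired else (self.fired[name] | hot)

    def coverage(self) -> float:
        total = sum(v.size for v in self.fired.values())
        covered = sum(v.sum() for v in self.fired.values())
        return covered / total if total else 0.0

# Usage with made-up activations for two synthetic inputs:
cov = NeuronCoverage()
cov.update({"conv1": np.random.rand(32), "fc1": np.random.rand(16)})
cov.update({"conv1": np.random.rand(32), "fc1": np.random.rand(16)})
print(f"neuron coverage: {cov.coverage():.2%}")
```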

Journal ArticleDOI
26 Oct 2018-Science
TL;DR: These chromatin accessibility profiles identify cancer- and tissue-specific DNA regulatory elements that enable classification of tumor subtypes with newly recognized prognostic importance, and identify distinct TF activities in cancer based on differences in the inferred patterns of TF-DNA interaction and gene expression.
Abstract: INTRODUCTION Cancer is one of the leading causes of death worldwide. Although the 2% of the human genome that encodes proteins has been extensively studied, much remains to be learned about the noncoding genome and gene regulation in cancer. Genes are turned on and off in the proper cell types and cell states by transcription factor (TF) proteins acting on DNA regulatory elements that are scattered over the vast noncoding genome and exert long-range influences. The Cancer Genome Atlas (TCGA) is a global consortium that aims to accelerate the understanding of the molecular basis of cancer. TCGA has systematically collected DNA mutation, methylation, RNA expression, and other comprehensive datasets from primary human cancer tissue. TCGA has served as an invaluable resource for the identification of genomic aberrations, altered transcriptional networks, and cancer subtypes. Nonetheless, the gene regulatory landscapes of these tumors have largely been inferred through indirect means. RATIONALE A hallmark of active DNA regulatory elements is chromatin accessibility. Eukaryotic genomes are compacted in chromatin, a complex of DNA and proteins, and only the active regulatory elements are accessible by the cell’s machinery such as TFs. The assay for transposase-accessible chromatin using sequencing (ATAC-seq) quantifies DNA accessibility through the use of transposase enzymes that insert sequencing adapters at these accessible chromatin sites. ATAC-seq enables the genome-wide profiling of TF binding events that orchestrate gene expression programs and give a cell its identity. RESULTS We generated high-quality ATAC-seq data in 410 tumor samples from TCGA, identifying diverse regulatory landscapes across 23 cancer types. These chromatin accessibility profiles identify cancer- and tissue-specific DNA regulatory elements that enable classification of tumor subtypes with newly recognized prognostic importance. We identify distinct TF activities in cancer based on differences in the inferred patterns of TF-DNA interaction and gene expression. Genome-wide correlation of gene expression and chromatin accessibility predicts tens of thousands of putative interactions between distal regulatory elements and gene promoters, including key oncogenes and targets in cancer immunotherapy, such as MYC , SRC , BCL2 , and PDL1 . Moreover, these regulatory interactions inform known genetic risk loci linked to cancer predisposition, nominating biochemical mechanisms and target genes for many cancer-linked genetic variants. Lastly, integration with mutation profiling by whole-genome sequencing identifies cancer-relevant noncoding mutations that are associated with altered gene expression. A single-base mutation located 12 kilobases upstream of the FGD4 gene, a regulator of the actin cytoskeleton, generates a putative de novo binding site for an NKX TF and is associated with an increase in chromatin accessibility and a concomitant increase in FGD4 gene expression. CONCLUSION The accessible genome of primary human cancers provides a wealth of information on the susceptibility, mechanisms, prognosis, and potential therapeutic strategies of diverse cancer types. Prediction of interactions between DNA regulatory elements and gene promoters sets the stage for future integrative gene regulatory network analyses. 
The discovery of hundreds of noncoding somatic mutations that exhibit allele-specific regulatory effects suggests a pervasive mechanism for cancer cells to manipulate gene expression and increase cellular fitness. These data may serve as a foundational resource for the cancer research community.
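One of the analyses summarized above links distal regulatory elements to genes by correlating a peak's chromatin accessibility with a nearby gene's expression across the 410 tumour samples. A toy version of a single such test, with synthetic vectors standing in for normalized ATAC-seq and RNA-seq values:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)

# Synthetic values across 410 tumours: one distal peak's accessibility and one gene's expression.
peak_accessibility = rng.normal(size=410)
gene_expression = 0.6 * peak_accessibility + rng.normal(scale=0.8, size=410)

r, p = pearsonr(peak_accessibility, gene_expression)
print(f"peak-gene correlation r = {r:.2f}, p = {p:.1e}")
# In the study, such tests are run genome-wide and filtered (e.g., by distance to the
# promoter and false-discovery control) to nominate putative enhancer-gene interactions.
```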

Journal ArticleDOI
TL;DR: It is found that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.
Abstract: Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.

Journal ArticleDOI
TL;DR: Compared with the JNC7 guideline, the 2017 ACC/AHA guideline results in a substantial increase in the prevalence of hypertension, a small increase in the percentage of US adults recommended for antihypertensive medication, and more intensive BP lowering for many adults taking antihypertensive medication.

Journal ArticleDOI
25 Jul 2018-Nature
TL;DR: It is shown that meningeal lymphatic vessels drain macromolecules from the CNS (cerebrospinal and interstitial fluids) into the cervical lymph nodes in mice, and that enhancing meningeal lymphatic drainage in aged mice improves brain perfusion and learning and memory performance.
Abstract: Ageing is a major risk factor for many neurological pathologies, but its mechanisms remain unclear. Unlike other tissues, the parenchyma of the central nervous system (CNS) lacks lymphatic vasculature and waste products are removed partly through a paravascular route. (Re)discovery and characterization of meningeal lymphatic vessels has prompted an assessment of their role in waste clearance from the CNS. Here we show that meningeal lymphatic vessels drain macromolecules from the CNS (cerebrospinal and interstitial fluids) into the cervical lymph nodes in mice. Impairment of meningeal lymphatic function slows paravascular influx of macromolecules into the brain and efflux of macromolecules from the interstitial fluid, and induces cognitive impairment in mice. Treatment of aged mice with vascular endothelial growth factor C enhances meningeal lymphatic drainage of macromolecules from the cerebrospinal fluid, improving brain perfusion and learning and memory performance. Disruption of meningeal lymphatic vessels in transgenic mouse models of Alzheimer’s disease promotes amyloid-β deposition in the meninges, which resembles human meningeal pathology, and aggravates parenchymal amyloid-β accumulation. Meningeal lymphatic dysfunction may be an aggravating factor in Alzheimer’s disease pathology and in age-associated cognitive decline. Thus, augmentation of meningeal lymphatic function might be a promising therapeutic target for preventing or delaying age-associated neurological diseases.

Journal ArticleDOI
TL;DR: The updated version of the EFSUMB guidelines on the application of non-hepatic contrast-enhanced ultrasound (CEUS) deals with the use of microbubble ultrasound contrast outside the liver in the many established and emerging applications.
Abstract: The updated version of the EFSUMB guidelines on the application of non-hepatic contrast-enhanced ultrasound (CEUS) deals with the use of microbubble ultrasound contrast outside the liver in the many established and emerging applications.

Journal ArticleDOI
20 Feb 2018-Immunity
TL;DR: High‐dimensional cytometry reveals that microglia, several subsets of border‐associated macrophages and dendritic cells coexist in the CNS at steady state and exhibit disease‐specific transformations in the immune microenvironment during aging and in models of Alzheimer’s disease and multiple sclerosis.

Journal ArticleDOI
TL;DR: The clinical benefit from chemohormonal therapy in prolonging OS was confirmed for patients with high-volume disease; however, for patients with low-volume disease, no OS benefit was discerned.
Abstract: Purpose Docetaxel added to androgen-deprivation therapy (ADT) significantly increases the longevity of some patients with metastatic hormone-sensitive prostate cancer. Herein, we present the outcomes of the CHAARTED (Chemohormonal Therapy Versus Androgen Ablation Randomized Trial for Extensive Disease in Prostate Cancer) trial with more mature follow-up and focus on tumor volume. Patients and Methods In this phase III study, 790 patients with metastatic hormone-sensitive prostate cancer were equally randomly assigned to receive either ADT in combination with docetaxel 75 mg/m2 for up to six cycles or ADT alone. The primary end point of the study was overall survival (OS). Additional analyses of the prospectively defined low- and high-volume disease subgroups were performed. High-volume disease was defined as presence of visceral metastases and/or ≥ four bone metastases with at least one outside of the vertebral column and pelvis. Results At a median follow-up of 53.7 months, the median OS was 57.6 months for the chemohormonal therapy arm versus 47.2 months for ADT alone (hazard ratio [HR], 0.72; 95% CI, 0.59 to 0.89; P = .0018). For patients with high-volume disease (n = 513), the median OS was 51.2 months with chemohormonal therapy versus 34.4 months with ADT alone (HR, 0.63; 95% CI, 0.50 to 0.79; P < .001). For those with low-volume disease (n = 277), no OS benefit was observed (HR, 1.04; 95% CI, 0.70 to 1.55; P = .86). Conclusion The clinical benefit from chemohormonal therapy in prolonging OS was confirmed for patients with high-volume disease; however, for patients with low-volume disease, no OS benefit was discerned.

Journal ArticleDOI
TL;DR: This work first characterizes a class of ‘learnable algorithms’ and then designs DNNs to approximate some algorithms of interest in wireless communications, demonstrating the superior ability of DNNs for approximating two considerably complex algorithms that are designed for power allocation in wireless transmit signal design, while giving orders of magnitude speedup in computational time.
Abstract: Numerical optimization has played a central role in addressing key signal processing (SP) problems. Highly effective methods have been developed for a large variety of SP applications such as communications, radar, filter design, and speech and image analytics, just to name a few. However, optimization algorithms often entail considerable complexity, which creates a serious gap between theoretical design/analysis and real-time processing. In this paper, we aim at providing a new learning-based perspective to address this challenging issue. The key idea is to treat the input and output of an SP algorithm as an unknown nonlinear mapping and use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately by a DNN of moderate size, then SP tasks can be performed effectively, since passing the input through a DNN only requires a small number of simple operations. In our paper, we first identify a class of optimization algorithms that can be accurately approximated by a fully connected DNN. Second, to demonstrate the effectiveness of the proposed approach, we apply it to approximate a popular interference management algorithm, namely, the WMMSE algorithm. Extensive experiments using both synthetically generated wireless channel data and real DSL channel data have been conducted. It is shown that, in practice, only a small network is sufficient to obtain high approximation accuracy, and DNNs can achieve orders of magnitude speedup in computational time compared to the state-of-the-art interference management algorithm.
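The approach is plain supervised regression: run the expensive optimizer offline to generate input-output pairs, then fit a fully connected network to that mapping so a single forward pass replaces the iterative solver at run time. A minimal PyTorch sketch; the "teacher" below is a stand-in function, not the WMMSE algorithm used in the paper, and the network size and training schedule are illustrative:

```python
import torch
from torch import nn

K = 10                                    # number of interfering links (illustrative)

def teacher(h: torch.Tensor) -> torch.Tensor:
    """Stand-in for an expensive optimizer mapping channel gains to power allocations."""
    return torch.softmax(h ** 2, dim=-1)  # NOT the WMMSE algorithm; just a smooth target

# Training pairs: in the paper these come from running the optimizer offline on sampled channels.
X = torch.rand(20_000, K)
Y = teacher(X)

# Fully connected approximator: once trained, a forward pass replaces the iterative solver.
net = nn.Sequential(nn.Linear(K, 200), nn.ReLU(),
                    nn.Linear(200, 200), nn.ReLU(),
                    nn.Linear(200, K), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                    # a few passes over mini-batches
    for i in range(0, len(X), 256):
        xb, yb = X[i:i + 256], Y[i:i + 256]
        opt.zero_grad()
        loss = loss_fn(net(xb), yb)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: MSE = {loss.item():.4f}")
```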

Proceedings ArticleDOI
01 Jun 2018
TL;DR: A data-augmentation approach is demonstrated that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by rule-based, feature-rich, and neural coreference systems in WinoBias without significantly affecting their performance on existing datasets.
Abstract: In this paper, we introduce a new benchmark for co-reference resolution focused on gender bias, WinoBias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing datasets.
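The data-augmentation step mentioned above trains the coreference system on a gender-swapped copy of the corpus in addition to the original, so neither reading of a WinoBias sentence is statistically favoured. A minimal token-level swap; the real preprocessing also anonymizes names, handles casing, and resolves ambiguous forms such as "her", which this toy mapping does not:

```python
# Minimal gender-swapping augmentation; illustrative word pairs only.
SWAP = {"he": "she", "she": "he", "him": "her", "her": "him",
        "his": "her", "himself": "herself", "herself": "himself"}

def gender_swap(sentence: str) -> str:
    tokens = sentence.split()
    return " ".join(SWAP.get(tok.lower(), tok) for tok in tokens)

original = "the physician hired the secretary because he was overwhelmed with clients"
print(gender_swap(original))
# -> "the physician hired the secretary because she was overwhelmed with clients"

# Training on the union of original and swapped sentences gives the coreference
# system no statistical incentive to prefer the pro-stereotypical reading.
```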

Journal ArticleDOI
TL;DR: Single-dose baloxavir was without evident safety concerns, was superior to placebo in alleviating influenza symptoms, and was superior to both oseltamivir and placebo in reducing the viral load 1 day after initiation of the trial regimen in patients with uncomplicated influenza.
Abstract: Background Baloxavir marboxil is a selective inhibitor of influenza cap-dependent endonuclease. It has shown therapeutic activity in preclinical models of influenza A and B virus infection...

Journal ArticleDOI
TL;DR: Findings from psychology and economics on subjective well-being across cultures are synthesized to identify outstanding questions, priorities for future research, and pathways to policy implementation.
Abstract: The empirical science of subjective well-being, popularly referred to as happiness or satisfaction, has grown enormously in the past decade. In this Review, we selectively highlight and summarize key researched areas that continue to develop. We describe the validity of measures and their potential biases, as well as the scientific methods used in this field. We describe some of the predictors of subjective well-being such as temperament, income and supportive social relationships. Higher subjective well-being has been associated with good health and longevity, better social relationships, work performance and creativity. At the community and societal levels, cultures differ not only in their levels of well-being but also to some extent in the types of subjective well-being they most value. Furthermore, there are both universal and unique predictors of subjective well-being in various societies. National accounts of subjective well-being to help inform policy decisions at the community and societal levels are now being considered and adopted. Finally we discuss the unknowns in the science and needed future research.

Proceedings ArticleDOI
24 May 2018
TL;DR: DeepWordBug, as presented in this paper, generates small text perturbations in a black-box setting that force a deep-learning classifier to misclassify a text input, using scoring strategies to find the most important words to modify.
Abstract: Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to a black-box attack, which is a more realistic scenario. In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text perturbations in a black-box setting that forces a deep-learning classifier to misclassify a text input. We develop novel scoring strategies to find the most important words to modify such that the deep classifier makes a wrong prediction. Simple character-level transformations are applied to the highest-ranked words in order to minimize the edit distance of the perturbation. We evaluated DeepWordBug on two real-world text datasets: Enron spam emails and IMDB movie reviews. Our experimental results indicate that DeepWordBug can reduce the classification accuracy from 99% to 40% on Enron and from 87% to 26% on IMDB. Our results strongly demonstrate that the generated adversarial sequences from a deep-learning model can similarly evade other deep models.
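The attack has two black-box stages: score each word's importance by how much the classifier's confidence drops when the word is removed, then apply small character-level edits to the top-ranked words so the total edit distance stays small. A simplified sketch, assuming `predict_proba(text)` returns class probabilities from the target model (the paper evaluates several scoring functions and transformation types; only one of each is shown here):

```python
import random

def word_scores(text: str, predict_proba, label: int):
    """Black-box importance: confidence drop when each word is deleted (one query per word)."""
    words = text.split()
    base = predict_proba(text)[label]
    scores = []
    for i in range(len(words)):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append(base - predict_proba(ablated)[label])
    return scores

def perturb_word(word: str) -> str:
    """One character-level edit (swap two adjacent characters) to keep the edit distance small."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def deepwordbug_like_attack(text: str, predict_proba, label: int, budget: int = 3) -> str:
    """Perturb the `budget` highest-scoring words; a simplified version of the paper's attack."""
    words = text.split()
    scores = word_scores(text, predict_proba, label)
    for i in sorted(range(len(words)), key=scores.__getitem__, reverse=True)[:budget]:
        words[i] = perturb_word(words[i])
    return " ".join(words)
```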

Journal ArticleDOI
TL;DR: Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and choice of tolerance limits for IMRT QA are made with focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics.
Abstract: Purpose Patient-specific IMRT QA measurements are important components of processes designed to identify discrepancies between calculated and delivered radiation doses. Discrepancy tolerance limits are neither well defined nor consistently applied across centers. The AAPM TG-218 report provides a comprehensive review aimed at improving the understanding and consistency of these processes as well as recommendations for methodologies and tolerance limits in patient-specific IMRT QA. Methods The performance of the dose difference/distance-to-agreement (DTA) and γ dose distribution comparison metrics are investigated. Measurement methods are reviewed and followed by a discussion of the pros and cons of each. Methodologies for absolute dose verification are discussed and new IMRT QA verification tools are presented. Literature on the expected or achievable agreement between measurements and calculations for different types of planning and delivery systems are reviewed and analyzed. Tests of vendor implementations of the γ verification algorithm employing benchmark cases are presented. Results Operational shortcomings that can reduce the γ tool accuracy and subsequent effectiveness for IMRT QA are described. Practical considerations including spatial resolution, normalization, dose threshold, and data interpretation are discussed. Published data on IMRT QA and the clinical experience of the group members are used to develop guidelines and recommendations on tolerance and action limits for IMRT QA. Steps to check failed IMRT QA plans are outlined. Conclusion Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and choice of tolerance limits for IMRT QA are made with focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics. The recommendations are intended to improve the IMRT QA process and establish consistent, and comparable IMRT QA criteria among institutions.
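The γ metric referenced throughout combines a dose-difference criterion and a distance-to-agreement (DTA) criterion; an evaluated point passes when the minimum combined value over the reference distribution is at most 1 (commonly with criteria such as 3%/2 mm). A simplified 1-D, globally normalized version for illustration; clinical tools interpolate finely and handle 2-D/3-D dose distributions:

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dose_crit=0.03, dta_crit=2.0):
    """Per-point 1-D gamma: min over reference points of sqrt((Δd/DTA)^2 + (ΔD/ΔD_tol)^2).

    dose_crit is a fraction of the maximum reference dose (global normalization);
    dta_crit is in mm.
    """
    dose_tol = dose_crit * ref_dose.max()
    gammas = []
    for x, d in zip(eval_pos, eval_dose):
        dist = (ref_pos - x) / dta_crit
        diff = (ref_dose - d) / dose_tol
        gammas.append(np.sqrt(dist ** 2 + diff ** 2).min())
    return np.array(gammas)

# Toy profiles: a calculated reference and a slightly shifted, slightly hotter measurement.
x = np.linspace(-30, 30, 121)                       # positions in mm
ref = np.exp(-x ** 2 / (2 * 10 ** 2))               # normalized dose profile
meas = 1.02 * np.exp(-(x - 1.0) ** 2 / (2 * 10 ** 2))

g = gamma_1d(x, ref, x, meas)
print(f"gamma passing rate (gamma <= 1): {(g <= 1).mean():.1%}")
```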

Journal ArticleDOI
TL;DR: It is demonstrated that meningeal lymphatics drain CSF-derived macromolecules and immune cells and play a key role in regulating neuroinflammation and may represent a new therapeutic target for multiple sclerosis.
Abstract: Neuroinflammatory diseases, such as multiple sclerosis, are characterized by invasion of the brain by autoreactive T cells. The mechanism for how T cells acquire their encephalitogenic phenotype and trigger disease remains, however, unclear. The existence of lymphatic vessels in the meninges indicates a relevant link between the CNS and peripheral immune system, perhaps affecting autoimmunity. Here we demonstrate that meningeal lymphatics fulfill two critical criteria: they assist in the drainage of cerebrospinal fluid components and enable immune cells to enter draining lymph nodes in a CCR7-dependent manner. Unlike other tissues, meningeal lymphatic endothelial cells do not undergo expansion during inflammation, and they express a unique transcriptional signature. Notably, the ablation of meningeal lymphatics diminishes pathology and reduces the inflammatory response of brain-reactive T cells during an animal model of multiple sclerosis. Our findings demonstrate that meningeal lymphatics govern inflammatory processes and immune surveillance of the CNS and pose a valuable target for therapeutic intervention.

Journal ArticleDOI
24 Dec 2018
TL;DR: This paper conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings, and found that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online.
Abstract: We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
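The heterogeneity quantities reported above (Cochran's Q and tau) are standard random-effects meta-analysis statistics computed across the samples for each finding. A compact DerSimonian-Laird computation on synthetic per-sample effect sizes and standard errors (not the study's data):

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Cochran's Q and the DerSimonian-Laird between-sample variance tau^2."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    w = 1.0 / ses ** 2                       # fixed-effect (inverse-variance) weights
    mean_fe = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - mean_fe) ** 2)
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)
    return Q, tau2

rng = np.random.default_rng(2)
true_effect, tau = 0.15, 0.05                # small true effect, slight heterogeneity
ses = rng.uniform(0.05, 0.12, size=60)       # 60 hypothetical samples
effects = rng.normal(true_effect, np.sqrt(tau ** 2 + ses ** 2))

Q, tau2 = dersimonian_laird(effects, ses)
print(f"Q = {Q:.1f} (df = {len(effects) - 1}), tau = {np.sqrt(tau2):.3f}")
```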