
Showing papers by "Columbia University" published in 2018


Journal ArticleDOI
Gregory A. Roth1, Gregory A. Roth2, Degu Abate3, Kalkidan Hassen Abate4 +1025 more · Institutions (333)
TL;DR: Non-communicable diseases comprised the greatest fraction of deaths, contributing to 73·4% (95% uncertainty interval [UI] 72·5–74·1) of total deaths in 2017, while communicable, maternal, neonatal, and nutritional causes accounted for 18·6% (17·9–19·6), and injuries 8·0% (7·7–8·2).

5,211 citations



Journal ArticleDOI
Lorenzo Galluzzi1, Lorenzo Galluzzi2, Ilio Vitale3, Stuart A. Aaronson4 +183 more · Institutions (111)
TL;DR: The Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives.
Abstract: Over the past decade, the Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives. Since the field continues to expand and novel mechanisms that orchestrate multiple cell death pathways are unveiled, we propose an updated classification of cell death subroutines focusing on mechanistic and essential (as opposed to correlative and dispensable) aspects of the process. As we provide molecularly oriented definitions of terms including intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic cell death, NETotic cell death, lysosome-dependent cell death, autophagy-dependent cell death, immunogenic cell death, cellular senescence, and mitotic catastrophe, we discuss the utility of neologisms that refer to highly specialized instances of these processes. The mission of the NCCD is to provide a widely accepted nomenclature on cell death in support of the continued development of the field.

3,301 citations


Journal ArticleDOI
17 Apr 2018-Immunity
TL;DR: An extensive immunogenomic analysis of more than 10,000 tumors comprising 33 diverse cancer types by utilizing data compiled by TCGA identifies six immune subtypes that encompass multiple cancer types and are hypothesized to define immune response patterns impacting prognosis.

3,246 citations


Journal ArticleDOI
Jeffrey D. Stanaway1, Ashkan Afshin1, Emmanuela Gakidou1, Stephen S Lim1 +1050 more · Institutions (346)
TL;DR: This study estimated levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs) by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017 and explored the relationship between development and risk exposure.

2,910 citations


Journal ArticleDOI
TL;DR: Using a deep learning approach to track user-defined body parts during various behaviors across multiple species, the authors show that their toolbox, called DeepLabCut, can achieve human accuracy with only a few hundred frames of training data.
Abstract: Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.

2,303 citations
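The recipe described above, transfer learning from a pretrained image backbone to per-bodypart heatmaps, can be sketched in a few lines. Below is a minimal PyTorch illustration, assuming torchvision's pretrained ResNet-50; the deconvolution head, layer sizes, and training target are simplified stand-ins rather than DeepLabCut's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class PoseNet(nn.Module):
    """Pretrained backbone + deconvolution head -> one heatmap per body part."""
    def __init__(self, num_bodyparts: int):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep everything up to the last residual stage (drop avgpool/fc).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # Upsample the coarse (stride-32) features back toward image resolution.
        self.head = nn.Sequential(
            nn.ConvTranspose2d(2048, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, num_bodyparts, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):                      # x: (B, 3, H, W)
        return self.head(self.features(x))     # (B, K, H/8, W/8) heatmaps

model = PoseNet(num_bodyparts=4)
heatmaps = model(torch.randn(1, 3, 256, 256))  # -> torch.Size([1, 4, 32, 32])
# Training would regress these maps onto Gaussian targets centered on labels.
loss = nn.functional.mse_loss(heatmaps, torch.zeros_like(heatmaps))
```

Because the backbone weights are pretrained, only a small head must be adapted, which is why a few hundred labeled frames can suffice.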


Journal ArticleDOI
Naomi R. Wray1, Stephan Ripke2, Stephan Ripke3, Stephan Ripke4 +259 more · Institutions (79)
TL;DR: A genome-wide association meta-analysis of individuals with clinically assessed or self-reported depression identifies 44 independent and significant loci and finds important relationships of genetic risk for major depression with educational attainment, body mass, and schizophrenia.
Abstract: Major depressive disorder (MDD) is a common illness accompanied by considerable morbidity, mortality, costs, and heightened risk of suicide. We conducted a genome-wide association meta-analysis based in 135,458 cases and 344,901 controls and identified 44 independent and significant loci. The genetic findings were associated with clinical features of major depression and implicated brain regions exhibiting anatomical differences in cases. Targets of antidepressant medications and genes involved in gene splicing were enriched for smaller association signal. We found important relationships of genetic risk for major depression with educational attainment, body mass, and schizophrenia: lower educational attainment and higher body mass were putatively causal, whereas major depression and schizophrenia reflected a partly shared biological etiology. All humans carry lesser or greater numbers of genetic risk factors for major depression. These findings help refine the basis of major depression and imply that a continuous measure of risk underlies the clinical phenotype.

1,898 citations
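The statistical workhorse behind a GWAS meta-analysis of this size is inverse-variance weighting of per-cohort effect estimates. A minimal sketch for a single variant, with made-up numbers rather than consortium data:

```python
import numpy as np
from scipy import stats

beta = np.array([0.042, 0.055, 0.038])    # per-cohort log-odds-ratio estimates
se   = np.array([0.011, 0.019, 0.015])    # per-cohort standard errors

w = 1.0 / se**2                           # inverse-variance weights
beta_meta = np.sum(w * beta) / np.sum(w)  # pooled fixed-effect estimate
se_meta = np.sqrt(1.0 / np.sum(w))
z = beta_meta / se_meta
p = 2 * stats.norm.sf(abs(z))             # two-sided p-value
print(f"beta={beta_meta:.4f}  se={se_meta:.4f}  p={p:.2e}")
```

A locus is declared genome-wide significant when such a pooled p-value passes the conventional 5 x 10^-8 threshold.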


Journal ArticleDOI
Elena Aprile1, Jelle Aalbers2, F. Agostini3, M. Alfonsi4, L. Althueser5, F. D. Amaro6, M. Anthony1, F. Arneodo7, Laura Baudis8, Boris Bauermeister9, M. L. Benabderrahmane7, T. Berger10, P. A. Breur2, April S. Brown2, Ethan Brown10, S. Bruenner11, Giacomo Bruno7, Ran Budnik12, C. Capelli8, João Cardoso6, D. Cichon11, D. Coderre13, Auke-Pieter Colijn2, Jan Conrad9, Jean-Pierre Cussonneau14, M. P. Decowski2, P. de Perio1, P. Di Gangi3, A. Di Giovanni7, Sara Diglio14, A. Elykov13, G. Eurin11, J. Fei15, A. D. Ferella9, A. Fieguth5, W. Fulgione, A. Gallo Rosso, Michelle Galloway8, F. Gao1, M. Garbini3, C. Geis4, L. Grandi16, Z. Greene1, H. Qiu12, C. Hasterok11, E. Hogenbirk2, J. Howlett1, R. Itay12, F. Joerg11, B. Kaminsky13, Shingo Kazama8, A. Kish8, G. Koltman12, H. Landsman12, R. F. Lang17, L. Levinson12, Qing Lin1, Sebastian Lindemann13, Manfred Lindner11, F. Lombardi15, J. A. M. Lopes6, J. Mahlstedt9, A. Manfredini12, T. Marrodán Undagoitia11, Julien Masbou14, D. Masson17, M. Messina7, K. Micheneau14, Kate C. Miller16, A. Molinario, K. Morå9, M. Murra5, J. Naganoma18, Kaixuan Ni15, Uwe Oberlack4, Bart Pelssers9, F. Piastra8, J. Pienaar16, V. Pizzella11, Guillaume Plante1, R. Podviianiuk, N. Priel12, D. Ramírez García13, L. Rauch11, S. Reichard8, C. Reuter17, B. Riedel16, A. Rizzo1, A. Rocchetti13, N. Rupp11, J.M.F. dos Santos6, Gabriella Sartorelli3, M. Scheibelhut4, S. Schindler4, J. Schreiner11, D. Schulte5, Marc Schumann13, L. Scotto Lavina19, M. Selvi3, P. Shagin18, E. Shockley16, Manuel Gameiro da Silva6, H. Simgen11, Dominique Thers14, F. Toschi13, F. Toschi3, Gian Carlo Trinchero, C. Tunnell16, N. Upole16, M. Vargas5, O. Wack11, Hongwei Wang20, Zirui Wang, Yuehuan Wei15, Ch. Weinheimer5, C. Wittweg5, J. Wulf8, J. Ye15, Yanxi Zhang1, T. Zhu1 
TL;DR: In this article, a search for weakly interacting massive particles (WIMPs) using 278.8 days of data collected with the XENON1T experiment at LNGS is reported.
Abstract: We report on a search for weakly interacting massive particles (WIMPs) using 278.8 days of data collected with the XENON1T experiment at LNGS. XENON1T utilizes a liquid xenon time projection chamber with a fiducial mass of (1.30±0.01) ton, resulting in a 1.0 ton yr exposure. The energy region of interest, [1.4, 10.6] keVee ([4.9, 40.9] keVnr), exhibits an ultralow electron recoil background rate of [82 +5/−3 (syst) ± 3 (stat)] events/(ton yr keVee). No significant excess over background is found, and a profile likelihood analysis parametrized in spatial and energy dimensions excludes new parameter space for the WIMP-nucleon spin-independent elastic scatter cross section for WIMP masses above 6 GeV/c^2, with a minimum of 4.1×10^−47 cm^2 at 30 GeV/c^2 and a 90% confidence level.

1,808 citations
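The paper's limit comes from a full profile-likelihood analysis in space and energy; as a toy stand-in, the sketch below computes a 90% CL upper limit on a signal rate for a simple Poisson counting experiment with known expected background. All numbers are illustrative.

```python
from scipy import stats
from scipy.optimize import brentq

n_obs = 3      # observed events
b = 2.5        # expected background events

def excess_cl(s):
    # Classical upper limit: find s where P(N <= n_obs | s + b) drops to 10%.
    return stats.poisson.cdf(n_obs, s + b) - 0.10

s_up = brentq(excess_cl, 0.0, 50.0)
print(f"90% CL upper limit on signal: {s_up:.2f} events")  # ~4.2 events
```

Dividing such an event-count limit by exposure and detection efficiency is what turns it into a cross-section limit.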


Journal ArticleDOI
TL;DR: Patisiran improved multiple clinical manifestations of hereditary transthyretin amyloidosis with polyneuropathy and showed an effect on gait speed and modified BMI.
Abstract: BACKGROUND: Patisiran, an investigational RNA interference therapeutic agent, specifically inhibits hepatic synthesis of transthyretin.METHODS: In this phase 3 trial, we randomly assigned patients ...

1,671 citations


Journal ArticleDOI
TL;DR: This article establishes a general understanding of AI methods, particularly those pertaining to image-based tasks, and explores how these methods could impact multiple facets of radiology, with a general focus on applications in oncology.
Abstract: Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.

1,599 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Fausto Acernese3 +1235 more · Institutions (132)
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function p(ρ) of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as R1 = 10.8 (+2.0/−1.7) km for the heavier star and R2 = 10.7 (+2.1/−1.5) km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙ as required from electromagnetic observations and employ the equation-of-state parametrization, we further constrain R1 = 11.9 (+1.4/−1.4) km and R2 = 11.9 (+1.4/−1.4) km at the 90% credible level. Finally, we obtain constraints on p(ρ) at supranuclear densities, with pressure at twice nuclear saturation density measured at 3.5 (+2.7/−1.7) × 10^34 dyn cm^−2 at the 90% level.
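The paper constrains p(ρ) through an efficient spectral parametrization; a cruder but common alternative is a polytrope, sketched below anchored at nuclear saturation density. The anchor pressure and adiabatic index are illustrative assumptions, chosen only to be roughly consistent with the paper's quoted pressure at twice saturation density.

```python
import numpy as np

RHO_NUC = 2.8e14  # nuclear saturation density [g/cm^3]

def pressure(rho, p_nuc=3.5e33, gamma=3.0):
    """Toy polytropic p(rho) [dyn/cm^2], anchored so that p(RHO_NUC) = p_nuc."""
    K = p_nuc / RHO_NUC**gamma   # fix the constant from the anchor point
    return K * rho**gamma

print(f"p(2*rho_nuc) ~ {pressure(2 * RHO_NUC):.1e} dyn/cm^2")  # ~2.8e34
```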

Journal ArticleDOI
Daniel J. Benjamin1, James O. Berger2, Magnus Johannesson3, Magnus Johannesson1, Brian A. Nosek4, Brian A. Nosek5, Eric-Jan Wagenmakers6, Richard A. Berk7, Kenneth A. Bollen8, Björn Brembs9, Lawrence D. Brown7, Colin F. Camerer10, David Cesarini11, David Cesarini12, Christopher D. Chambers13, Merlise A. Clyde2, Thomas D. Cook14, Thomas D. Cook15, Paul De Boeck16, Zoltan Dienes17, Anna Dreber3, Kenny Easwaran18, Charles Efferson19, Ernst Fehr20, Fiona Fidler21, Andy P. Field17, Malcolm R. Forster22, Edward I. George7, Richard Gonzalez23, Steven N. Goodman24, Edwin J. Green25, Donald P. Green26, Anthony G. Greenwald27, Jarrod D. Hadfield28, Larry V. Hedges15, Leonhard Held20, Teck-Hua Ho29, Herbert Hoijtink30, Daniel J. Hruschka31, Kosuke Imai32, Guido W. Imbens24, John P. A. Ioannidis24, Minjeong Jeon33, James Holland Jones34, Michael Kirchler35, David Laibson36, John A. List37, Roderick J. A. Little23, Arthur Lupia23, Edouard Machery38, Scott E. Maxwell39, Michael A. McCarthy21, Don A. Moore40, Stephen L. Morgan41, Marcus R. Munafò42, Shinichi Nakagawa43, Brendan Nyhan44, Timothy H. Parker45, Luis R. Pericchi46, Marco Perugini47, Jeffrey N. Rouder48, Judith Rousseau49, Victoria Savalei50, Felix D. Schönbrodt51, Thomas Sellke52, Betsy Sinclair53, Dustin Tingley36, Trisha Van Zandt16, Simine Vazire54, Duncan J. Watts55, Christopher Winship36, Robert L. Wolpert2, Yu Xie32, Cristobal Young24, Jonathan Zinman44, Valen E. Johnson1, Valen E. Johnson18 
University of Southern California1, Duke University2, Stockholm School of Economics3, Center for Open Science4, University of Virginia5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, Research Institute of Industrial Economics11, New York University12, Cardiff University13, Mathematica Policy Research14, Northwestern University15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false positive findings.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
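The case for the stricter threshold rests on the false positive risk: the probability that a nominally significant result is actually null, which depends on the threshold, the study's power, and the prior odds that a tested hypothesis is true. A quick calculation with illustrative values in the range the paper discusses:

```python
def false_positive_risk(alpha, power, prior_odds_true):
    """P(null is true | p < alpha), given prior odds that the effect is real."""
    return alpha / (alpha + power * prior_odds_true)

for alpha in (0.05, 0.005):
    fpr = false_positive_risk(alpha, power=0.8, prior_odds_true=1 / 10)
    print(f"alpha={alpha}: false positive risk = {fpr:.0%}")
# alpha=0.05  -> ~38% of "discoveries" are false
# alpha=0.005 -> ~6%
```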

Journal ArticleDOI
22 Jun 2018-Science
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.

Journal ArticleDOI
TL;DR: In patients with transthyretin amyloid cardiomyopathy, tafamidis was associated with reductions in all‐cause mortality and cardiovascular‐related hospitalizations and reduced the decline in functional capacity and quality of life as compared with placebo.
Abstract: Background Transthyretin amyloid cardiomyopathy is caused by the deposition of transthyretin amyloid fibrils in the myocardium. The deposition occurs when wild-type or variant transthyretin becomes unstable and misfolds. Tafamidis binds to transthyretin, preventing tetramer dissociation and amyloidogenesis. Methods In a multicenter, international, double-blind, placebo-controlled, phase 3 trial, we randomly assigned 441 patients with transthyretin amyloid cardiomyopathy in a 2:1:2 ratio to receive 80 mg of tafamidis, 20 mg of tafamidis, or placebo for 30 months. In the primary analysis, we hierarchically assessed all-cause mortality, followed by frequency of cardiovascular-related hospitalizations according to the Finkelstein–Schoenfeld method. Key secondary end points were the change from baseline to month 30 for the 6-minute walk test and the score on the Kansas City Cardiomyopathy Questionnaire–Overall Summary (KCCQ-OS), in which higher scores indicate better health status. Results In the prim...
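The Finkelstein–Schoenfeld method used for the primary analysis compares every patient with every other patient hierarchically: first on all-cause mortality, and only when mortality ties, on cardiovascular hospitalizations. A minimal sketch with hypothetical records (the real method also handles censoring and unequal follow-up):

```python
def compare(a, b):
    """+1 if patient a fares better than patient b, -1 if worse, 0 if tied."""
    # Level 1 of the hierarchy: survival beats death.
    if a["died"] != b["died"]:
        return 1 if not a["died"] else -1
    # Level 2: fewer cardiovascular hospitalizations is better.
    if a["cv_hosp"] != b["cv_hosp"]:
        return 1 if a["cv_hosp"] < b["cv_hosp"] else -1
    return 0

patients = [
    {"arm": "tafamidis", "died": False, "cv_hosp": 1},
    {"arm": "tafamidis", "died": False, "cv_hosp": 0},
    {"arm": "placebo",   "died": True,  "cv_hosp": 2},
    {"arm": "placebo",   "died": False, "cv_hosp": 3},
]

net_wins = sum(
    compare(a, b)
    for a in patients if a["arm"] == "tafamidis"
    for b in patients if b["arm"] == "placebo"
)
print("net pairwise wins for tafamidis:", net_wins)  # positive favors tafamidis
```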

Journal ArticleDOI
TL;DR: A new periodontitis classification scheme has been adopted, in which forms of the disease previously recognized as "chronic" or "aggressive" are now grouped under a single category ("periodontitis") and are further characterized based on a multi-dimensional staging and grading system as mentioned in this paper.
Abstract: A new periodontitis classification scheme has been adopted, in which forms of the disease previously recognized as "chronic" or "aggressive" are now grouped under a single category ("periodontitis") and are further characterized based on a multi-dimensional staging and grading system. Staging is largely dependent upon the severity of disease at presentation as well as on the complexity of disease management, while grading provides supplemental information about biological features of the disease including a history-based analysis of the rate of periodontitis progression; assessment of the risk for further progression; analysis of possible poor outcomes of treatment; and assessment of the risk that the disease or its treatment may negatively affect the general health of the patient. Necrotizing periodontal diseases, whose characteristic clinical phenotype includes typical features (papilla necrosis, bleeding, and pain) and which are associated with host immune response impairments, remain a distinct periodontitis category. Endodontic-periodontal lesions, defined by a pathological communication between the pulpal and periodontal tissues at a given tooth, occur in either an acute or a chronic form, and are classified according to signs and symptoms that have direct impact on their prognosis and treatment. Periodontal abscesses are defined as acute lesions characterized by localized accumulation of pus within the gingival wall of the periodontal pocket/sulcus and rapid tissue destruction, and are associated with risk for systemic dissemination.

Journal ArticleDOI
01 Apr 2018
TL;DR: A new epigenetic biomarker of aging, DNAm PhenoAge, is developed that strongly outperforms previous measures in regards to predictions for a variety of aging outcomes, including all-cause mortality, cancers, healthspan, physical functioning, and Alzheimer's disease.
Abstract: Identifying reliable biomarkers of aging is a major goal in geroscience. While the first generation of epigenetic biomarkers of aging were developed using chronological age as a surrogate for biological age, we hypothesized that incorporation of composite clinical measures of phenotypic age that capture differences in lifespan and healthspan may identify novel CpGs and facilitate the development of a more powerful epigenetic biomarker of aging. Using an innovative two-step process, we develop a new epigenetic biomarker of aging, DNAm PhenoAge, that strongly outperforms previous measures in regards to predictions for a variety of aging outcomes, including all-cause mortality, cancers, healthspan, physical functioning, and Alzheimer's disease. While this biomarker was developed using data from whole blood, it correlates strongly with age in every tissue and cell tested. Based on an in-depth transcriptional analysis in sorted cells, we find that increased epigenetic age, relative to chronological age, is associated with increased activation of pro-inflammatory and interferon pathways, and decreased activation of transcriptional/translational machinery, DNA damage response, and mitochondrial signatures. Overall, this single epigenetic biomarker of aging is able to capture risks for an array of diverse outcomes across multiple tissues and cells, and provide insight into important pathways in aging.
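The second step of the two-step process described here amounts to an elastic-net regression of the composite phenotypic-age measure on CpG methylation levels, yielding a weighted-CpG clock. A minimal sketch on simulated data (in the paper this was fit on whole-blood methylation arrays):

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_samples, n_cpgs = 200, 1000
X = rng.uniform(0, 1, size=(n_samples, n_cpgs))   # CpG beta values in [0, 1]
# Simulated phenotypic age driven by a couple of CpGs plus noise.
pheno_age = 50 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(0, 3, n_samples)

clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, pheno_age)
dnam_phenoage = clock.predict(X)                  # the "epigenetic age" estimate
print("CpGs with nonzero weight:", int(np.sum(clock.coef_ != 0)))
```

The gap between a sample's predicted and chronological age is the age-acceleration quantity the paper links to inflammatory pathways and mortality risk.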

Journal ArticleDOI
TL;DR: This work introduces EEGNet, a compact convolutional neural network for EEG-based BCIs, and introduces the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI.
Abstract: Objective Brain-computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional neural networks (CNNs), which have been used in computer vision and speech recognition to perform automatic feature extraction and classification, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible. Approach In this work we introduce EEGNet, a compact convolutional neural network for EEG-based BCIs. We introduce the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI. We compare EEGNet, both for within-subject and cross-subject classification, to current state-of-the-art approaches across four BCI paradigms: P300 visual-evoked potentials, error-related negativity responses (ERN), movement-related cortical potentials (MRCP), and sensory motor rhythms (SMR). Main results We show that EEGNet generalizes across paradigms better than, and achieves comparably high performance to, the reference algorithms when only limited training data is available across all tested paradigms. In addition, we demonstrate three different approaches to visualize the contents of a trained EEGNet model to enable interpretation of the learned features. Significance Our results suggest that EEGNet is robust enough to learn a wide variety of interpretable features over a range of BCI tasks. Our models can be found at: https://github.com/vlawhern/arl-eegmodels.
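The depthwise and separable convolutions EEGNet is built from are standard layers; a minimal PyTorch sketch of the pattern is below (the paper's reference implementation is in Keras/TensorFlow, and the kernel sizes here are illustrative). Input is shaped (batch, 1, EEG channels, time samples).

```python
import torch
import torch.nn as nn

C, T, F1, D = 64, 128, 8, 2   # EEG channels, samples, temporal filters, depth mult.

block = nn.Sequential(
    nn.Conv2d(1, F1, kernel_size=(1, 33), padding=(0, 16), bias=False),  # temporal
    # Depthwise spatial filter: one set of channel weights per temporal filter.
    nn.Conv2d(F1, F1 * D, kernel_size=(C, 1), groups=F1, bias=False),
    nn.BatchNorm2d(F1 * D),
    nn.ELU(),
    # Separable conv = depthwise temporal conv followed by a 1x1 pointwise conv.
    nn.Conv2d(F1 * D, F1 * D, kernel_size=(1, 17), padding=(0, 8),
              groups=F1 * D, bias=False),
    nn.Conv2d(F1 * D, F1 * D, kernel_size=1, bias=False),
)

x = torch.randn(4, 1, C, T)
print(block(x).shape)  # torch.Size([4, 16, 1, 128])
```

Grouped convolutions are what keep the parameter count small enough for the limited-data BCI setting the paper targets.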

Journal ArticleDOI
Mary F. Feitosa1, Aldi T. Kraja1, Daniel I. Chasman2, Yun J. Sung1 +296 more · Institutions (86)
18 Jun 2018-PLOS ONE
TL;DR: This study provides insights into the role of alcohol consumption in the genetic architecture of hypertension through a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions.
Abstract: Heavy alcohol consumption is an established risk factor for hypertension; the mechanism by which alcohol consumption impacts blood pressure (BP) regulation remains unknown. We hypothesized that a genome-wide association study accounting for gene-alcohol consumption interaction for BP might identify additional BP loci and contribute to the understanding of alcohol-related BP regulation. We conducted a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions. In Stage 1, genome-wide discovery meta-analyses in ≈131K individuals across several ancestry groups yielded 3,514 SNVs (245 loci) with suggestive evidence of association (P < 1.0 x 10^-5). In Stage 2, these SNVs were tested for independent external replication in ≈440K individuals across multiple ancestries. We identified and replicated (at Bonferroni correction threshold) five novel BP loci (380 SNVs in 21 genes) and 49 previously reported BP loci (2,159 SNVs in 109 genes) in European ancestry, and in multi-ancestry meta-analyses (P < 5.0 x 10^-8). For African ancestry samples, we detected 18 potentially novel BP loci (P < 5.0 x 10^-8) in Stage 1 that warrant further replication. Additionally, correlated meta-analysis identified eight novel BP loci (11 genes). Several genes in these loci (e.g., PINX1, GATA4, BLK, FTO and GABBR2) have been previously reported to be associated with alcohol consumption. These findings provide insights into the role of alcohol consumption in the genetic architecture of hypertension.
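The joint test at the heart of this design evaluates the SNV main effect and the SNV×alcohol interaction together (2 degrees of freedom). A minimal sketch with simulated data and a Wald test in statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
snp = rng.binomial(2, 0.3, n)       # genotype dosage 0/1/2
alc = rng.binomial(1, 0.4, n)       # drinker indicator
bp = 120 + 0.5 * snp + 2.0 * alc + 0.8 * snp * alc + rng.normal(0, 10, n)

X = sm.add_constant(np.column_stack([snp, alc, snp * alc]))
fit = sm.OLS(bp, X).fit()

# Jointly test that the SNP and SNP*alcohol coefficients are both zero.
R = np.array([[0, 1, 0, 0],    # SNP main effect
              [0, 0, 0, 1]])   # SNP x alcohol interaction
print(fit.wald_test(R, scalar=True))
```

This captures variants whose effect on BP exists only, or mostly, in drinkers, which a main-effect-only scan would miss.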

Journal ArticleDOI
23 Aug 2018-Cell
TL;DR: A preprocessing pipeline, SEQC, and a Bayesian clustering and normalization method, Biscuit, are developed to address computational challenges inherent to single-cell data; the resulting analyses support a model of continuous activation in T cells and do not comport with the macrophage polarization model in cancer.

Posted ContentDOI
Spyridon Bakas1, Mauricio Reyes, Andras Jakab2, Stefan Bauer3 +435 more · Institutions (111)
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
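BraTS entries are scored chiefly by overlap with expert segmentations; the canonical metric is the Dice coefficient over binary tumor-subregion masks:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True   # toy prediction
b = np.zeros((4, 4), bool); b[1:4, 1:4] = True   # toy ground truth
print(f"Dice = {dice(a, b):.3f}")                # 2*4/(4+9) ≈ 0.615
```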

Book
28 Aug 2018
TL;DR: The Japanese were the most alien enemy the United States had ever fought in an all-out struggle as discussed by the authors, and it was necessary to take into account such exceedingly different habits of acting and thinking.
Abstract: The Japanese were the most alien enemy the United States had ever fought in an all-out struggle. In no other war with a major foe had it been necessary to take into account such exceedingly different habits of acting and thinking. Like Czarist Russia before us in 1905, we were fighting a nation fully armed and trained which did not belong to the Western cultural tradition. Conventions of war which Western nations had come to accept as facts of human nature obviously did not exist for the Japanese. It made the war in the Pacific more than a series of landings on island beaches, more than an unsurpassed problem of logistics. It made it a major problem in the nature of the enemy. We had to understand their behavior in order to cope with it.

Journal ArticleDOI
25 May 2018-Science
TL;DR: Research prospects for more sustainable routes to nitrogen commodity chemicals are reviewed, considering developments in enzymatic, homogeneous, and heterogeneous catalysis, as well as electrochemical, photochemical, and plasma-based approaches.
Abstract: BACKGROUND The invention of the Haber-Bosch (H-B) process in the early 1900s to produce ammonia industrially from nitrogen and hydrogen revolutionized the manufacture of fertilizer and led to fundamental changes in the way food is produced. Its impact is underscored by the fact that about 50% of the nitrogen atoms in humans today originate from this single industrial process. In the century after the H-B process was invented, the chemistry of carbon moved to center stage, resulting in remarkable discoveries and a vast array of products including plastics and pharmaceuticals. In contrast, little has changed in industrial nitrogen chemistry. This scenario reflects both the inherent efficiency of the H-B process and the particular challenge of breaking the strong dinitrogen bond. Nonetheless, the reliance of the H-B process on fossil fuels and its associated high CO2 emissions have spurred recent interest in finding more sustainable and environmentally benign alternatives. Nitrogen in its more oxidized forms is also industrially, biologically, and environmentally important, and synergies in new combinations of oxidative and reductive transformations across the nitrogen cycle could lead to improved efficiencies.

ADVANCES Major effort has been devoted to developing alternative and environmentally friendly processes that would allow NH3 production at distributed sources under more benign conditions, rather than through the large-scale centralized H-B process. Hydrocarbons (particularly methane) and water are the only two sources of hydrogen atoms that can sustain long-term, large-scale NH3 production. The use of water as the hydrogen source for NH3 production requires substantially more energy than using methane, but it is also more environmentally benign, does not contribute to the accumulation of greenhouse gases, and does not compete for valuable and limited hydrocarbon resources. Microbes living in all major ecosystems are able to reduce N2 to NH3 by using the enzyme nitrogenase. A deeper understanding of this enzyme could lead to more efficient catalysts for nitrogen reduction under ambient conditions. Model molecular catalysts have been designed that mimic some of the functions of the active site of nitrogenase. Some modest success has also been achieved in designing electrocatalysts for dinitrogen reduction. Electrochemistry avoids the expense and environmental damage of steam reforming of methane (which accounts for most of the cost of the H-B process), and it may provide a means for distributed production of ammonia. On the oxidative side, nitric acid is the principal commodity chemical containing oxidized nitrogen. Nearly all nitric acid is manufactured by oxidation of NH3 through the Ostwald process, but a more direct reaction of N2 with O2 might be practically feasible through further development of nonthermal plasma technology. Heterogeneous NH3 oxidation with O2 is at the heart of the Ostwald process and is practiced in a variety of environmental protection applications as well. Precious metals remain the workhorse catalysts, and opportunities therefore exist to develop lower-cost materials with equivalent or better activity and selectivity. Nitrogen oxides are also environmentally hazardous pollutants generated by industrial and transportation activities, and extensive research has gone into developing and applying reduction catalysts. Three-way catalytic converters are operating on hundreds of millions of vehicles worldwide. However, increasingly stringent emissions regulations, coupled with the low exhaust temperatures of high-efficiency engines, present challenges for future combustion emissions control. Bacterial denitrification is the natural analog of this chemistry and another source of study and inspiration for catalyst design.

OUTLOOK Demands for greater energy efficiency, smaller-scale and more flexible processes, and environmental protection provide growing impetus for expanding the scope of nitrogen chemistry. Nitrogenase, as well as nitrifying and denitrifying enzymes, will eventually be understood in sufficient detail that robust molecular catalytic mimics will emerge. Electrochemical and photochemical methods also demand more study. Other intriguing areas of research that have provided tantalizing results include chemical looping and plasma-driven processes. The grand challenge in the field of nitrogen chemistry is the development of catalysts and processes that provide simple, low-energy routes to the manipulation of the redox states of nitrogen.

Journal ArticleDOI
26 Jul 2018-Cell
TL;DR: MAGIC as mentioned in this paper is a Markov affinity-based graph imputation of cells that shares information across similar cells, via data diffusion, to denoise the cell count matrix and fill in missing transcripts.
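A minimal sketch of the data-diffusion idea behind MAGIC: build a k-nearest-neighbor affinity graph over cells, row-normalize it into a Markov matrix, raise it to a small power t, and smooth the count matrix through it. The data here are simulated, and the real method additionally uses adaptive kernels and a PCA step.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(300, 50)).astype(float)   # cells x genes

# Gaussian affinities between each cell and its k nearest neighbors.
dist = kneighbors_graph(counts, n_neighbors=15, mode="distance").toarray()
sigma = np.median(dist[dist > 0])
A = np.where(dist > 0, np.exp(-(dist / sigma) ** 2), 0.0)
A = np.maximum(A, A.T)                        # symmetrize the graph
M = A / A.sum(axis=1, keepdims=True)          # row-stochastic Markov matrix

t = 3                                         # diffusion time
imputed = np.linalg.matrix_power(M, t) @ counts
```

Each cell's expression is replaced by a weighted average over its graph neighborhood, filling in dropouts at the cost of some smoothing.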

Journal ArticleDOI
TL;DR: In this article, an updated physical model to simulate the formation and evolution of galaxies in cosmological, large-scale gravity+magnetohydrodynamical simulations with the moving mesh code AREPO is introduced.
Abstract: We introduce an updated physical model to simulate the formation and evolution of galaxies in cosmological, large-scale gravity+magnetohydrodynamical simulations with the moving mesh code AREPO. The overall framework builds upon the successes of the Illustris galaxy formation model, and includes prescriptions for star formation, stellar evolution, chemical enrichment, primordial and metal-line cooling of the gas, stellar feedback with galactic outflows, and black hole formation, growth and multi-mode feedback. In this paper we give a comprehensive description of the physical and numerical advances which form the core of the IllustrisTNG (The Next Generation) framework. We focus on the revised implementation of the galactic winds, of which we modify the directionality, velocity, thermal content, and energy scalings, and explore its effects on the galaxy population. As described in earlier works, the model also includes a new black hole driven kinetic feedback at low accretion rates, magnetohydrodynamics, and improvements to the numerical scheme. Using a suite of (25 Mpc $h^{-1}$)$^3$ cosmological boxes we assess the outcome of the new model at our fiducial resolution. The presence of a self-consistently amplified magnetic field is shown to have an important impact on the stellar content of $10^{12} M_{\rm sun}$ haloes and above. Finally, we demonstrate that the new galactic winds promise to solve key problems identified in Illustris in matching observational constraints and affecting the stellar content and sizes of the low mass end of the galaxy population.

Journal ArticleDOI
TL;DR: An overview for the new classification of periodontal and peri-implant diseases and conditions is presented, along with a condensed scheme for each of four workgroup sections, but readers are directed to the pertinent consensus reports and review papers for a thorough discussion of the rationale, criteria, and interpretation of the proposed classification.
Abstract: A classification scheme for periodontal and peri-implant diseases and conditions is necessary for clinicians to properly diagnose and treat patients as well as for scientists to investigate etiology, pathogenesis, natural history, and treatment of the diseases and conditions. This paper summarizes the proceedings of the World Workshop on the Classification of Periodontal and Peri-implant Diseases and Conditions. The workshop was co-sponsored by the American Academy of Periodontology (AAP) and the European Federation of Periodontology (EFP) and included expert participants from all over the world. Planning for the conference, which was held in Chicago on November 9 to 11, 2017, began in early 2015. An organizing committee from the AAP and EFP commissioned 19 review papers and four consensus reports covering relevant areas in periodontology and implant dentistry. The authors were charged with updating the 1999 classification of periodontal diseases and conditions and developing a similar scheme for peri-implant diseases and conditions. Reviewers and workgroups were also asked to establish pertinent case definitions and to provide diagnostic criteria to aid clinicians in the use of the new classification. All findings and recommendations of the workshop were agreed to by consensus. This introductory paper presents an overview for the new classification of periodontal and peri-implant diseases and conditions, along with a condensed scheme for each of four workgroup sections, but readers are directed to the pertinent consensus reports and review papers for a thorough discussion of the rationale, criteria, and interpretation of the proposed classification. Changes to the 1999 classification are highlighted and discussed. Although the intent of the workshop was to base classification on the strongest available scientific evidence, lower level evidence and expert opinion were inevitably used whenever sufficient research data were unavailable. The scope of this workshop was to align and update the classification scheme to the current understanding of periodontal and peri-implant diseases and conditions. This introductory overview presents the schematic tables for the new classification of periodontal and peri-implant diseases and conditions and briefly highlights changes made to the 1999 classification. It cannot present the wealth of information included in the reviews, case definition papers, and consensus reports that has guided the development of the new classification, and reference to the consensus and case definition papers is necessary to provide a thorough understanding of its use for either case management or scientific investigation. Therefore, it is strongly recommended that the reader use this overview as an introduction to these subjects. Accessing this publication online will allow the reader to use the links in this overview and the tables to view the source papers (Table 1).

Journal ArticleDOI
TL;DR: A fully convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time- domain speech separation, which significantly outperforms previous time–frequency masking methods in separating two- and three-speaker mixtures.
Abstract: Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications.
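The encoder, mask, decoder structure described here can be sketched compactly; below, a single 1×1 convolution stands in for the stacked dilated-convolution TCN separator, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

N, L, n_spk = 256, 16, 2      # basis filters, window length (samples), speakers

encoder = nn.Conv1d(1, N, kernel_size=L, stride=L // 2, bias=False)
separator = nn.Conv1d(N, n_spk * N, kernel_size=1)   # stand-in for the TCN
decoder = nn.ConvTranspose1d(N, 1, kernel_size=L, stride=L // 2, bias=False)

mixture = torch.randn(1, 1, 16000)                   # 1 s of audio at 16 kHz
rep = encoder(mixture)                               # (1, N, frames)
masks = torch.sigmoid(separator(rep)).view(1, n_spk, N, -1)
sources = torch.stack(
    [decoder(masks[:, s] * rep) for s in range(n_spk)], dim=1
)
print(sources.shape)                                 # (1, 2, 1, 16000)
```

Because everything operates on waveforms, there is no spectrogram latency and the phase/magnitude decoupling of time-frequency masking never arises, which is the paper's central point.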

Journal ArticleDOI
TL;DR: In this article, the authors used the IllustrisTNG simulations to study the non-linear correlation functions and power spectra of baryons, dark matter, galaxies and haloes over an exceptionally large range of scales.
Abstract: Hydrodynamical simulations of galaxy formation have now reached sufficient volume to make precision predictions for clustering on cosmologically relevant scales. Here we use our new IllustrisTNG simulations to study the non-linear correlation functions and power spectra of baryons, dark matter, galaxies and haloes over an exceptionally large range of scales. We find that baryonic effects increase the clustering of dark matter on small scales and damp the total matter power spectrum on scales up to k ~ 10 h/Mpc by 20%. The non-linear two-point correlation function of the stellar mass is close to a power-law over a wide range of scales and approximately invariant in time from very high redshift to the present. The two-point correlation function of the simulated galaxies agrees well with SDSS at its mean redshift z ~ 0.1, both as a function of stellar mass and when split according to galaxy colour, apart from a mild excess in the clustering of red galaxies in the stellar mass range 10^9-10^10 Msun/h^2. Given this agreement, the TNG simulations can make valuable theoretical predictions for the clustering bias of different galaxy samples. We find that the clustering length of the galaxy auto-correlation function depends strongly on stellar mass and redshift. Its power-law slope gamma is nearly invariant with stellar mass, but declines from gamma ~ 1.8 at redshift z=0 to gamma ~ 1.6 at redshift z ~ 1, beyond which the slope steepens again. We detect significant scale-dependencies in the bias of different observational tracers of large-scale structure, extending well into the range of the baryonic acoustic oscillations and causing nominal (yet fortunately correctable) shifts of the acoustic peaks of around ~5%.
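Measuring clustering in a simulation volume reduces to estimating the power spectrum of the overdensity field: FFT the field and average |δ_k|² in spherical shells of k. A toy sketch on a random field (not the TNG pipeline, which also handles mass assignment and shot noise):

```python
import numpy as np

n, box = 64, 100.0                       # grid cells per side, box size [Mpc/h]
rng = np.random.default_rng(0)
delta = rng.normal(size=(n, n, n))       # toy overdensity field delta(x)

delta_k = np.fft.rfftn(delta)
pk3d = np.abs(delta_k)**2 * box**3 / n**6   # standard P(k) estimator normalization

kf = 2 * np.pi / box                     # fundamental mode of the box
kx = np.fft.fftfreq(n, d=box / n) * 2 * np.pi
kz = np.fft.rfftfreq(n, d=box / n) * 2 * np.pi
kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)

bins = np.arange(kf, kmag.max(), kf)     # spherical shells of width kf
which = np.digitize(kmag.ravel(), bins)
pk = (np.bincount(which, weights=pk3d.ravel())
      / np.maximum(np.bincount(which), 1))
# pk[i] is the shell-averaged power; white noise gives a flat spectrum.
```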

Journal ArticleDOI
10 Aug 2018-Science
TL;DR: The development of microresonator-generated frequency combs is reviewed to map out how understanding and control of their generation is providing a new basis for precision technology and establish a nascent research field at the interface of soliton physics, frequency metrology, and integrated photonics.
Abstract: The development of compact, chip-scale optical frequency comb sources (microcombs) based on parametric frequency conversion in microresonators has seen applications in terabit optical coherent communications, atomic clocks, ultrafast distance measurements, dual-comb spectroscopy, and the calibration of astrophysical spectrometers, and has enabled the creation of photonic-chip integrated frequency synthesizers. Underlying these recent advances has been the observation of temporal dissipative Kerr solitons in microresonators, which represent self-enforcing, stationary, and localized solutions of a damped, driven, and detuned nonlinear Schrodinger equation, which was first introduced to describe spatial self-organization phenomena. The generation of dissipative Kerr solitons provides a mechanism by which coherent optical combs with bandwidth exceeding one octave can be synthesized and has given rise to a host of phenomena, such as the Stokes soliton, soliton crystals, soliton switching, or dispersive waves. Soliton microcombs are compact, are compatible with wafer-scale processing, operate at low power, can operate with gigahertz to terahertz line spacing, and can enable the implementation of frequency combs in remote and mobile environments outside the laboratory, on Earth, airborne, or in outer space.
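The damped, driven, detuned nonlinear Schrodinger equation mentioned here (the Lugiato-Lefever equation in normalized form) is straightforward to integrate with a split-step Fourier scheme. A minimal sketch with illustrative pump, detuning, and seed values:

```python
import numpy as np

n = 512
theta = np.linspace(-np.pi, np.pi, n, endpoint=False)   # angle around the ring
k = np.fft.fftfreq(n, d=theta[1] - theta[0]) * 2 * np.pi
alpha, F, dt, steps = 3.0, 1.6, 1e-3, 20000             # detuning, pump, step

# Seed a soliton-like pulse on a low continuous-wave background.
psi = 0.5 + 1.5 / np.cosh(4 * theta)

# Exact linear substep: loss, detuning, and anomalous dispersion (+i d^2/dtheta^2).
lin = np.exp((-(1 + 1j * alpha) - 1j * k**2) * dt)
for _ in range(steps):
    psi = np.fft.ifft(lin * np.fft.fft(psi))               # linear substep
    psi = psi * np.exp(1j * np.abs(psi)**2 * dt) + F * dt  # Kerr phase + pump

print("peak field:", np.abs(psi).max())   # a persisting pulse = Kerr soliton
```

A stationary localized pulse that survives the cavity loss is exactly the dissipative Kerr soliton whose comb spectrum the review describes.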

Journal ArticleDOI
TL;DR: Both anxious/distressed specifier and mixed-features specifier were associated with early onset, poor course and functioning, and suicidality in US adults, and much remains to be learned about the DSM-5 MDD specifiers in the general population.
Abstract: Importance No US national data are available on the prevalence and correlates of DSM-5 –defined major depressive disorder (MDD) or on MDD specifiers as defined in DSM-5 . Objective To present current nationally representative findings on the prevalence, correlates, psychiatric comorbidity, functioning, and treatment of DSM-5 MDD and initial information on the prevalence, severity, and treatment of DSM-5 MDD severity, anxious/distressed specifier, and mixed-features specifier, as well as cases that would have been characterized as bereavement in DSM-IV . Design, Setting, and Participants In-person interviews with a representative sample of US noninstitutionalized civilian adults (≥18 years) (n = 36 309) who participated in the 2012-2013 National Epidemiologic Survey on Alcohol and Related Conditions III (NESARC-III). Data were collected from April 2012 to June 2013 and were analyzed in 2016-2017. Main Outcomes and Measures Prevalence of DSM-5 MDD and the DSM-5 specifiers. Odds ratios (ORs), adjusted ORs (aORs), and 95% CIs indicated associations with demographic characteristics and other psychiatric disorders. Results Of the 36 309 adult participants in NESARC-III, 12-month and lifetime prevalences of MDD were 10.4% and 20.6%, respectively. Odds of 12-month MDD were significantly lower in men (OR, 0.5; 95% CI, 0.46-0.55) and in African American (OR, 0.6; 95% CI, 0.54-0.68), Asian/Pacific Islander (OR, 0.6; 95% CI, 0.45-0.67), and Hispanic (OR, 0.7; 95% CI, 0.62-0.78) adults than in white adults and were higher in younger adults (age range, 18-29 years; OR, 3.0; 95% CI, 2.48-3.55) and those with low incomes ($19 999 or less; OR, 1.7; 95% CI, 1.49-2.04). Associations of MDD with psychiatric disorders ranged from an aOR of 2.1 (95% CI, 1.84-2.35) for specific phobia to an aOR of 5.7 (95% CI, 4.98-6.50) for generalized anxiety disorder. Associations of MDD with substance use disorders ranged from an aOR of 1.8 (95% CI, 1.63-2.01) for alcohol to an aOR of 3.0 (95% CI, 2.57-3.55) for any drug. Most lifetime MDD cases were moderate (39.7%) or severe (49.5%). Almost 70% with lifetime MDD had some type of treatment. Functioning among those with severe MDD was approximately 1 SD below the national mean. Among 12.9% of those with lifetime MDD, all episodes occurred just after the death of someone close and lasted less than 2 months. The anxious/distressed specifier characterized 74.6% of MDD cases, and the mixed-features specifier characterized 15.5%. Controlling for severity, both specifiers were associated with early onset, poor course and functioning, and suicidality. Conclusions and Relevance Among US adults, DSM-5 MDD is highly prevalent, comorbid, and disabling. While most cases received some treatment, a substantial minority did not. Much remains to be learned about the DSM-5 MDD specifiers in the general population.
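The associations reported here are odds ratios with 95% confidence intervals; the basic computation from a 2×2 table is a one-liner on the log scale. Counts below are illustrative, not NESARC-III data:

```python
import numpy as np

#                 MDD+   MDD-
a, b = 320, 2680   # index group (e.g., age 18-29)
c, d = 410, 9590   # reference group

or_ = (a * d) / (b * c)
se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR) (Woolf method)
lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```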

Proceedings ArticleDOI
27 May 2018
TL;DR: DeepTest is a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes and systematically explore different parts of the DNN logic by generating test inputs that maximize the numbers of activated neurons.
Abstract: Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors like camera, LiDAR, etc., can drive without any human intervention. Most major manufacturers including Tesla, GM, Ford, BMW, and Waymo/Google are working on building and testing different types of autonomous vehicles. The lawmakers of several US states including California, Texas, and New York have passed new legislation to fast-track the process of testing and deployment of autonomous vehicles on their roads. However, despite their spectacular progress, DNNs, just like traditional software, often demonstrate incorrect or unexpected corner-case behaviors that can lead to potentially fatal collisions. Several such real-world accidents involving autonomous cars have already happened including one which resulted in a fatality. Most existing testing techniques for DNN-driven vehicles are heavily dependent on the manual collection of test data under different driving conditions which become prohibitively expensive as the number of test conditions increases. In this paper, we design, implement, and evaluate DeepTest, a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes. First, our tool is designed to automatically generate test cases leveraging real-world changes in driving conditions like rain, fog, lighting conditions, etc. DeepTest systematically explores different parts of the DNN logic by generating test inputs that maximize the numbers of activated neurons. DeepTest found thousands of erroneous behaviors under different realistic driving conditions (e.g., blurring, rain, fog, etc.), many of which lead to potentially fatal crashes in three top performing DNNs in the Udacity self-driving car challenge.
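The neuron-coverage objective DeepTest maximizes is simple to measure: the fraction of neurons whose activation exceeds a threshold on at least one input. A minimal sketch with PyTorch forward hooks on a toy network (model, threshold, and inputs are all illustrative):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU())

activated = {}
def make_hook(name):
    def hook(module, inputs, output):
        # A neuron counts as covered if it fires above threshold on any input.
        activated[name] = (output > 0.1).any(dim=0)
    return hook

for name, layer in model.named_modules():
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(make_hook(name))

model(torch.randn(100, 10))                       # run a batch of test inputs
covered = torch.cat(list(activated.values()))
print(f"neuron coverage: {covered.float().mean():.1%}")
```

Test generation then searches for input transformations (rain, fog, blur) that raise this fraction, on the theory that unexercised neurons hide untested behaviors.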