Showing papers by "Stanford University" published in 2018
••
Mayo Clinic; Rush University; University of Gothenburg; Alzheimer's Association; Silver Spring Networks; Biogen Idec; Washington University in St. Louis; University of California, Berkeley; University of Cologne; University of Pennsylvania; Stanford University; National Institutes of Health; University of California, San Francisco; University of Melbourne; VU University Medical Center; Eli Lilly and Company; Brigham and Women's Hospital
TL;DR: This research framework seeks to create a common language with which investigators can generate and test hypotheses about the interactions among different pathologic processes (denoted by biomarkers) and cognitive symptoms, and envisions that defining AD as a biological construct will enable a more accurate characterization and understanding of the sequence of events that lead to the cognitive impairment associated with AD.
Abstract: In 2011, the National Institute on Aging and Alzheimer's Association created separate diagnostic recommendations for the preclinical, mild cognitive impairment, and dementia stages of Alzheimer's disease. Scientific progress in the interim led to an initiative by the National Institute on Aging and Alzheimer's Association to update and unify the 2011 guidelines. This unifying update is labeled a "research framework" because its intended use is for observational and interventional research, not routine clinical care. In the National Institute on Aging and Alzheimer's Association Research Framework, Alzheimer's disease (AD) is defined by its underlying pathologic processes that can be documented by postmortem examination or in vivo by biomarkers. The diagnosis is not based on the clinical consequences of the disease (i.e., symptoms/signs) in this research framework, which shifts the definition of AD in living people from a syndromal to a biological construct. The research framework focuses on the diagnosis of AD with biomarkers in living persons. Biomarkers are grouped into those of β-amyloid deposition, pathologic tau, and neurodegeneration [AT(N)]. This AT(N) classification system groups different biomarkers (imaging and biofluids) by the pathologic process each measures. The AT(N) system is flexible in that new biomarkers can be added to the three existing AT(N) groups, and new biomarker groups beyond AT(N) can be added when they become available. We focus on AD as a continuum, and cognitive staging may be accomplished using continuous measures. However, we also outline two different categorical cognitive schemes for staging the severity of cognitive impairment: a scheme using three traditional syndromal categories and a six-stage numeric scheme.
It is important to stress that this framework seeks to create a common language with which investigators can generate and test hypotheses about the interactions among different pathologic processes (denoted by biomarkers) and cognitive symptoms. We appreciate the concern that this biomarker-based research framework has the potential to be misused. Therefore, we emphasize, first, it is premature and inappropriate to use this research framework in general medical practice. Second, this research framework should not be used to restrict alternative approaches to hypothesis testing that do not use biomarkers. There will be situations where biomarkers are not available or requiring them would be counterproductive to the specific research goals (discussed in more detail later in the document). Thus, biomarker-based research should not be considered a template for all research into age-related cognitive impairment and dementia; rather, it should be applied when it is fit for the purpose of the specific research goals of a study. Importantly, this framework should be examined in diverse populations. Although it is possible that β-amyloid plaques and neurofibrillary tau deposits are not causal in AD pathogenesis, it is these abnormal protein deposits that define AD as a unique neurodegenerative disease among different disorders that can lead to dementia. We envision that defining AD as a biological construct will enable a more accurate characterization and understanding of the sequence of events that lead to cognitive impairment that is associated with AD, as well as the multifactorial etiology of dementia. This approach also will enable a more precise approach to interventional trials where specific pathways can be targeted in the disease process and in the appropriate people.
5,126 citations
••
Lorenzo Galluzzi, Ilio Vitale, Stuart A. Aaronson +183 more • Institutions (111)
TL;DR: The Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives.
Abstract: Over the past decade, the Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives. Since the field continues to expand and novel mechanisms that orchestrate multiple cell death pathways are unveiled, we propose an updated classification of cell death subroutines focusing on mechanistic and essential (as opposed to correlative and dispensable) aspects of the process. As we provide molecularly oriented definitions of terms including intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic cell death, NETotic cell death, lysosome-dependent cell death, autophagy-dependent cell death, immunogenic cell death, cellular senescence, and mitotic catastrophe, we discuss the utility of neologisms that refer to highly specialized instances of these processes. The mission of the NCCD is to provide a widely accepted nomenclature on cell death in support of the continued development of the field.
3,301 citations
••
Institute for Systems Biology; BC Cancer Agency; University of California, San Francisco; University of North Carolina at Chapel Hill; Columbia University; Discovery Institute; Massachusetts Institute of Technology; Arizona State University; Sage Bionetworks; Harvard University; Johns Hopkins University; Stanford University; University of Calgary; Université libre de Bruxelles; University of Texas MD Anderson Cancer Center; Medical College of Wisconsin; Qatar Airways; Cold Spring Harbor Laboratory; University of São Paulo; Henry Ford Hospital; University of Alabama at Birmingham; Van Andel Institute; Stony Brook University
TL;DR: An extensive immunogenomic analysis of more than 10,000 tumors comprising 33 diverse cancer types by utilizing data compiled by TCGA identifies six immune subtypes that encompass multiple cancer types and are hypothesized to define immune response patterns impacting prognosis.
3,246 citations
••
University of Pennsylvania; University of Texas Southwestern Medical Center; University of Oslo; Boston Children's Hospital; University of Utah; Université de Montréal; Goethe University Frankfurt; University of Minnesota; Children's Mercy Hospital; Emory University; Ghent University; Kyoto University; Stanford University; Duke University; Oregon Health & Science University; University of Michigan; Medical University of Vienna; University of Paris; Royal Children's Hospital; University of Milan; University of Toronto; Novartis; University of Southern California
TL;DR: In this global study of CAR T‐cell therapy, a single infusion of tisagenlecleucel provided durable remission with long‐term persistence in pediatric and young adult patients with relapsed or refractory B‐cell ALL, with transient high‐grade toxic effects.
Abstract: Background In a single-center phase 1–2a study, the anti-CD19 chimeric antigen receptor (CAR) T-cell therapy tisagenlecleucel produced high rates of complete remission and was associated with serious but mainly reversible toxic effects in children and young adults with relapsed or refractory B-cell acute lymphoblastic leukemia (ALL). Methods We conducted a phase 2, single-cohort, 25-center, global study of tisagenlecleucel in pediatric and young adult patients with CD19+ relapsed or refractory B-cell ALL. The primary end point was the overall remission rate (the rate of complete remission or complete remission with incomplete hematologic recovery) within 3 months. Results For this planned analysis, 75 patients received an infusion of tisagenlecleucel and could be evaluated for efficacy. The overall remission rate within 3 months was 81%, with all patients who had a response to treatment found to be negative for minimal residual disease, as assessed by means of flow cytometry. The rates of event-f...
3,237 citations
••
Jeffrey D. Stanaway, Ashkan Afshin, Emmanuela Gakidou, Stephen S Lim +1050 more • Institutions (346)
TL;DR: This study estimated levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs) by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017 and explored the relationship between development and risk exposure.
2,910 citations
••
TL;DR: The main roles of materials science in the development of LIBs are discussed, with a statement of caution for modern battery research along with a brief discussion of beyond-lithium-ion battery chemistries.
Abstract: Over the past 30 years, significant commercial and academic progress has been made on Li-based battery technologies. From the early Li-metal anode iterations to the current commercial Li-ion batteries (LIBs), the story of the Li-based battery is full of breakthroughs and backtracking steps. This review discusses the main roles of materials science in the development of LIBs. As LIB research progresses and the materials of interest change, different emphases are placed on the various subdisciplines of materials science. Early work on LIBs focused more on solid-state physics, whereas near the end of the 20th century researchers began to focus more on the morphological aspects (surface coating, porosity, size, and shape) of electrode materials. While it is easy to point out which specific cathode and anode materials are currently good candidates for the next generation of batteries, it is difficult to explain exactly why those are chosen. This review aims to draw a complete developmental story of the LIB for the reader, along with an explanation of the reasons behind the various technological shifts. The review ends with a statement of caution for modern battery research, along with a brief discussion of beyond-lithium-ion battery chemistries.
2,867 citations
••
19 Jul 2018 • TL;DR: A novel method based on highly efficient random walks is developed to structure the convolutions, along with a novel training strategy that relies on harder-and-harder training examples to improve the robustness and convergence of the model.
Abstract: Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A/B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.
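The abstract's core trick, defining a node's convolution neighborhood by the most-visited nodes of short random walks rather than by its full adjacency list, can be sketched in a few lines. This is an illustrative toy only; the function name, parameters, and graph are ours, not the paper's reference implementation.

```python
import random
from collections import Counter

def random_walk_neighborhood(adj, node, num_walks=200, walk_len=3, top_t=3, seed=0):
    # Run short random walks from `node` and keep the top-T most-visited
    # nodes as its "convolution neighborhood", each with a normalized
    # importance weight (visit count / total visits).
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(num_walks):
        cur = node
        for _ in range(walk_len):
            neighbors = adj[cur]
            if not neighbors:
                break
            cur = rng.choice(neighbors)
            if cur != node:
                visits[cur] += 1
    total = sum(visits.values())
    if total == 0:
        return []
    return [(n, count / total) for n, count in visits.most_common(top_t)]

# Tiny item graph: node 0 links strongly to 1 and 2, weakly to 3.
adj = {0: [1, 2, 1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
print(random_walk_neighborhood(adj, 0))
```

The importance weights can then reweight neighbor features during aggregation, which is why the approach scales to graphs far larger than those handled by full-neighborhood GCNs.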
2,647 citations
•
03 Jul 2018 • TL;DR: A novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model is proposed that adapts representations at both the pixel level and the feature level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs.
Abstract: Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models have shown tremendous progress towards adapting to new environments by focusing either on discovering domain invariant representations or by mapping between unpaired image domains. While feature space methods are difficult to interpret and sometimes fail to capture pixel-level and low-level domain shifts, image space methods sometimes fail to incorporate high level semantic knowledge relevant for the end task. We propose a model which adapts between domains using both generative image space alignment and latent representation space alignment. Our approach, Cycle-Consistent Adversarial Domain Adaptation (CyCADA), guides transfer between domains according to a specific discriminatively trained task and avoids divergence by enforcing consistency of the relevant semantics before and after adaptation. We evaluate our method on a variety of visual recognition and prediction settings, including digit classification and semantic segmentation of road scenes, advancing state-of-the-art performance for unsupervised adaptation from synthetic to real world driving domains.
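The cycle-consistency idea, mapping source to target and back and penalizing reconstruction error, can be shown in miniature. This is a sketch with scalar "images" and hypothetical generator stand-ins, not the paper's model:

```python
def l1(a, b):
    # Mean absolute error between two equally sized "images" (here, lists).
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_loss(x, g_st, g_ts):
    # Map source -> target -> back to source, then penalize reconstruction
    # error; in CyCADA-style training this term keeps the adapted images
    # semantically consistent with their originals.
    return l1(x, [g_ts(v) for v in (g_st(u) for u in x)])

x = [0.2, 0.5, 0.9]                                            # toy "source image"
perfect = cycle_loss(x, lambda v: v + 1.0, lambda v: v - 1.0)  # exact inverses: loss ~ 0
lossy = cycle_loss(x, lambda v: v + 1.0, lambda v: 0.9 * v)    # not inverses: loss > 0
print(perfect, lossy)
```

In the full model this penalty is one term in a sum that also includes adversarial image- and feature-space losses and the task loss mentioned in the abstract.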
2,459 citations
••
University of Southern California; Stanford University; University of Iowa; Brown University; Ohio State University; University of Cincinnati; Medical University of South Carolina; New York University; Harvard University; University of Texas Health Science Center at Houston; University of Pennsylvania; Northwestern University; University of Calgary
TL;DR: Endovascular thrombectomy for ischemic stroke 6 to 16 hours after a patient was last known to be well, plus standard medical therapy, resulted in better functional outcomes than standard medical therapy alone among patients with proximal middle-cerebral-artery or internal-carotid-artery occlusion and a region of tissue that was ischemic but not yet infarcted.
Abstract: Background Thrombectomy is currently recommended for eligible patients with stroke who are treated within 6 hours after the onset of symptoms. Methods We conducted a multicenter, randomized, open-label trial, with blinded outcome assessment, of thrombectomy in patients 6 to 16 hours after they were last known to be well and who had remaining ischemic brain tissue that was not yet infarcted. Patients with proximal middle-cerebral-artery or internal-carotid-artery occlusion, an initial infarct size of less than 70 ml, and a ratio of the volume of ischemic tissue on perfusion imaging to infarct volume of 1.8 or more were randomly assigned to endovascular therapy (thrombectomy) plus standard medical therapy (endovascular-therapy group) or standard medical therapy alone (medical-therapy group). The primary outcome was the ordinal score on the modified Rankin scale (range, 0 to 6, with higher scores indicating greater disability) at day 90. Results The trial was conducted at 38 U.S. centers and termina...
2,292 citations
••
TL;DR: A modified and improved SCARE checklist is presented, following a Delphi consensus exercise completed to update the SCARE guidelines.
2,195 citations
••
18 Jun 2018 • TL;DR: This work operates directly on raw point clouds by popping up RGB-D scans and leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall even for small objects.
Abstract: In this work, we study 3D object detection from RGBD data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.
••
Northeastern University; Tufts Medical Center; McGill University; Johns Hopkins University; Utrecht University; Vanderbilt University Medical Center; Brigham and Women's Hospital; New York University; McMaster University; Ohio State University; Radboud University Nijmegen; University of Western Ontario; London Health Sciences Centre; University of Montpellier; RMIT University; University of Poitiers; Maine Medical Center; University of Washington; University of Chicago; Intermountain Healthcare; Deakin University; Johns Hopkins University School of Medicine; Yale University; University of Grenoble; University of California, San Francisco; Monash University; Case Western Reserve University; New York Medical College; University of Toronto; Stanford University
TL;DR: Substantial agreement was found among a large, interdisciplinary cohort of international experts regarding evidence supporting recommendations, and the remaining literature gaps in the assessment, prevention, and treatment of Pain, Agitation/sedation, Delirium, Immobility (mobilization/rehabilitation), and Sleep (disruption) in critically ill adults.
Abstract: Objective: To update and expand the 2013 Clinical Practice Guidelines for the Management of Pain, Agitation, and Delirium in Adult Patients in the ICU. Design: Thirty-two international experts, four methodologists, and four critical illness survivors met virtually at least monthly. All section groups g...
••
TL;DR: A genome-wide association meta-analysis of individuals with clinically assessed or self-reported depression identifies 44 independent and significant loci and finds important relationships of genetic risk for major depression with educational attainment, body mass, and schizophrenia.
Abstract: Major depressive disorder (MDD) is a common illness accompanied by considerable morbidity, mortality, costs, and heightened risk of suicide. We conducted a genome-wide association meta-analysis based on 135,458 cases and 344,901 controls and identified 44 independent and significant loci. The genetic findings were associated with clinical features of major depression and implicated brain regions exhibiting anatomical differences in cases. Targets of antidepressant medications and genes involved in gene splicing were enriched for smaller association signal. We found important relationships of genetic risk for major depression with educational attainment, body mass, and schizophrenia: lower educational attainment and higher body mass were putatively causal, whereas major depression and schizophrenia reflected a partly shared biological etiology. All humans carry lesser or greater numbers of genetic risk factors for major depression. These findings help refine the basis of major depression and imply that a continuous measure of risk underlies the clinical phenotype.
•
01 Oct 2018 • TL;DR: In this paper, the expressive power of GNNs to capture different graph structures is analyzed and a simple architecture for graph representation learning is proposed. The results characterize the discriminative power of popular GNN variants and show that they cannot learn to distinguish certain simple graph structures.
Abstract: Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
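The Weisfeiler-Lehman connection in this abstract can be illustrated concretely. Below is a minimal 1-WL color-refinement sketch (our own toy code, not the paper's): it separates a 6-cycle from a 6-node path, but, like the GNN variants analyzed here, it cannot separate a 6-cycle from two disjoint triangles, a classic pair of simple structures with identical refinement histograms.

```python
from collections import Counter

def wl_histogram(adj, rounds=3):
    # 1-WL color refinement: repeatedly replace each node's color with a
    # hash of (its own color, sorted multiset of its neighbors' colors).
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    # Differing color histograms certify non-isomorphism; equal histograms
    # do NOT certify isomorphism.
    return Counter(colors.values())

def cycle(n, offset=0):
    return {i + offset: [(i - 1) % n + offset, (i + 1) % n + offset]
            for i in range(n)}

c6 = cycle(6)                                # one 6-cycle
two_triangles = {**cycle(3), **cycle(3, 3)}  # two disjoint 3-cycles
p6 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}  # 6-node path

print(wl_histogram(c6) == wl_histogram(two_triangles))  # True: 1-WL cannot tell them apart
print(wl_histogram(c6) == wl_histogram(p6))             # False: degree profiles already differ
```

A maximally expressive GNN in the paper's sense matches exactly this test's discriminative power, which is why sum aggregation over neighbor multisets (injective on multisets) is preferred over mean or max.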
••
University of Texas Southwestern Medical Center; Stanford University; University of Southern California; University of California, Los Angeles; West Virginia University; Harvard University; University of Colorado Boulder; Vanderbilt University; Case Western Reserve University; Cincinnati Children's Hospital Medical Center; Cleveland Clinic; Fox Chase Cancer Center; University of Pennsylvania; University of Washington; Seattle Children's; University of Texas MD Anderson Cancer Center; Nemours Foundation; International University, Cambodia; Oregon Health & Science University
TL;DR: Larotrectinib had marked and durable antitumor activity in patients with TRK fusion–positive cancer, regardless of the age of the patient or of the tumor type.
Abstract: Background Fusions involving one of three tropomyosin receptor kinases (TRK) occur in diverse cancers in children and adults. We evaluated the efficacy and safety of larotrectinib, a highly selective TRK inhibitor, in adults and children who had tumors with these fusions. Methods We enrolled patients with consecutively and prospectively identified TRK fusion–positive cancers, detected by molecular profiling as routinely performed at each site, into one of three protocols: a phase 1 study involving adults, a phase 1–2 study involving children, or a phase 2 study involving adolescents and adults. The primary end point for the combined analysis was the overall response rate according to independent review. Secondary end points included duration of response, progression-free survival, and safety. Results A total of 55 patients, ranging in age from 4 months to 76 years, were enrolled and treated. Patients had 17 unique TRK fusion–positive tumor types. The overall response rate was 75% (95% confidence ...
••
TL;DR: A compendium of single-cell transcriptomic data from the model organism Mus musculus that comprises more than 100,000 cells from 20 organs and tissues is presented, representing a new resource for cell biology and enabling the direct and controlled comparison of gene expression in cell types that are shared between tissues.
Abstract: Here we present a compendium of single-cell transcriptomic data from the model organism Mus musculus that comprises more than 100,000 cells from 20 organs and tissues. These data represent a new resource for cell biology, reveal gene expression in poorly characterized cell populations and enable the direct and controlled comparison of gene expression in cell types that are shared between tissues, such as T lymphocytes and endothelial cells from different anatomical locations. Two distinct technical approaches were used for most organs: one approach, microfluidic droplet-based 3'-end counting, enabled the survey of thousands of cells at relatively low coverage, whereas the other, full-length transcript analysis based on fluorescence-activated cell sorting, enabled the characterization of cell types with high sensitivity and coverage. The cumulative data provide the foundation for an atlas of transcriptomic cell biology.
••
University of Oxford; Oxford Health NHS Foundation Trust; Kyoto University; University of Bern; Sorbonne; Paris Descartes University; Cochrane Collaboration; Warneford Hospital; Technische Universität München; Radboud University Nijmegen Medical Centre; Oregon Health & Science University; University of Bristol; Stanford University
TL;DR: This study aimed to update and expand previous work comparing and ranking antidepressants for the acute treatment of adults with unipolar major depressive disorder, and found that all antidepressants were more effective than placebo.
••
Stockholm Resilience Centre; Stockholm University; University of Copenhagen; University of Exeter; Royal Swedish Academy of Sciences; University of Arizona; Scott Polar Research Institute; Stanford University; Université catholique de Louvain; Potsdam Institute for Climate Impact Research; Australian National University; Wageningen University and Research Centre; University of Potsdam
TL;DR: The risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced is explored.
Abstract: We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a "Hothouse Earth" pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene. We examine the evidence that such a threshold might exist and where it might be. If the threshold is crossed, the resulting trajectory would likely cause serious disruptions to ecosystems, society, and economies. Collective human action is required to steer the Earth System away from a potential threshold and stabilize it in a habitable interglacial-like state. Such action entails stewardship of the entire Earth System-biosphere, climate, and societies-and could include decarbonization of the global economy, enhancement of biosphere carbon sinks, behavioral changes, technological innovations, new governance arrangements, and transformed social values.
••
University of Minnesota; University of Colorado Boulder; VU University Amsterdam; Harvard University; University of Southern California; University of Queensland; University of Tartu; Erasmus University Rotterdam; Hospital for Special Surgery; University of Copenhagen; Statens Serum Institut; Broad Institute; University of Essex; University of Edinburgh; University of Cambridge; University Hospital of Lausanne; Geisinger Health System; Wenzhou Medical College; Stanford University; University of North Carolina at Chapel Hill; University of Wisconsin-Madison; The Feinstein Institute for Medical Research; Hofstra University; University of Dundee; University of Toronto; Princeton University; Queen's University; New York University Shanghai; National Bureau of Economic Research; Karolinska Institutet; Uppsala University; University of Lausanne; New York University; Stockholm School of Economics
TL;DR: A joint (multi-phenotype) analysis of educational attainment and three related cognitive phenotypes generates polygenic scores that explain 11–13% of the variance in educational attainment and 7–10% of the variance in cognitive performance, which substantially increases the utility of polygenic scores as tools in research.
Abstract: Here we conducted a large-scale genetic association analysis of educational attainment in a sample of approximately 1.1 million individuals and identified 1,271 independent genome-wide-significant SNPs. For the SNPs taken together, we found evidence of heterogeneous effects across environments. The SNPs implicate genes involved in brain-development processes and neuron-to-neuron communication. In a separate analysis of the X chromosome, we identified 10 independent genome-wide-significant SNPs and estimated a SNP heritability of around 0.3% in both men and women, consistent with partial dosage compensation. A joint (multi-phenotype) analysis of educational attainment and three related cognitive phenotypes generates polygenic scores that explain 11–13% of the variance in educational attainment and 7–10% of the variance in cognitive performance. This prediction accuracy substantially increases the utility of polygenic scores as tools in research.
••
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars, and the use of an efficient parametrization of the defining function p(ρ) of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as R1 = 10.8 (+2.0/−1.7) km for the heavier star and R2 = 10.7 (+2.1/−1.5) km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙, as required from electromagnetic observations, and employ the equation-of-state parametrization, we further constrain R1 = 11.9 (+1.4/−1.4) km and R2 = 11.9 (+1.4/−1.4) km at the 90% credible level. Finally, we obtain constraints on p(ρ) at supranuclear densities, with the pressure at twice nuclear saturation density measured at 3.5 (+2.7/−1.7) × 10^34 dyn cm^−2 at the 90% level.
••
08 Sep 2018 • TL;DR: In this article, a sequential model-based optimization (SMBO) strategy is proposed to search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space.
Abstract: We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.
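A toy sketch of the progressive SMBO loop described above, with a made-up operation set and a stand-in accuracy function replacing actual model training (the real method trains CNN cells and uses a learned RNN/MLP surrogate; everything below is illustrative):

```python
OPS = ["conv3", "conv5", "pool", "identity"]  # toy cell search space

def true_accuracy(arch):
    # Stand-in for actually training the candidate network; in the
    # paper this is the expensive step the surrogate helps avoid.
    score = {"conv3": 0.3, "conv5": 0.25, "pool": 0.1, "identity": 0.05}
    return sum(score[op] for op in arch)

def fit_surrogate(history):
    # Tiny surrogate: mean observed accuracy of architectures
    # containing each op (the paper learns a richer predictor).
    totals = {op: 0.0 for op in OPS}
    counts = {op: 0 for op in OPS}
    for arch, acc in history:
        for op in arch:
            totals[op] += acc
            counts[op] += 1
    return {op: totals[op] / counts[op] if counts[op] else 0.0 for op in OPS}

def predict(model, arch):
    return sum(model[op] for op in arch)

def smbo_search(max_len=3, beam=4):
    # Start with every length-1 architecture and evaluate them all.
    frontier = [(op,) for op in OPS]
    history = [(a, true_accuracy(a)) for a in frontier]
    for _ in range(max_len - 1):
        model = fit_surrogate(history)
        # Progressively expand by one op; rank candidates with the
        # cheap surrogate and only evaluate the top `beam` of them.
        candidates = [a + (op,) for a in frontier for op in OPS]
        candidates.sort(key=lambda a: predict(model, a), reverse=True)
        frontier = candidates[:beam]
        history += [(a, true_accuracy(a)) for a in frontier]
    return max(history, key=lambda h: h[1])[0]
```

The efficiency gain comes from the `beam` cut: only a few of the exponentially many expanded structures are ever evaluated for real, while the surrogate is refit on every batch of new results.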
••
University of Southern California1, Duke University2, Stockholm School of Economics3, Center for Open Science4, University of Virginia5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, Research Institute of Industrial Economics11, New York University12, Cardiff University13, Mathematica Policy Research14, Northwestern University15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries, in order to reduce the rate of false-positive findings.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
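As a quick illustration of what the stricter default implies for a two-sided z test, the critical statistic moves from about 1.96 to about 2.81 (computed here from the standard normal via `erfc`):

```python
import math

def two_sided_p(z):
    # Two-sided p-value for a z statistic under the standard normal.
    return math.erfc(z / math.sqrt(2))

def z_for_p(p, lo=0.0, hi=10.0):
    # Invert two_sided_p by bisection (p decreases as z grows).
    for _ in range(100):
        mid = (lo + hi) / 2
        if two_sided_p(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z_conventional = z_for_p(0.05)   # ~1.96, the 0.05 threshold
z_proposed = z_for_p(0.005)      # ~2.81, the proposed 0.005 threshold
```

A result with z = 2.3, "significant" under the old default, would be merely "suggestive" under the proposed one.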
••
TL;DR: A door‐to‐intervention time of <90 minutes is suggested, based on a framework of 30‐30‐30 minutes, for the management of the patient with a ruptured aneurysm, and the Vascular Quality Initiative mortality risk score is suggested for mutual decision‐making with patients considering aneurysm repair.
••
TL;DR: A primer on the CIBERSORT method is provided and its use for characterizing TILs in tumor samples profiled by microarray or RNA-Seq is illustrated.
Abstract: Tumor infiltrating leukocytes (TILs) are an integral component of the tumor microenvironment and have been found to correlate with prognosis and response to therapy. Methods to enumerate immune subsets such as immunohistochemistry or flow cytometry suffer from limitations in phenotypic markers and can be challenging to practically implement and standardize. An alternative approach is to acquire aggregative high dimensional data from cellular mixtures and to subsequently infer the cellular components computationally. We recently described CIBERSORT, a versatile computational method for quantifying cell fractions from bulk tissue gene expression profiles (GEPs). Combining support vector regression with prior knowledge of expression profiles from purified leukocyte subsets, CIBERSORT can accurately estimate the immune composition of a tumor biopsy. In this chapter, we provide a primer on the CIBERSORT method and illustrate its use for characterizing TILs in tumor samples profiled by microarray or RNA-Seq.
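The deconvolution idea can be sketched as solving mixture = signature × fractions. CIBERSORT uses ν-support vector regression for robustness; the ordinary least-squares version below (with clipping and renormalization, and a made-up two-cell-type signature) is only a simplified stand-in to show the shape of the problem:

```python
import numpy as np

def deconvolve(signature, mixture):
    """Estimate cell-type fractions from a bulk expression profile.

    signature: (genes x cell_types) reference expression profiles
    mixture:   (genes,) bulk tissue gene expression profile
    Simplified stand-in: least squares with clipping; CIBERSORT
    itself uses nu-SVR for noise and collinearity robustness.
    """
    coef, *_ = np.linalg.lstsq(signature, mixture, rcond=None)
    coef = np.clip(coef, 0, None)   # fractions cannot be negative
    return coef / coef.sum()        # normalize to sum to 1

# Toy example: two cell types, three marker genes, known 70/30 mix.
S = np.array([[10.0, 1.0],
              [1.0, 8.0],
              [5.0, 5.0]])
true_fractions = np.array([0.7, 0.3])
bulk = S @ true_fractions
estimated = deconvolve(S, bulk)
```

In the noiseless toy case the fractions are recovered exactly; with real GEPs, the choice of signature genes and the SVR's tolerance to noisy, unmodeled cell types are what make the method practical.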
••
University of Hawaii at Manoa1, University of Pennsylvania2, University of Michigan3, Harvard University4, GlaxoSmithKline5, Imperial College London6, Princess Margaret Cancer Centre7, University of Toronto8, Vanderbilt University9, Drexel University10, Carnegie Mellon University11, Stanford University12, University of Virginia13, Broad Institute14, Toyota Technological Institute at Chicago15, Trinity University16, Princeton University17, National Institutes of Health18, Howard Hughes Medical Institute19, University of Florida20, University of Colorado Denver21, University of Münster22, Georgetown University Medical Center23, Washington University in St. Louis24, Brown University25, Morgridge Institute for Research26, University of Wisconsin-Madison27
TL;DR: It is found that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art.
Abstract: Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems-patient classification, fundamental biological processes and treatment of patients-and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.
••
29 Mar 2018
TL;DR: A recurrent sequence-to-sequence model observes motion histories and predicts future behavior, using a novel pooling mechanism to aggregate information across people, and outperforms prior work in terms of accuracy, variety, collision avoidance, and computational complexity.
Abstract: Understanding human motion behavior is critical for autonomous moving platforms (like self-driving cars and social robots) if they are to navigate human-centric environments. This is challenging because human motion is inherently multimodal: given a history of human motion paths, there are many socially plausible ways that people could move in the future. We tackle this problem by combining tools from sequence prediction and generative adversarial networks: a recurrent sequence-to-sequence model observes motion histories and predicts future behavior, using a novel pooling mechanism to aggregate information across people. We predict socially plausible futures by training adversarially against a recurrent discriminator, and encourage diverse predictions with a novel variety loss. Through experiments on several datasets we demonstrate that our approach outperforms prior work in terms of accuracy, variety, collision avoidance, and computational complexity.
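The "variety loss" mentioned above can be sketched as a minimum-over-samples error: draw k futures from the generator and penalize only the closest one, so covering several plausible modes is never punished (the trajectories below are hypothetical):

```python
import numpy as np

def variety_loss(y_true, y_samples):
    """Minimum-over-k L2 loss in the spirit of the paper's variety loss.

    y_true:    (T, 2) ground-truth future trajectory
    y_samples: (k, T, 2) k sampled predicted trajectories
    Only the best of the k samples is penalized, encouraging the
    generator to spread samples across multiple plausible futures.
    """
    errors = [float(np.linalg.norm(sample - y_true)) for sample in y_samples]
    return min(errors)

# Two sampled futures: one heads left, one heads right. If the person
# actually went right, only the right-going sample incurs loss.
truth = np.array([[1.0, 0.0], [2.0, 0.0]])
samples = np.stack([np.array([[-1.0, 0.0], [-2.0, 0.0]]),  # left mode
                    np.array([[1.0, 0.0], [2.0, 0.0]])])   # right mode
loss = variety_loss(truth, samples)
```

A plain L2 loss averaged over samples would instead drag every sample toward the single observed future, collapsing the multimodality the paper is after.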
••
TL;DR: The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center has developed the ENCODE Portal database and website as the source for the data and metadata generated by the ENCODE Consortium.
Abstract: The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center has developed the ENCODE Portal database and website as the source for the data and metadata generated by the ENCODE Consortium. Two principles have motivated the design. First, experimental protocols, analytical procedures and the data themselves should be made publicly accessible through a coherent, web-based search and download interface. Second, the same interface should serve carefully curated metadata that record the provenance of the data and justify its interpretation in biological terms. Since its initial release in 2013 and in response to recommendations from consortium members and the wider community of scientists who use the Portal to access ENCODE data, the Portal has been regularly updated to better reflect these design principles. Here we report on these updates, including results from new experiments, uniformly-processed data from other projects, new visualization tools and more comprehensive metadata to describe experiments and analyses. Additionally, the Portal is now home to meta(data) from related projects including Genomics of Gene Regulation, Roadmap Epigenome Project, Model organism ENCODE (modENCODE) and modERN. The Portal now makes available over 13000 datasets and their accompanying metadata and can be accessed at: https://www.encodeproject.org/.
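The Portal's search interface is also machine-readable: the same search pages can be requested as JSON by adding a format parameter. A small sketch of building such a query URL (the specific filter values are illustrative, not an exhaustive API description):

```python
from urllib.parse import urlencode

BASE = "https://www.encodeproject.org/search/"

def encode_search_url(**filters):
    # Build a Portal search URL; format=json requests machine-readable
    # metadata instead of the HTML results page.
    params = dict(filters, format="json")
    return BASE + "?" + urlencode(params)

# Example: search for ChIP-seq experiments (filter values illustrative).
url = encode_search_url(type="Experiment", assay_title="TF ChIP-seq")
```

Fetching that URL with any HTTP client returns the same curated metadata the web interface shows, which is what makes programmatic reuse of the 13,000+ datasets practical.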
••
Harvard University1, New York University2, World Bank3, Mexican Social Security Institute4, Wellcome Trust5, Inter-American Development Bank6, University of Ibadan7, Northwestern University8, Bill & Melinda Gates Foundation9, Malawi University of Science and Technology10, University of London11, Duke University12, University of Bergen13, Public Health Foundation of India14, Centers for Disease Control and Prevention15, Stanford University16, Kathmandu17
TL;DR: High-quality health systems in the Sustainable Development Goals era: time for a revolution.
••
TL;DR: This work shows that the performance of the commonly studied materials is limited by unfavorable scaling relationships (for binding energies of reaction intermediates), and presents a number of alternative strategies that may lead to the design and discovery of more promising materials for ORR.
Abstract: Despite the dedicated search for novel catalysts for fuel cell applications, the intrinsic oxygen reduction reaction (ORR) activity of materials has not improved significantly over the past decade. Here, we review the role of theory in understanding the ORR mechanism and highlight the descriptor-based approaches that have been used to identify catalysts with increased activity. Specifically, by showing that the performance of the commonly studied materials (e.g., metals, alloys, carbons, etc.) is limited by unfavorable scaling relationships (for binding energies of reaction intermediates), we present a number of alternative strategies that may lead to the design and discovery of more promising materials for ORR.
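The scaling-relation limit can be made concrete with a small sketch of the descriptor approach for the 4-electron associative ORR pathway. It uses the often-cited scalings dG_OOH ≈ dG_OH + 3.2 eV and dG_O ≈ 2·dG_OH as default assumptions (the 3.2 eV offset is what caps the activity of conventional materials); the input binding energy is illustrative:

```python
def orr_limiting_potential(dG_OH, dG_O=None, dG_OOH=None):
    """Limiting potential (V) for the 4e- associative ORR pathway.

    Defaults assume the commonly reported scaling relations
    dG_OOH ~ dG_OH + 3.2 eV and dG_O ~ 2 * dG_OH (all energies in eV).
    """
    if dG_OOH is None:
        dG_OOH = dG_OH + 3.2
    if dG_O is None:
        dG_O = 2.0 * dG_OH
    # Reaction free energies of the four proton-electron transfer
    # steps at U = 0; 4.92 eV = 4 * 1.23 eV (O2 + 2H2 -> 2H2O).
    steps = [dG_OOH - 4.92,   # O2 + (H+ + e-) -> OOH*
             dG_O - dG_OOH,   # OOH* + (H+ + e-) -> O* + H2O
             dG_OH - dG_O,    # O* + (H+ + e-) -> OH*
             -dG_OH]          # OH* + (H+ + e-) -> H2O
    # Each step shifts by +eU; the most uphill step sets the limit.
    return min(-s for s in steps)

# Theoretical overpotential for a hypothetical dG_OH = 0.8 eV site:
eta = 1.23 - orr_limiting_potential(0.8)
```

Sweeping dG_OH traces the familiar activity volcano: under these scalings the limiting potential peaks well below the 1.23 V equilibrium potential regardless of binding strength, which is exactly why the review argues for strategies that break the OOH*/OH* scaling.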
••
11 Jun 2018
TL;DR: SQuADRUn is a new dataset that combines the existing Stanford Question Answering Dataset with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.
Abstract: Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets either focus exclusively on answerable questions, or use automatically generated unanswerable questions that are easy to identify. To address these weaknesses, we present SQuADRUn, a new dataset that combines the existing Stanford Question Answering Dataset (SQuAD) with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuADRUn, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. SQuADRUn is a challenging natural language understanding task for existing models: a strong neural system that gets 86% F1 on SQuAD achieves only 66% F1 on SQuADRUn. We release SQuADRUn to the community as the successor to SQuAD.
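The abstention requirement shows up directly in scoring. A sketch of SQuAD-style token F1 extended with the no-answer convention (gold answer empty means the question is unanswerable, and only an empty prediction scores; this is a simplified single-reference version, omitting the official script's normalization and multi-reference max):

```python
def token_f1(pred, gold):
    """Token-level F1 with a no-answer convention.

    When the gold answer is the empty string (unanswerable question),
    a system scores 1.0 only by abstaining (predicting empty) too.
    """
    if not gold or not pred:
        return float(pred == gold)  # both empty -> 1.0, else 0.0
    p_tokens, g_tokens = pred.split(), gold.split()
    # Count overlapping tokens with multiplicity.
    g_counts = {}
    for t in g_tokens:
        g_counts[t] = g_counts.get(t, 0) + 1
    common = 0
    for t in p_tokens:
        if g_counts.get(t, 0) > 0:
            common += 1
            g_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(p_tokens)
    recall = common / len(g_tokens)
    return 2 * precision * recall / (precision + recall)
```

Under this metric, a system that always extracts some span from the paragraph is penalized on every unanswerable question, which is what drives the 86% to 66% F1 drop reported for the strong SQuAD baseline.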