
Showing papers by "University of Amsterdam" published in 2018


Journal ArticleDOI
Clotilde Théry1, Kenneth W. Witwer2, Elena Aikawa3, María José Alcaraz4 +414 more · Institutions (209)
TL;DR: The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities, and a checklist is provided with summaries of key points.
Abstract: The last decade has seen a sharp increase in the number of scientific publications describing physiological and pathological functions of extracellular vesicles (EVs), a collective term covering various subtypes of cell-released, membranous structures, called exosomes, microvesicles, microparticles, ectosomes, oncosomes, apoptotic bodies, and many other names. However, specific issues arise when working with these entities, whose size and amount often make them difficult to obtain as relatively pure preparations, and to characterize properly. The International Society for Extracellular Vesicles (ISEV) proposed Minimal Information for Studies of Extracellular Vesicles (“MISEV”) guidelines for the field in 2014. We now update these “MISEV2014” guidelines based on evolution of the collective knowledge in the last four years. An important point to consider is that ascribing a specific function to EVs in general, or to subtypes of EVs, requires reporting of specific information beyond mere description of function in a crude, potentially contaminated, and heterogeneous preparation. For example, claims that exosomes are endowed with exquisite and specific activities remain difficult to support experimentally, given our still limited knowledge of their specific molecular machineries of biogenesis and release, as compared with other biophysically similar EVs. The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities. Finally, a checklist is provided with summaries of key points.

5,988 citations


Journal ArticleDOI
Gregory A. Roth1, Gregory A. Roth2, Degu Abate3, Kalkidan Hassen Abate4 +1025 more · Institutions (333)
TL;DR: Non-communicable diseases comprised the greatest fraction of deaths, contributing to 73·4% (95% uncertainty interval [UI] 72·5–74·1) of total deaths in 2017, while communicable, maternal, neonatal, and nutritional causes accounted for 18·6% (17·9–19·6), and injuries 8·0% (7·7–8·2).

5,211 citations


Journal ArticleDOI
Lorenzo Galluzzi1, Lorenzo Galluzzi2, Ilio Vitale3, Stuart A. Aaronson4 +183 more · Institutions (111)
TL;DR: The Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives.
Abstract: Over the past decade, the Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives. Since the field continues to expand and novel mechanisms that orchestrate multiple cell death pathways are unveiled, we propose an updated classification of cell death subroutines focusing on mechanistic and essential (as opposed to correlative and dispensable) aspects of the process. As we provide molecularly oriented definitions of terms including intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic cell death, NETotic cell death, lysosome-dependent cell death, autophagy-dependent cell death, immunogenic cell death, cellular senescence, and mitotic catastrophe, we discuss the utility of neologisms that refer to highly specialized instances of these processes. The mission of the NCCD is to provide a widely accepted nomenclature on cell death in support of the continued development of the field.

3,301 citations


Book ChapterDOI
03 Jun 2018
TL;DR: It is shown that factorization models for link prediction such as DistMult can be significantly improved through the use of an R-GCN encoder model to accumulate evidence over multiple inference steps in the graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
Abstract: Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: Link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to handle the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved through the use of an R-GCN encoder model to accumulate evidence over multiple inference steps in the graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
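
For intuition, here is a minimal, self-contained Python/NumPy sketch of the encoder-decoder idea summarized above: one R-GCN-style layer aggregates neighbor messages with relation-specific weight matrices, and a DistMult decoder scores candidate triples. The toy graph, dimensions, and random weights are illustrative assumptions, not the authors' implementation or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, num_rels, dim = 4, 2, 8

# Toy knowledge graph: (subject, relation, object) triples -- illustrative only.
triples = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]

# Relation-specific weight matrices W_r plus a self-loop weight W_0 (R-GCN layer).
W_rel = rng.normal(scale=0.1, size=(num_rels, dim, dim))
W_self = rng.normal(scale=0.1, size=(dim, dim))
H = rng.normal(size=(num_nodes, dim))          # initial node features

def rgcn_layer(H):
    """One R-GCN propagation step: sum relation-specific messages from neighbors."""
    out = H @ W_self                           # self-loop term
    counts = np.zeros(num_nodes) + 1e-8        # per-node normalization
    msgs = np.zeros_like(H)
    for s, r, o in triples:
        msgs[o] += H[s] @ W_rel[r]             # message along edge s --r--> o
        counts[o] += 1
    return np.maximum(out + msgs / counts[:, None], 0.0)   # ReLU

Z = rgcn_layer(H)                              # encoder output (entity embeddings)

# DistMult decoder: score(s, r, o) = sum_d  z_s[d] * diag_r[d] * z_o[d]
R_diag = rng.normal(size=(num_rels, dim))
def score(s, r, o):
    return float(np.sum(Z[s] * R_diag[r] * Z[o]))

print(score(0, 0, 1), score(0, 0, 3))          # higher score = more plausible triple
```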

3,168 citations


Posted Content
TL;DR: This paper builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation, and draws the connection between target networks and overestimation bias.
Abstract: In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.
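
A minimal sketch of the core mechanism described above: the bootstrapped target uses the minimum over a pair of critics ("clipped double Q-learning"), with clipped noise added to the target action. The critics, policy, and transition below are toy stand-ins, not the authors' networks or hyperparameters.

```python
import numpy as np

gamma = 0.99

# Two toy critics Q1, Q2 and a target policy; in the actual method these are
# neural networks, here they are stand-in functions to show the target computation.
def q1(s, a): return -np.sum((a - 0.3 * s) ** 2)
def q2(s, a): return -np.sum((a - 0.5 * s) ** 2)
def target_policy(s): return np.tanh(0.1 * s)

def clipped_double_q_target(r, s_next, done, noise_std=0.2, noise_clip=0.5):
    """Target y = r + gamma * min(Q1, Q2)(s', a' + clipped noise)."""
    a_next = target_policy(s_next)
    noise = np.clip(np.random.normal(0.0, noise_std, size=a_next.shape),
                    -noise_clip, noise_clip)          # target-policy smoothing
    a_next = np.clip(a_next + noise, -1.0, 1.0)
    q_min = min(q1(s_next, a_next), q2(s_next, a_next))  # limits overestimation
    return r + gamma * (1.0 - done) * q_min

# Example transition; both critics are then regressed toward this single shared target.
s_next = np.array([0.2, -0.1])
y = clipped_double_q_target(r=1.0, s_next=s_next, done=0.0)
print(y)
```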

1,968 citations


Journal ArticleDOI
Elena Aprile1, Jelle Aalbers2, F. Agostini3, M. Alfonsi4, L. Althueser5, F. D. Amaro6, M. Anthony1, F. Arneodo7, Laura Baudis8, Boris Bauermeister9, M. L. Benabderrahmane7, T. Berger10, P. A. Breur2, April S. Brown2, Ethan Brown10, S. Bruenner11, Giacomo Bruno7, Ran Budnik12, C. Capelli8, João Cardoso6, D. Cichon11, D. Coderre13, Auke-Pieter Colijn2, Jan Conrad9, Jean-Pierre Cussonneau14, M. P. Decowski2, P. de Perio1, P. Di Gangi3, A. Di Giovanni7, Sara Diglio14, A. Elykov13, G. Eurin11, J. Fei15, A. D. Ferella9, A. Fieguth5, W. Fulgione, A. Gallo Rosso, Michelle Galloway8, F. Gao1, M. Garbini3, C. Geis4, L. Grandi16, Z. Greene1, H. Qiu12, C. Hasterok11, E. Hogenbirk2, J. Howlett1, R. Itay12, F. Joerg11, B. Kaminsky13, Shingo Kazama8, A. Kish8, G. Koltman12, H. Landsman12, R. F. Lang17, L. Levinson12, Qing Lin1, Sebastian Lindemann13, Manfred Lindner11, F. Lombardi15, J. A. M. Lopes6, J. Mahlstedt9, A. Manfredini12, T. Marrodán Undagoitia11, Julien Masbou14, D. Masson17, M. Messina7, K. Micheneau14, Kate C. Miller16, A. Molinario, K. Morå9, M. Murra5, J. Naganoma18, Kaixuan Ni15, Uwe Oberlack4, Bart Pelssers9, F. Piastra8, J. Pienaar16, V. Pizzella11, Guillaume Plante1, R. Podviianiuk, N. Priel12, D. Ramírez García13, L. Rauch11, S. Reichard8, C. Reuter17, B. Riedel16, A. Rizzo1, A. Rocchetti13, N. Rupp11, J.M.F. dos Santos6, Gabriella Sartorelli3, M. Scheibelhut4, S. Schindler4, J. Schreiner11, D. Schulte5, Marc Schumann13, L. Scotto Lavina19, M. Selvi3, P. Shagin18, E. Shockley16, Manuel Gameiro da Silva6, H. Simgen11, Dominique Thers14, F. Toschi13, F. Toschi3, Gian Carlo Trinchero, C. Tunnell16, N. Upole16, M. Vargas5, O. Wack11, Hongwei Wang20, Zirui Wang, Yuehuan Wei15, Ch. Weinheimer5, C. Wittweg5, J. Wulf8, J. Ye15, Yanxi Zhang1, T. Zhu1 
TL;DR: In this article, a search for weakly interacting massive particles (WIMPs) using 278.8 days of data collected with the XENON1T experiment at LNGS is reported.
Abstract: We report on a search for weakly interacting massive particles (WIMPs) using 278.8 days of data collected with the XENON1T experiment at LNGS. XENON1T utilizes a liquid xenon time projection chamber with a fiducial mass of (1.30±0.01) ton, resulting in a 1.0 ton yr exposure. The energy region of interest, [1.4, 10.6] keVee ([4.9, 40.9] keVnr), exhibits an ultralow electron recoil background rate of [82 +5/−3 (syst) ± 3 (stat)] events/(ton yr keVee). No significant excess over background is found, and a profile likelihood analysis parametrized in spatial and energy dimensions excludes new parameter space for the WIMP-nucleon spin-independent elastic scatter cross section for WIMP masses above 6 GeV/c², with a minimum of 4.1×10⁻⁴⁷ cm² at 30 GeV/c² and a 90% confidence level.

1,808 citations


Journal ArticleDOI
David Capper1, David Capper2, David Capper3, David T.W. Jones2 +168 more · Institutions (54)
22 Mar 2018-Nature
TL;DR: This work presents a comprehensive approach for the DNA methylation-based classification of central nervous system tumours across all entities and age groups, and shows that the availability of this method may have a substantial impact on diagnostic precision compared to standard methods.
Abstract: Accurate pathological diagnosis is crucial for optimal management of patients with cancer. For the approximately 100 known tumour types of the central nervous system, standardization of the diagnostic process has been shown to be particularly challenging, with substantial inter-observer variability in the histopathological diagnosis of many tumour types. Here we present a comprehensive approach for the DNA methylation-based classification of central nervous system tumours across all entities and age groups, and demonstrate its application in a routine diagnostic setting. We show that the availability of this method may have a substantial impact on diagnostic precision compared to standard methods, resulting in a change of diagnosis in up to 12% of prospective cases. For broader accessibility, we have designed a free online classifier tool, the use of which does not require any additional onsite data processing. Our results provide a blueprint for the generation of machine-learning-based tumour classifiers across other cancer entities, with the potential to fundamentally transform tumour pathology.

1,620 citations


Journal ArticleDOI
23 Jan 2018-JAMA
TL;DR: A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline.
Abstract: Importance Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. Objective To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Design Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. Findings The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. Conclusions and Relevance The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.

1,616 citations


Journal ArticleDOI
Daniel J. Benjamin1, James O. Berger2, Magnus Johannesson1, Magnus Johannesson3, Brian A. Nosek4, Brian A. Nosek5, Eric-Jan Wagenmakers6, Richard A. Berk7, Kenneth A. Bollen8, Björn Brembs9, Lawrence D. Brown7, Colin F. Camerer10, David Cesarini11, David Cesarini12, Christopher D. Chambers13, Merlise A. Clyde2, Thomas D. Cook14, Thomas D. Cook15, Paul De Boeck16, Zoltan Dienes17, Anna Dreber3, Kenny Easwaran18, Charles Efferson19, Ernst Fehr20, Fiona Fidler21, Andy P. Field17, Malcolm R. Forster22, Edward I. George7, Richard Gonzalez23, Steven N. Goodman24, Edwin J. Green25, Donald P. Green26, Anthony G. Greenwald27, Jarrod D. Hadfield28, Larry V. Hedges15, Leonhard Held20, Teck-Hua Ho29, Herbert Hoijtink30, Daniel J. Hruschka31, Kosuke Imai32, Guido W. Imbens24, John P. A. Ioannidis24, Minjeong Jeon33, James Holland Jones34, Michael Kirchler35, David Laibson36, John A. List37, Roderick J. A. Little23, Arthur Lupia23, Edouard Machery38, Scott E. Maxwell39, Michael A. McCarthy21, Don A. Moore40, Stephen L. Morgan41, Marcus R. Munafò42, Shinichi Nakagawa43, Brendan Nyhan44, Timothy H. Parker45, Luis R. Pericchi46, Marco Perugini47, Jeffrey N. Rouder48, Judith Rousseau49, Victoria Savalei50, Felix D. Schönbrodt51, Thomas Sellke52, Betsy Sinclair53, Dustin Tingley36, Trisha Van Zandt16, Simine Vazire54, Duncan J. Watts55, Christopher Winship36, Robert L. Wolpert2, Yu Xie32, Cristobal Young24, Jonathan Zinman44, Valen E. Johnson18, Valen E. Johnson1 
University of Southern California1, Duke University2, Stockholm School of Economics3, University of Virginia4, Center for Open Science5, University of Amsterdam6, University of Pennsylvania7, University of North Carolina at Chapel Hill8, University of Regensburg9, California Institute of Technology10, Research Institute of Industrial Economics11, New York University12, Cardiff University13, Mathematica Policy Research14, Northwestern University15, Ohio State University16, University of Sussex17, Texas A&M University18, Royal Holloway, University of London19, University of Zurich20, University of Melbourne21, University of Wisconsin-Madison22, University of Michigan23, Stanford University24, Rutgers University25, Columbia University26, University of Washington27, University of Edinburgh28, National University of Singapore29, Utrecht University30, Arizona State University31, Princeton University32, University of California, Los Angeles33, Imperial College London34, University of Innsbruck35, Harvard University36, University of Chicago37, University of Pittsburgh38, University of Notre Dame39, University of California, Berkeley40, Johns Hopkins University41, University of Bristol42, University of New South Wales43, Dartmouth College44, Whitman College45, University of Puerto Rico46, University of Milan47, University of California, Irvine48, Paris Dauphine University49, University of British Columbia50, Ludwig Maximilian University of Munich51, Purdue University52, Washington University in St. Louis53, University of California, Davis54, Microsoft55
TL;DR: The default P-value threshold for statistical significance is proposed to be changed from 0.05 to 0.005 for claims of new discoveries in order to reduce uncertainty in the number of discoveries.
Abstract: We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.

1,586 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce the current state-of-the-art of network estimation and propose two novel statistical methods: the correlation stability coefficient and the bootstrapped difference test for edge-weights and centrality indices.
Abstract: The usage of psychological networks that conceptualize behavior as a complex interplay of psychological and other components has gained increasing popularity in various research fields. While prior publications have tackled the topics of estimating and interpreting such networks, little work has been conducted to check how accurate (i.e., prone to sampling variation) networks are estimated, and how stable (i.e., interpretation remains similar with less observations) inferences from the network structure (such as centrality indices) are. In this tutorial paper, we aim to introduce the reader to this field and tackle the problem of accuracy under sampling variation. We first introduce the current state-of-the-art of network estimation. Second, we provide a rationale why researchers should investigate the accuracy of psychological networks. Third, we describe how bootstrap routines can be used to (A) assess the accuracy of estimated network connections, (B) investigate the stability of centrality indices, and (C) test whether network connections and centrality estimates for different variables differ from each other. We introduce two novel statistical methods: for (B) the correlation stability coefficient, and for (C) the bootstrapped difference test for edge-weights and centrality indices. We conducted and present simulation studies to assess the performance of both methods. Finally, we developed the free R-package bootnet that allows for estimating psychological networks in a generalized framework in addition to the proposed bootstrap methods. We showcase bootnet in a tutorial, accompanied by R syntax, in which we analyze a dataset of 359 women with posttraumatic stress disorder available online.
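
bootnet itself is an R package; purely to illustrate the case-resampling bootstrap idea described above, the following Python sketch resamples observations and recomputes edge weights to gauge their sampling variability. The data and the simple correlation-based network estimator are assumptions for illustration, not bootnet's regularized estimation routines.

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs, n_nodes, n_boot = 300, 5, 1000

# Toy multivariate data standing in for item-level psychological measurements.
cov = np.eye(n_nodes) + 0.3                      # positive-definite toy covariance
data = rng.multivariate_normal(np.zeros(n_nodes), cov, size=n_obs)

def edge_weights(X):
    """Toy network estimator: zero-order correlations as edge weights."""
    return np.corrcoef(X, rowvar=False)

boot_edges = np.empty((n_boot, n_nodes, n_nodes))
for b in range(n_boot):
    idx = rng.integers(0, n_obs, size=n_obs)     # resample cases with replacement
    boot_edges[b] = edge_weights(data[idx])

# 95% bootstrap interval for one edge (nodes 0 and 1): a wide interval signals
# that the edge estimate is unstable under sampling variation.
lo, hi = np.percentile(boot_edges[:, 0, 1], [2.5, 97.5])
est = edge_weights(data)[0, 1]
print(f"edge (0,1): estimate={est:.3f}, 95% bootstrap CI=({lo:.3f}, {hi:.3f})")
```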

1,584 citations


Journal ArticleDOI
TL;DR: Treatment with cabozantinib resulted in longer overall survival and progression-free survival than placebo among patients with previously treated advanced hepatocellular carcinoma, and the rate of high-grade adverse events in the cabozantinib group was approximately twice that observed in the placebo group.
Abstract: Background Cabozantinib inhibits tyrosine kinases, including vascular endothelial growth factor receptors 1, 2, and 3, MET, and AXL, which are implicated in the progression of hepatocellul...

Journal ArticleDOI
22 Jun 2018-Science
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.


Journal ArticleDOI
23 Aug 2018-Cell
TL;DR: The advances in ILC biology over the past decade are distilled to refine the nomenclature of ILCs and to highlight the importance of ILCs in tissue homeostasis, morphogenesis, metabolism, repair, and regeneration.

Journal ArticleDOI
Mary F. Feitosa1, Aldi T. Kraja1, Daniel I. Chasman2, Yun J. Sung1 +296 more · Institutions (86)
18 Jun 2018-PLOS ONE
TL;DR: To gain insight into the role of alcohol consumption in the genetic architecture of hypertension, a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions was conducted.
Abstract: Heavy alcohol consumption is an established risk factor for hypertension; the mechanism by which alcohol consumption impacts blood pressure (BP) regulation remains unknown. We hypothesized that a genome-wide association study accounting for gene-alcohol consumption interaction for BP might identify additional BP loci and contribute to the understanding of alcohol-related BP regulation. We conducted a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions. In Stage 1, genome-wide discovery meta-analyses in ≈131K individuals across several ancestry groups yielded 3,514 SNVs (245 loci) with suggestive evidence of association (P < 1.0 x 10-5). In Stage 2, these SNVs were tested for independent external replication in ≈440K individuals across multiple ancestries. We identified and replicated (at Bonferroni correction threshold) five novel BP loci (380 SNVs in 21 genes) and 49 previously reported BP loci (2,159 SNVs in 109 genes) in European ancestry, and in multi-ancestry meta-analyses (P < 5.0 x 10-8). For African ancestry samples, we detected 18 potentially novel BP loci (P < 5.0 x 10-8) in Stage 1 that warrant further replication. Additionally, correlated meta-analysis identified eight novel BP loci (11 genes). Several genes in these loci (e.g., PINX1, GATA4, BLK, FTO and GABBR2) have been previously reported to be associated with alcohol consumption. These findings provide insights into the role of alcohol consumption in the genetic architecture of hypertension.

Journal ArticleDOI
TL;DR: This review considers how technological advances have enabled the detection of neurofilament proteins in blood, and discusses how these proteins consequently have the potential to be easily measured biomarkers of neuroaxonal injury in various neurological conditions.
Abstract: Neuroaxonal damage is the pathological substrate of permanent disability in various neurological disorders. Reliable quantification and longitudinal follow-up of such damage are important for assessing disease activity, monitoring treatment responses, facilitating treatment development and determining prognosis. The neurofilament proteins have promise in this context because their levels rise upon neuroaxonal damage not only in the cerebrospinal fluid (CSF) but also in blood, and they indicate neuroaxonal injury independent of causal pathways. First-generation (immunoblot) and second-generation (enzyme-linked immunosorbent assay) neurofilament assays had limited sensitivity. Third-generation (electrochemiluminescence) and particularly fourth-generation (single-molecule array) assays enable the reliable measurement of neurofilaments throughout the range of concentrations found in blood samples. This technological advancement has paved the way to investigate neurofilaments in a range of neurological disorders. Here, we review what is known about the structure and function of neurofilaments, discuss analytical aspects and knowledge of age-dependent normal ranges of neurofilaments and provide a comprehensive overview of studies on neurofilament light chain as a marker of axonal injury in different neurological disorders, including multiple sclerosis, neurodegenerative dementia, stroke, traumatic brain injury, amyotrophic lateral sclerosis and Parkinson disease. We also consider work needed to explore the value of this axonal damage marker in managing neurological diseases in daily practice.

Journal ArticleDOI
TL;DR: Part II of this series introduces JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems.
Abstract: Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
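
JASP performs full Bayesian analyses (building in part on the BayesFactor routines mentioned above). Purely as a rough, illustrative stand-in, the Python sketch below computes the well-known BIC approximation to a Bayes factor for a two-group comparison; the toy data and helper names are hypothetical, and this is not how JASP computes its default Bayes factors.

```python
import numpy as np

def bic_gaussian(y, X):
    """BIC of an ordinary least-squares Gaussian model y ~ X."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return -2.0 * loglik + (k + 1) * np.log(n)       # +1 for the error variance

rng = np.random.default_rng(1)
group = np.repeat([0.0, 1.0], 50)
y = 0.4 * group + rng.normal(size=100)               # toy two-group data

X0 = np.ones((100, 1))                               # H0: equal means
X1 = np.column_stack([np.ones(100), group])          # H1: group effect

# BIC approximation: BF10 ~ exp((BIC0 - BIC1) / 2); values > 1 favor H1.
bf10 = np.exp((bic_gaussian(y, X0) - bic_gaussian(y, X1)) / 2.0)
print(f"approximate BF10 = {bf10:.2f}")
```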

Proceedings Article
03 Jul 2018
TL;DR: In this paper, the authors show that the overestimation bias persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic.
Abstract: In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.

Journal ArticleDOI
TL;DR: Ten prominent advantages of the Bayesian approach are outlined, and several objections to Bayesian hypothesis testing are countered.
Abstract: Bayesian parameter estimation and Bayesian hypothesis testing present attractive alternatives to classical inference using confidence intervals and p values. In part I of this series we outline ten prominent advantages of the Bayesian approach. Many of these advantages translate to concrete opportunities for pragmatic researchers. For instance, Bayesian hypothesis testing allows researchers to quantify evidence and monitor its progression as data come in, without needing to know the intention with which the data were collected. We end by countering several objections to Bayesian hypothesis testing. Part II of this series discusses JASP, a free and open source software program that makes it easy to conduct Bayesian estimation and testing for a range of popular statistical scenarios (Wagenmakers et al. this issue).

Journal ArticleDOI
TL;DR: Among patients with stage III epithelial ovarian cancer, the addition of hyperthermic intraperitoneal chemotherapy (HIPEC) to interval cytoreductive surgery resulted in longer recurrence‐free survival and overall survival than surgery alone and did not result in higher rates of side effects.
Abstract: Background Treatment of newly diagnosed advanced-stage ovarian cancer typically involves cytoreductive surgery and systemic chemotherapy. We conducted a trial to investigate whether the addition of hyperthermic intraperitoneal chemotherapy (HIPEC) to interval cytoreductive surgery would improve outcomes among patients who were receiving neoadjuvant chemotherapy for stage III epithelial ovarian cancer. Methods In a multicenter, open-label, phase 3 trial, we randomly assigned 245 patients who had at least stable disease after three cycles of carboplatin (area under the curve of 5 to 6 mg per milliliter per minute) and paclitaxel (175 mg per square meter of body-surface area) to undergo interval cytoreductive surgery either with or without administration of HIPEC with cisplatin (100 mg per square meter). Randomization was performed at the time of surgery in cases in which surgery that would result in no visible disease (complete cytoreduction) or surgery after which one or more residual tumors measu...

Journal ArticleDOI
03 Feb 2018-Gut
TL;DR: Future GERD management strategies should focus on defining individual patient phenotypes based on the level of refluxate exposure, mechanism of reflux, efficacy of clearance, underlying anatomy of the oesophagogastric junction and psychometrics defining symptomatic presentations.
Abstract: Clinical history, questionnaire data and response to antisecretory therapy are insufficient to make a conclusive diagnosis of GERD in isolation, but are of value in determining need for further investigation. Conclusive evidence for reflux on oesophageal testing includes advanced grade erosive oesophagitis (LA grades C and D), long-segment Barrett’s mucosa or peptic strictures on endoscopy, or distal oesophageal acid exposure time (AET) >6% on ambulatory pH or pH-impedance monitoring. A normal endoscopy does not exclude GERD, but provides supportive evidence refuting GERD in conjunction with distal AET

Journal ArticleDOI
TL;DR: The Modules for Experiments in Stellar Astrophysics (MESA) software instrument as discussed by the authors has been updated with the capability to handle floating point exceptions and stellar model optimization, as well as four new software tools.
Abstract: We update the capabilities of the software instrument Modules for Experiments in Stellar Astrophysics (MESA) and enhance its ease of use and availability. Our new approach to locating convective boundaries is consistent with the physics of convection, and yields reliable values of the convective-core mass during both hydrogen- and helium-burning phases. Stars with M < 8 M⊙ become white dwarfs and cool to the point where the electrons are degenerate and the ions are strongly coupled, a realm now available to study with MESA due to improved treatments of element diffusion, latent heat release, and blending of equations of state. Studies of the final fates of massive stars are extended in MESA by our addition of an approximate Riemann solver that captures shocks and conserves energy to high accuracy during dynamic epochs. We also introduce a 1D capability for modeling the effects of Rayleigh–Taylor instabilities that, in combination with the coupling to a public version of the STELLA radiation transfer instrument, creates new avenues for exploring Type II supernova properties. These capabilities are exhibited with exploratory models of pair-instability supernovae, pulsational pair-instability supernovae, and the formation of stellar-mass black holes. The applicability of MESA is now widened by the capability to import multidimensional hydrodynamic models into MESA. We close by introducing software modules for handling floating point exceptions and stellar model optimization, as well as four new software tools (MESA-Web, MESA-Docker, pyMesa, and mesastar.org) to enhance MESA's education and research impact.

Journal ArticleDOI
01 Aug 2018-Nature
TL;DR: ‘pulse-seq’ is developed, combining scRNA-seq and lineage tracing, to show that tuft, neuroendocrine and ionocyte cells are continually and directly replenished by basal progenitor cells, establishing a new cellular narrative for airways disease.
Abstract: The airways of the lung are the primary sites of disease in asthma and cystic fibrosis. Here we study the cellular composition and hierarchy of the mouse tracheal epithelium by single-cell RNA-sequencing (scRNA-seq) and in vivo lineage tracing. We identify a rare cell type, the Foxi1+ pulmonary ionocyte; functional variations in club cells based on their location; a distinct cell type in high turnover squamous epithelial structures that we term ‘hillocks’; and disease-relevant subsets of tuft and goblet cells. We developed ‘pulse-seq’, combining scRNA-seq and lineage tracing, to show that tuft, neuroendocrine and ionocyte cells are continually and directly replenished by basal progenitor cells. Ionocytes are the major source of transcripts of the cystic fibrosis transmembrane conductance regulator in both mouse (Cftr) and human (CFTR). Knockout of Foxi1 in mouse ionocytes causes loss of Cftr expression and disrupts airway fluid and mucus physiology, phenotypes that are characteristic of cystic fibrosis. By associating cell-type-specific expression programs with key disease genes, we establish a new cellular narrative for airways disease.

Journal ArticleDOI
TL;DR: It is found that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.
Abstract: Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social sciences published in Nature and Science between 2010 and 2015. The replications follow analysis plans reviewed by the original authors and pre-registered prior to the replications. The replications are high powered, with sample sizes on average about five times higher than in the original studies. We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility. Furthermore, we find that peer beliefs of replicability are strongly related to replicability, suggesting that the research community could predict which results would replicate and that failures to replicate were not the result of chance alone.

Journal ArticleDOI
TL;DR: A thorough literature search found no new, strong evidence released from 2013 to 2017 that raises serious or important issues with using the TG13 diagnostic criteria for acute cholangitis, and the TG13 severity grading has been validated in numerous studies.
Abstract: Although the diagnostic and severity grading criteria on the 2013 Tokyo Guidelines (TG13) are used worldwide as the primary standard for management of acute cholangitis (AC), they need to be validated through implementation and assessment in actual clinical practice. Here, we conduct a systematic review of the literature to validate the TG13 diagnostic and severity grading criteria for AC and propose TG18 criteria. While there is little evidence evaluating the TG13 criteria, they were validated through a large-scale case series study in Japan and Taiwan. Analyzing big data from this study confirmed that the diagnostic rate of AC based on the TG13 diagnostic criteria was higher than that based on the TG07 criteria, and that 30-day mortality in patients with a higher severity based on the TG13 severity grading criteria was significantly higher. Furthermore, a comparison of patients treated with early or urgent biliary drainage versus patients not treated this way showed no difference in 30-day mortality among patients with Grade I or Grade III AC, but significantly lower 30-day mortality in patients with Grade II AC who were treated with early or urgent biliary drainage. This suggests that the TG13 severity grading criteria can be used to identify Grade II patients whose prognoses may be improved through biliary drainage. The TG13 severity grading criteria may therefore be useful as an indicator for biliary drainage as well as a predictive factor when assessing the patient's prognosis. The TG13 diagnostic and severity grading criteria for AC can provide results quickly, are minimally invasive for the patients, and are inexpensive. We recommend that the TG13 criteria be adopted in the TG18 guidelines and used as standard practice in the clinical setting. Free full articles and mobile app of TG18 are available at: http://www.jshbps.jp/modules/en/index.php?content_id=47. Related clinical questions and references are also included.

Proceedings Article
01 Jan 2018
TL;DR: In this article, a collection of non-negative stochastic gates, which collectively determine which weights to set to zero, is proposed to prune the network during training by encouraging weights to become exactly zero.
Abstract: We propose a practical method for L0 norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L0 regularization. However, since the L0 norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected L0 regularized objective is differentiable with respect to the distribution parameters. We further propose the hard concrete distribution for the gates, which is obtained by “stretching” a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.
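
A minimal NumPy sketch of the hard concrete gate described in the abstract: a binary concrete sample is "stretched" to an interval (gamma, zeta) and passed through a hard sigmoid, so exact zeros (and ones) occur with non-zero probability, while the expected L0 penalty has a closed form in the gate parameters. The hyperparameter values below are commonly used defaults and are assumptions, not necessarily the authors' exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Commonly used hard-concrete hyperparameters (assumed defaults).
beta, gamma, zeta = 2.0 / 3.0, -0.1, 1.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_gates(log_alpha):
    """Sample z ~ HardConcrete(log_alpha): stretch a binary concrete, then hard-sigmoid."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=log_alpha.shape)
    s = sigmoid((np.log(u) - np.log(1.0 - u) + log_alpha) / beta)   # binary concrete
    s_bar = s * (zeta - gamma) + gamma                              # stretch to (gamma, zeta)
    return np.clip(s_bar, 0.0, 1.0)                                 # hard sigmoid

def expected_l0(log_alpha):
    """Expected number of non-zero gates: sum_j P(z_j != 0)."""
    return np.sum(sigmoid(log_alpha - beta * np.log(-gamma / zeta)))

log_alpha = rng.normal(size=10)        # one gate parameter per weight (or weight group)
z = sample_gates(log_alpha)            # multiply weights by z during training
print(z)                               # exact zeros and ones appear with finite probability
print(expected_l0(log_alpha))          # differentiable surrogate for the L0 penalty
```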

Journal ArticleDOI
TL;DR: A mathematical expression is derived to compute PrediXcan results using summary data, and the effects of gene expression variation on human phenotypes in 44 GTEx tissues and >100 phenotypes are investigated.
Abstract: Scalable, integrative methods to understand mechanisms that link genetic variants with phenotypes are needed. Here we derive a mathematical expression to compute PrediXcan (a gene mapping approach) results using summary data (S-PrediXcan) and show its accuracy and general robustness to misspecified reference sets. We apply this framework to 44 GTEx tissues and 100+ phenotypes from GWAS and meta-analysis studies, creating a growing public catalog of associations that seeks to capture the effects of gene expression variation on human phenotypes. Replication in an independent cohort is shown. Most of the associations are tissue specific, suggesting context specificity of the trait etiology. Colocalized significant associations in unexpected tissues underscore the need for an agnostic scanning of multiple contexts to improve our ability to detect causal regulatory mechanisms. Monogenic disease genes are enriched among significant associations for related traits, suggesting that smaller alterations of these genes may cause a spectrum of milder phenotypes.
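
For context, the summary-statistics expression referred to in the abstract is usually quoted in roughly the following form (reproduced here with standard symbol definitions as an approximate paraphrase, not a verbatim equation from the paper):

```latex
% Approximate gene-level association z-score from GWAS summary statistics:
%   w_{lg} : prediction-model weight of SNP l for gene g
%   \hat\sigma_l, \hat\sigma_g : estimated standard deviations of SNP l and of the
%                                predicted expression of gene g
%   \hat\beta_l / se(\hat\beta_l) : GWAS z-score of SNP l
\[
  Z_g \;\approx\; \sum_{l \in \mathrm{Model}_g}
      w_{lg}\, \frac{\hat\sigma_l}{\hat\sigma_g}\,
      \frac{\hat\beta_l}{\operatorname{se}(\hat\beta_l)}
\]
```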

Journal ArticleDOI
TL;DR: This review provides an in-depth survey of relevant Z-schemes from past to present, with particular focus on mechanistic breakthroughs, and highlights current state of the art systems which are at the forefront of the field.
Abstract: Visible light-driven water splitting using cheap and robust photocatalysts is one of the most exciting ways to produce clean and renewable energy for future generations. Cutting edge research within the field focuses on so-called “Z-scheme” systems, which are inspired by the photosystem II–photosystem I (PSII/PSI) coupling from natural photosynthesis. A Z-scheme system comprises two photocatalysts and generates two sets of charge carriers, splitting water into its constituent parts, hydrogen and oxygen, at separate locations. This is not only more efficient than using a single photocatalyst, but practically it could also be safer. Researchers within the field are constantly aiming to bring systems toward industrial level efficiencies by maximizing light absorption of the materials, engineering more stable redox couples, and also searching for new hydrogen and oxygen evolution cocatalysts. This review provides an in-depth survey of relevant Z-schemes from past to present, with particular focus on mechanist...

Journal ArticleDOI
26 Oct 2018-Science
TL;DR: A metal-organic framework containing iron-peroxo sites was found to bind ethane more strongly than ethylene and could be used to separate the gases under ambient conditions, demonstrating the potential of Fe2(O2)(dobdc) for this important industrial separation at low energy cost.
Abstract: The separation of ethane from its corresponding ethylene is an important, challenging, and energy-intensive process in the chemical industry. Here we report a microporous metal-organic framework, iron(III) peroxide 2,5-dioxido-1,4-benzenedicarboxylate [Fe2(O2)(dobdc) (dobdc4−: 2,5-dioxido-1,4-benzenedicarboxylate)], with iron (Fe)–peroxo sites for the preferential binding of ethane over ethylene and thus highly selective separation of C2H6/C2H4. Neutron powder diffraction studies and theoretical calculations demonstrate the key role of Fe-peroxo sites for the recognition of ethane. The high performance of Fe2(O2)(dobdc) for the ethane/ethylene separation has been validated by gas sorption isotherms, ideal adsorbed solution theory calculations, and simulated and experimental breakthrough curves. Through a fixed-bed column packed with this porous material, polymer-grade ethylene (99.99% pure) can be straightforwardly produced from ethane/ethylene mixtures during the first adsorption cycle, demonstrating the potential of Fe2(O2)(dobdc) for this important industrial separation with a low energy cost under ambient conditions.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a broader historical perspective on the observational discoveries and theoretical arguments that led the scientific community to adopt dark matter as an essential part of the standard cosmological model.
Abstract: Although dark matter is a central element of modern cosmology, the history of how it became accepted as part of the dominant paradigm is often ignored or condensed into an anecdotal account focused around the work of a few pioneering scientists. The aim of this review is to provide a broader historical perspective on the observational discoveries and the theoretical arguments that led the scientific community to adopt dark matter as an essential part of the standard cosmological model.