
Showing papers by Pierre-and-Marie-Curie University, published in 2019


Journal ArticleDOI
TL;DR: The epidemiology, treatment and management of the various immune-related adverse events that can occur in patients receiving immune-checkpoint inhibitors are described.
Abstract: Immune-checkpoint inhibitors (ICIs), including anti-cytotoxic T lymphocyte antigen 4 (CTLA-4), anti-programmed cell death 1 (PD-1) and anti-programmed cell death 1 ligand 1 (PD-L1) antibodies, are arguably the most important development in cancer therapy over the past decade. The indications for these agents continue to expand across malignancies and disease settings, thus reshaping many of the previous standard-of-care approaches and bringing new hope to patients. One of the costs of these advances is the emergence of a new spectrum of immune-related adverse events (irAEs), which are often distinctly different from the classical chemotherapy-related toxicities. Owing to the growing use of ICIs in oncology, clinicians will increasingly be confronted with common but also rare irAEs; hence, awareness needs to be raised regarding the clinical presentation, diagnosis and management of these toxicities. In this Review, we provide an overview of the various types of irAEs that have emerged to date. We discuss the epidemiology of these events and their kinetics, risk factors, subtypes and pathophysiology, as well as new insights regarding screening and surveillance strategies. We also highlight the most important aspects of the management of irAEs.

1,032 citations


Journal ArticleDOI
Željko Ivezić1, Steven M. Kahn2, J. Anthony Tyson3, Bob Abel4 +332 more (55 institutions)
TL;DR: The Large Synoptic Survey Telescope (LSST) as discussed by the authors is a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachon in northern Chile.
Abstract: We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the solar system, exploring the transient optical sky, and mapping the Milky Way. LSST will be a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachon in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg2 field of view, a 3.2-gigapixel camera, and six filters (ugrizy) covering the wavelength range 320–1050 nm. The project is in the construction phase and will begin regular survey operations by 2022. About 90% of the observing time will be devoted to a deep-wide-fast survey mode that will uniformly observe an 18,000 deg2 region about 800 times (summed over all six bands) during the anticipated 10 yr of operations and will yield a co-added map to r ~ 27.5. These data will result in databases including about 32 trillion observations of 20 billion galaxies and a similar number of stars, and they will serve the majority of the primary science programs. The remaining 10% of the observing time will be allocated to special projects such as Very Deep and Very Fast time domain surveys, whose details are currently under discussion. We illustrate how the LSST science drivers led to these choices of system parameters, and we describe the expected data products and their characteristics.
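
For readers who want to sanity-check the survey arithmetic, the short sketch below rederives rough figures from the parameters quoted in the abstract (footprint, field of view, visit count). It is a back-of-the-envelope estimate, not project code; the per-night figure naively spreads visits over every calendar night, ignoring weather and scheduling.

```python
# Back-of-the-envelope check of the LSST survey parameters quoted above.

AREA_DEG2 = 18_000        # deep-wide-fast survey footprint
FOV_DEG2 = 9.6            # field of view per pointing
VISITS_PER_FIELD = 800    # summed over all six bands
YEARS = 10

fields = AREA_DEG2 / FOV_DEG2                  # ~1,875 distinct pointings
total_visits = fields * VISITS_PER_FIELD       # ~1.5 million visits
visits_per_night = total_visits / (YEARS * 365)

print(f"distinct fields:  {fields:,.0f}")
print(f"total visits:     {total_visits:,.0f}")
print(f"visits per night: {visits_per_night:,.0f}")   # order of 400
```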

921 citations



Journal ArticleDOI
TL;DR: It is confirmed that eukaryotes form at least two domains, that monophyly of the Excavata is not supported, and that the Haptista and Cryptista receive robust support; primer sets effective for each clade are suggested for DNA sequences from environmental samples.
Abstract: This revision of the classification of eukaryotes follows that of Adl et al., 2012 [J. Euk. Microbiol. 59(5)] and retains an emphasis on protists. Changes since have improved the resolution of many ...

750 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott2, T. D. Abbott, Fausto Acernese3 +1157 more (70 institutions)
TL;DR: In this paper, the authors improved initial estimates of the binary's properties, including component masses, spins, and tidal parameters, using the known source location, improved modeling, and recalibrated Virgo data.
Abstract: On August 17, 2017, the Advanced LIGO and Advanced Virgo gravitational-wave detectors observed a low-mass compact binary inspiral. The initial sky localization of the source of the gravitational-wave signal, GW170817, allowed electromagnetic observatories to identify NGC 4993 as the host galaxy. In this work, we improve initial estimates of the binary's properties, including component masses, spins, and tidal parameters, using the known source location, improved modeling, and recalibrated Virgo data. We extend the range of gravitational-wave frequencies considered down to 23 Hz, compared to 30 Hz in the initial analysis. We also compare results inferred using several signal models, which are more accurate and incorporate additional physical effects as compared to the initial analysis. We improve the localization of the gravitational-wave source to a 90% credible region of 16 deg2. We find tighter constraints on the masses, spins, and tidal parameters, and continue to find no evidence for nonzero component spins. The component masses are inferred to lie between 1.00 and 1.89 M⊙ when allowing for large component spins, and to lie between 1.16 and 1.60 M⊙ (with a total mass of 2.73^{+0.04}_{-0.01} M⊙) when the spins are restricted to be within the range observed in Galactic binary neutron stars. Using a precessing model and allowing for large component spins, we constrain the dimensionless spins of the components to be less than 0.50 for the primary and 0.61 for the secondary. Under minimal assumptions about the nature of the compact objects, our constraints for the tidal deformability parameter Λ are (0, 630) when we allow for large component spins, and 300^{+420}_{-230} (using a 90% highest posterior density interval) when restricting the magnitude of the component spins, ruling out several equation-of-state models at the 90% credible level. Finally, with LIGO and GEO600 data, we use a Bayesian analysis to place upper limits on the amplitude and spectral energy density of a possible postmerger signal.
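
A worked illustration of why the quoted mass ranges are so tight: gravitational-wave analyses measure the chirp mass of a binary very precisely. The sketch below evaluates the standard chirp-mass formula at the low-spin mass bounds quoted above; it is illustrative only, since the actual analysis samples full Bayesian posteriors rather than point values.

```python
# Illustrative only: the chirp mass, the combination of component masses
# that inspiral signals constrain best.

def chirp_mass(m1: float, m2: float) -> float:
    """M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5), same units as the inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Low-spin bounds quoted in the abstract: components between 1.16 and 1.60 M_sun.
print(chirp_mass(1.16, 1.60))   # ~1.18 M_sun
print(chirp_mass(1.36, 1.36))   # equal-mass case, also ~1.18 M_sun
```

Very different component-mass pairs give nearly the same chirp mass, which is why the component masses individually carry wider uncertainties than the signal itself.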

715 citations



Journal ArticleDOI
TL;DR: The Copernicus Atmosphere Monitoring Service (CAMS) reanalysis is the latest global reanalysis dataset of atmospheric composition produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), consisting of three-dimensional time-consistent atmospheric composition fields, including aerosols and chemical species, as discussed by the authors.
Abstract: The Copernicus Atmosphere Monitoring Service (CAMS) reanalysis is the latest global reanalysis dataset of atmospheric composition produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), consisting of three-dimensional time-consistent atmospheric composition fields, including aerosols and chemical species. The dataset currently covers the period 2003–2016 and will be extended in the future by adding 1 year each year. A reanalysis for greenhouse gases is being produced separately. The CAMS reanalysis builds on the experience gained during the production of the earlier Monitoring Atmospheric Composition and Climate (MACC) reanalysis and CAMS interim reanalysis. Satellite retrievals of total column CO; tropospheric column NO2; aerosol optical depth (AOD); and total column, partial column and profile ozone retrievals were assimilated for the CAMS reanalysis with ECMWF's Integrated Forecasting System. The new reanalysis has an increased horizontal resolution of about 80 km and provides more chemical species at a better temporal resolution (3-hourly analysis fields, 3-hourly forecast fields and hourly surface forecast fields) than the previously produced CAMS interim reanalysis. The CAMS reanalysis has smaller biases compared with most of the independent ozone, carbon monoxide, nitrogen dioxide and aerosol optical depth observations used for validation in this paper than the previous two reanalyses and is much improved and more consistent in time, especially compared to the MACC reanalysis. The CAMS reanalysis is a dataset that can be used to compute climatologies, study trends, evaluate models, benchmark other reanalyses or serve as boundary conditions for regional models for past periods.
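
As a hedged illustration of one stated use of the dataset (computing climatologies), the sketch below builds a monthly climatology and annual means from a synthetic 3-hourly series using xarray. The variable name aod550 and the synthetic values are assumptions; with real CAMS files one would open the netCDF retrieved from the Copernicus Atmosphere Data Store instead.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-in for one CAMS field: 3-hourly AOD at a single grid cell.
# With real data you would instead do: ds = xr.open_dataset("your_cams_file.nc")
time = pd.date_range("2003-01-01", "2016-12-31 21:00", freq="3h")
aod = xr.DataArray(
    0.15 + 0.05 * np.sin(2 * np.pi * time.dayofyear / 365.25),
    dims="time", coords={"time": time}, name="aod550",  # name is an assumption
)

clim = aod.groupby("time.month").mean("time")    # monthly climatology
yearly = aod.groupby("time.year").mean("time")   # annual means, for trend checks
print(clim.values.round(3))
print(yearly.values.round(3))
```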

450 citations


Journal ArticleDOI
Elena Aprile1, Jelle Aalbers2, F. Agostini3, M. Alfonsi4, L. Althueser5, F. D. Amaro6, V. C. Antochi2, E. Angelino7, F. Arneodo8, D. Barge2, Laura Baudis9, Boris Bauermeister2, L. Bellagamba3, M. L. Benabderrahmane8, T. Berger10, P. A. Breur11, April S. Brown9, Ethan Brown10, S. Bruenner12, Giacomo Bruno8, Ran Budnik13, C. Capelli9, João Cardoso6, D. Cichon12, D. Coderre14, Auke-Pieter Colijn11, Jan Conrad2, Jean-Pierre Cussonneau15, M. P. Decowski11, P. de Perio1, A. Depoian16, P. Di Gangi3, A. Di Giovanni8, Sara Diglio15, A. Elykov14, G. Eurin12, J. Fei17, A. D. Ferella2, A. Fieguth5, W. Fulgione7, P. Gaemers11, A. Gallo Rosso, Michelle Galloway9, F. Gao1, M. Garbini3, L. Grandi18, Z. Greene1, C. Hasterok12, C. Hils4, E. Hogenbirk11, J. Howlett1, M. Iacovacci, R. Itay13, F. Joerg12, Shingo Kazama19, A. Kish9, Masanori Kobayashi1, G. Koltman13, A. Kopec16, H. Landsman13, R. F. Lang16, L. Levinson13, Qing Lin1, Sebastian Lindemann14, Manfred Lindner12, F. Lombardi17, F. Lombardi6, J. A. M. Lopes6, E. López Fune20, C. Macolino21, J. Mahlstedt2, A. Manfredini13, A. Manfredini9, Fabrizio Marignetti, T. Marrodán Undagoitia12, Julien Masbou15, S. Mastroianni, M. Messina8, K. Micheneau15, Kate C. Miller18, A. Molinario, K. Morå2, Y. Mosbacher13, M. Murra5, J. Naganoma22, Kaixuan Ni17, Uwe Oberlack4, K. Odgers10, J. Palacio15, Bart Pelssers2, R. Peres9, J. Pienaar18, V. Pizzella12, Guillaume Plante1, R. Podviianiuk, J. Qin16, H. Qiu13, D. Ramírez García14, S. Reichard9, B. Riedel18, A. Rocchetti14, N. Rupp12, J.M.F. dos Santos6, Gabriella Sartorelli3, N. Šarčević14, M. Scheibelhut4, S. Schindler4, J. Schreiner12, D. Schulte5, Marc Schumann14, L. Scotto Lavina20, M. Selvi3, P. Shagin22, E. Shockley18, Manuel Gameiro da Silva6, H. Simgen12, C. Therreau15, Dominique Thers15, F. Toschi14, Gian Carlo Trinchero7, C. Tunnell22, N. Upole18, M. Vargas5, G. Volta9, O. Wack12, Hongwei Wang23, Yuehuan Wei17, Ch. Weinheimer5, D. Wenz4, C. Wittweg5, J. Wulf9, J. Ye17, Yanxi Zhang1, T. Zhu1, J. P. Zopounidis20 
TL;DR: Constraints on light dark matter (DM) models using ionization signals in the XENON1T experiment are reported; no DM or coherent elastic neutrino-nucleus scattering (CEvNS) detection may be claimed because the authors cannot model all of their backgrounds.
Abstract: We report constraints on light dark matter (DM) models using ionization signals in the XENON1T experiment. We mitigate backgrounds with strong event selections, rather than requiring a scintillation signal, leaving an effective exposure of (22±3) tonne day. Above ∼0.4 keVee, we observe an event rate commensurate with the lowest achieved in similar detectors, and we set exclusion limits on DM-electron scattering for DM masses down to 30 MeV/c2 and on the absorption of dark photons and axionlike particles for mχ within 0.186–1 keV/c2.

412 citations


Journal ArticleDOI
Elena Aprile1, Jelle Aalbers2, F. Agostini3, M. Alfonsi4, L. Althueser5, F. D. Amaro6, M. Anthony1, V. C. Antochi2, F. Arneodo7, Laura Baudis8, Boris Bauermeister2, M. L. Benabderrahmane7, T. Berger9, P. A. Breur10, April S. Brown8, Ethan Brown9, S. Bruenner11, Giacomo Bruno7, Ran Budnik12, C. Capelli8, João Cardoso6, D. Cichon11, D. Coderre13, Auke-Pieter Colijn10, Jan Conrad2, Jean-Pierre Cussonneau14, M. P. Decowski10, P. de Perio1, P. Di Gangi3, A. Di Giovanni7, Sara Diglio14, A. Elykov13, G. Eurin11, J. Fei15, A. D. Ferella2, A. Fieguth5, W. Fulgione, A. Gallo Rosso, Michelle Galloway8, F. Gao1, M. Garbini3, L. Grandi16, Z. Greene1, C. Hasterok11, E. Hogenbirk10, J. Howlett1, M. Iacovacci, R. Itay12, F. Joerg11, Shingo Kazama17, A. Kish8, G. Koltman12, A. Kopec18, H. Landsman12, R. F. Lang18, L. Levinson12, Qing Lin1, Sebastian Lindemann13, Manfred Lindner11, F. Lombardi15, J. A. M. Lopes6, E. López Fune19, C. Macolino20, J. Mahlstedt2, A. Manfredini8, Fabrizio Marignetti, T. Marrodán Undagoitia11, Julien Masbou14, D. Masson18, S. Mastroianni, M. Messina7, K. Micheneau14, Kate C. Miller16, A. Molinario, K. Morå2, Y. Mosbacher12, M. Murra5, J. Naganoma, Kaixuan Ni15, Uwe Oberlack4, K. Odgers9, Bart Pelssers2, F. Piastra8, J. Pienaar16, V. Pizzella11, Guillaume Plante1, R. Podviianiuk, N. Priel12, H. Qiu12, D. Ramírez García13, S. Reichard8, B. Riedel16, A. Rizzo1, A. Rocchetti13, N. Rupp11, J.M.F. dos Santos6, Gabriella Sartorelli3, N. Šarčević13, M. Scheibelhut4, S. Schindler4, J. Schreiner11, D. Schulte5, Marc Schumann13, L. Scotto Lavina19, M. Selvi3, P. Shagin21, E. Shockley16, Manuel Gameiro da Silva6, H. Simgen11, C. Therreau14, Dominique Thers14, F. Toschi13, Gian Carlo Trinchero, C. Tunnell16, N. Upole16, M. Vargas5, O. Wack11, Hongwei Wang22, Zirui Wang, Yuehuan Wei15, Ch. Weinheimer5, D. Wenz4, C. Wittweg5, J. Wulf8, Z. Xu15, J. Ye15, Yanxi Zhang1, T. Zhu1, J. P. Zopounidis19 
TL;DR: The analysis uses the full ton year exposure of XENON1T to constrain the spin-dependent proton-only and neutron-only cases and sets exclusion limits on the WIMP-nucleon interactions.
Abstract: We report the first experimental results on spin-dependent elastic weakly interacting massive particle (WIMP) nucleon scattering from the XENON1T dark matter search experiment. The analysis uses the full ton year exposure of XENON1T to constrain the spin-dependent proton-only and neutron-only cases. No significant signal excess is observed, and a profile likelihood ratio analysis is used to set exclusion limits on the WIMP-nucleon interactions. This includes the most stringent constraint to date on the WIMP-neutron cross section, with a minimum of 6.3×10^-42 cm2 at 30 GeV/c2 and 90% confidence level. The results are compared with those from collider searches and used to exclude new parameter space in an isoscalar theory with an axial-vector mediator.
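
The paper's profile-likelihood machinery is involved, but the underlying logic of turning a null result into an exclusion limit can be shown with a toy Poisson counting experiment. Everything below is illustrative: the signal yield per unit cross section is invented, and the method is a simple counting limit, not the paper's analysis.

```python
import math

def poisson_upper_limit(n_obs: int = 0, cl: float = 0.90) -> float:
    """Classical upper limit on a Poisson mean given n_obs observed events."""
    lo, hi = 0.0, 50.0
    for _ in range(100):                     # bisection on the Poisson CDF
        mu = 0.5 * (lo + hi)
        p = sum(math.exp(-mu) * mu**k / math.factorial(k)
                for k in range(n_obs + 1))
        if p < 1 - cl:
            hi = mu                          # mu too large: shrink from above
        else:
            lo = mu
    return 0.5 * (lo + hi)

mu90 = poisson_upper_limit(0)                # ~2.30 events at 90% CL

# Hypothetical yield: expected signal events per 10^-42 cm^2 of cross
# section for some WIMP mass and exposure. Invented for illustration.
events_per_unit_sigma = 0.37
print(f"mu90 = {mu90:.2f} events")
print(f"sigma limit ~ {mu90 / events_per_unit_sigma:.1f} x 10^-42 cm^2")
```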

241 citations


Journal ArticleDOI
TL;DR: Divergence times, used as an additional criterion in ranking, provide further evidence to resolve taxonomic problems in the Basidiomycota taxonomic system, and also a better understanding of basidiomycete phylogeny and evolution.
Abstract: The Basidiomycota constitutes a major phylum of the kingdom Fungi and is second in species numbers to the Ascomycota. The present work provides an overview of all validly published, currently used basidiomycete genera to date in a single document. An outline of all genera of Basidiomycota is provided, which includes 1928 currently used genera names and 1263 synonyms, distributed among 241 families, 68 orders, 18 classes and four subphyla. We provide brief notes for each accepted genus including information on classification, number of accepted species, type species, life mode, habitat, distribution, and sequence information. Furthermore, three phylogenetic analyses with combined LSU, SSU, 5.8S, rpb1, rpb2, and ef1 datasets are conducted for the subphyla Agaricomycotina, Pucciniomycotina and Ustilaginomycotina, respectively. Divergence time estimates are provided to the family level with 632 species from 62 orders, 168 families and 605 genera. Our study indicates that the divergence times of the subphyla in Basidiomycota are 406–430 Mya, classes are 211–383 Mya, and orders are 99–323 Mya, which are largely consistent with previous studies. In this study, all phylogenetically supported families were dated, with the families of Agaricomycotina diverging from 27–178 Mya, Pucciniomycotina from 85–222 Mya, and Ustilaginomycotina from 79–177 Mya. Divergence times, used as an additional criterion in ranking, provide further evidence to resolve taxonomic problems in the Basidiomycota taxonomic system, and a better understanding of their phylogeny and evolution.

233 citations


Journal ArticleDOI
Dean Roemmich1, Matthew H. Alford1, Hervé Claustre, Kenneth S. Johnson2, Brian A. King3, James N. Moum4, Peter R. Oke, W. Brechner Owens5, Sylvie Pouliquen6, Sarah G. Purkey7, Megan Scanderbeg1, Toshio Suga8, Susan Wijffels9, N. V. Zilberman1, Dorothee C. E. Bakker10, Molly O. Baringer11, Mathieu Belbeoch, Henry C. Bittig, Emmanuel Boss, Paulo H. R. Calil, Fiona Carse12, Thierry Carval6, Fei Chai13, Diarmuid Ó. Conchubhair14, Fabrizio D'Ortenzio, Giorgio Dall'Olmo4, Damien Desbruyères, Katja Fennel15, Ilker Fer16, Raffaele Ferrari17, Gael Forget17, Howard J. Freeland18, Tetsuichi Fujiki19, Marion Gehlen, Blair J. W. Greenan20, Robert Hallberg21, Toshiyuki Hibiya22, Shigeki Hosoda19, Steven R. Jayne5, Markus Jochum, Gregory C. Johnson, KiRyong Kang23, Nicolas Kolodziejczyk, Arne Körtzinger, Pierre-Yves Le Traon, Yueng-Djern Lenn24, Guillaume Maze, Kjell Arne Mork, Tamaryn Morris25, Takeyoshi Nagai26, Jonathan D. Nash4, Alberto C. Naveira Garabato3, Are Olsen16, Rama Rao E. Pattabhi27, Satya Prakash, Stephen C. Riser28, Catherine Schmechtig29, Claudia Schmid11, Emily L. Shroyer4, Andreas Sterl30, Philip Sutton31, Lynne D. Talley1, Toste Tanhua32, Virginie Thierry6, Sandy J. Thomalla, John M. Toole5, Ariel Troisi, Thomas W. Trull33, Jon Turton12, Pedro Vélez-Belchí, Waldemar Walczowski34, Haili Wang35, Rik Wanninkhof11, Amy F. Waterhouse1, Stephanie Waterman36, Andrew J. Watson, Cara Wilson21, Annie P. S. Wong28, Jianping Xu37, Ichiro Yasuda22 
TL;DR: The objective is to create a fully global, top-to-bottom, dynamically complete, and multidisciplinary Argo Program that will integrate seamlessly with satellite and with other in situ elements of the Global Ocean Observing System.
Abstract: The Argo Program has been implemented and sustained for almost two decades, as a global array of about 4000 profiling floats. Argo provides continuous observations of ocean temperature and salinity versus pressure, from the sea surface to 2000 dbar. The successful installation of the Argo array and its innovative data management system arose opportunistically from the combination of great scientific need and technological innovation. Through the data system, Argo provides fundamental physical observations with broad societally-valuable applications, built on the cost-efficient and robust technologies of autonomous profiling floats. Following recent advances in platform and sensor technologies, even greater opportunity exists now than 20 years ago to (i) improve Argo's global coverage and value beyond the original design, (ii) extend Argo to span the full ocean depth, (iii) add biogeochemical sensors for improved understanding of oceanic cycles of carbon, nutrients, and ecosystems, and (iv) consider experimental sensors that might be included in the future, for example to document the spatial and temporal patterns of ocean mixing. For Core Argo and each of these enhancements, the past, present, and future progression along a path from experimental deployments to regional pilot arrays to global implementation is described. The objective is to create a fully global, top-to-bottom, dynamically complete, and multidisciplinary Argo Program that will integrate seamlessly with satellite and with other in situ elements of the Global Ocean Observing System (Legler et al., 2015). The integrated system will deliver operational reanalysis and forecasting capability, and assessment of the state and variability of the climate system with respect to physical, biogeochemical, and ecosystems parameters. It will enable basic research of unprecedented breadth and magnitude, and a wealth of ocean-education and outreach opportunities.
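
Simple coverage arithmetic puts the quoted array size in context. The global ocean area and the nominal 10-day profiling cycle used below are standard Argo figures rather than numbers taken from this abstract.

```python
# Coverage arithmetic for the Argo array described above.

N_FLOATS = 4000
OCEAN_AREA_KM2 = 361e6     # standard global ocean area, not from the abstract
CYCLE_DAYS = 10            # nominal Argo cycle: one profile every ~10 days

area_per_float = OCEAN_AREA_KM2 / N_FLOATS       # ~90,000 km^2 per float
spacing_km = area_per_float ** 0.5               # ~300 km, roughly 3 degrees
profiles_per_year = N_FLOATS * 365 / CYCLE_DAYS  # ~146,000 profiles per year

print(f"area per float:    {area_per_float:,.0f} km^2")
print(f"nominal spacing:   {spacing_km:,.0f} km")
print(f"profiles per year: {profiles_per_year:,.0f}")
```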

Journal ArticleDOI
TL;DR: A new line of evidence is added that the biogeographic origin (evolutionary history) of a species is a determining factor of its potential to cause disruptive environmental impacts.
Abstract: Native plants and animals can rapidly become superabundant and dominate ecosystems, leading some people to claim that native species are no less likely than alien species to cause environmental damage such as biodiversity loss. We compared how frequently alien species and native species have been implicated as drivers of recent extinctions in a comprehensive global database, the 2017 IUCN Red List. Alien species were considered to be a contributing cause of 25% of plant extinctions and 33% of animal extinctions, whereas native species were implicated in less than 3% and 5% of animal and plant extinctions, respectively. When listed as a putative driver of recent extinctions, native species were more often associated with co-occurring drivers than were alien species. Our results add a new line of evidence that the biogeographic origin (evolutionary history) of a species is a determining factor of its potential to cause disruptive environmental impacts.

Journal ArticleDOI
Elena Aprile1, Jelle Aalbers2, F. Agostini3, M. Alfonsi4, L. Althueser5, F. D. Amaro6, V. C. Antochi2, E. Angelino7, F. Arneodo8, D. Barge2, Laura Baudis9, Boris Bauermeister2, L. Bellagamba3, M. L. Benabderrahmane8, T. Berger10, P. A. Breur11, April S. Brown9, Ethan Brown10, S. Bruenner12, Giacomo Bruno8, Ran Budnik13, C. Capelli9, João Cardoso6, D. Cichon12, D. Coderre14, Auke-Pieter Colijn11, Jan Conrad2, Jean-Pierre Cussonneau15, M. P. Decowski11, P. de Perio1, A. Depoian16, P. Di Gangi3, A. Di Giovanni8, Sara Diglio15, A. Elykov14, G. Eurin12, J. Fei17, A. D. Ferella2, A. Fieguth5, W. Fulgione7, P. Gaemers11, A. Gallo Rosso, Michelle Galloway9, F. Gao1, M. Garbini3, L. Grandi18, Z. Greene1, C. Hasterok12, C. Hils4, E. Hogenbirk11, J. Howlett1, M. Iacovacci, R. Itay13, F. Joerg12, Shingo Kazama19, A. Kish9, M. Kobayashi1, G. Koltman13, A. Kopec16, H. Landsman13, R. F. Lang16, L. Levinson13, Qing Lin1, Sebastian Lindemann14, Manfred Lindner12, F. Lombardi6, J. A. M. Lopes6, E. López Fune20, C. Macolino21, Jörn Mahlstedt2, M. Manenti8, A. Manfredini13, A. Manfredini9, Fabrizio Marignetti, T. Marrodán Undagoitia12, Julien Masbou15, S. Mastroianni, M. Messina8, K. Micheneau15, Kate C. Miller18, A. Molinario, K. Morå2, Y. Mosbacher13, M. Murra5, J. Naganoma, Kaixuan Ni17, Uwe Oberlack4, K. Odgers10, J. Palacio15, Bart Pelssers2, R. Peres9, J. Pienaar18, V. Pizzella12, Guillaume Plante1, R. Podviianiuk, J. Qin16, H. Qiu13, D. Ramírez García14, S. Reichard9, B. Riedel18, A. Rocchetti14, N. Rupp12, J.M.F. dos Santos6, Gabriella Sartorelli3, N. Šarčević14, M. Scheibelhut4, S. Schindler4, J. Schreiner12, D. Schulte5, Marc Schumann14, L. Scotto Lavina20, M. Selvi3, P. Shagin22, E. Shockley18, Manuel Gameiro da Silva6, H. Simgen12, C. Therreau15, Dominique Thers15, F. Toschi14, Gian Carlo Trinchero7, C. Tunnell22, N. Upole18, M. Vargas5, G. Volta9, O. Wack12, Hongwei Wang23, Yuehuan Wei17, Ch. Weinheimer5, D. Wenz4, C. Wittweg5, J. Wulf9, J. Ye17, Yanxi Zhang1, T. Zhu1, J. P. Zopounidis20 
TL;DR: A probe of low-mass dark matter with masses down to about 85 MeV/c2, looking for electronic recoils induced by the Migdal effect and bremsstrahlung using data from the XENON1T experiment, is reported; exploiting an approach that uses ionization signals only allows for a lower detection threshold.
Abstract: Direct dark matter detection experiments based on a liquid xenon target are leading the search for dark matter particles with masses above ∼5 GeV/c2, but have limited sensitivity to lighter masses because of the small momentum transfer in dark matter-nucleus elastic scattering. However, there is an irreducible contribution from inelastic processes accompanying the elastic scattering, which leads to the excitation and ionization of the recoiling atom (the Migdal effect) or the emission of a bremsstrahlung photon. In this Letter, we report on a probe of low-mass dark matter with masses down to about 85 MeV/c2 by looking for electronic recoils induced by the Migdal effect and bremsstrahlung using data from the XENON1T experiment. Besides the approach of detecting both scintillation and ionization signals, we exploit an approach that uses ionization signals only, which allows for a lower detection threshold. This analysis significantly enhances the sensitivity of XENON1T to light dark matter previously beyond its reach.

Journal ArticleDOI
Marcelle Soares-Santos1, Antonella Palmese2, W. G. Hartley3, J. Annis2 +1285 more (156 institutions)
TL;DR: In this article, a multi-messenger measurement of the Hubble constant H 0 using the binary-black-hole merger GW170814 as a standard siren, combined with a photometric redshift catalog from the Dark Energy Survey (DES), is presented.
Abstract: We present a multi-messenger measurement of the Hubble constant H0 using the binary black-hole merger GW170814 as a standard siren, combined with a photometric redshift catalog from the Dark Energy Survey (DES). The luminosity distance is obtained from the gravitational wave signal detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO)/Virgo Collaboration (LVC) on 2017 August 14, and the redshift information is provided by the DES Year 3 data. Black hole mergers such as GW170814 are expected to lack bright electromagnetic emission to uniquely identify their host galaxies and build an object-by-object Hubble diagram. However, they are suitable for a statistical measurement, provided that a galaxy catalog of adequate depth and redshift completion is available. Here we present the first Hubble parameter measurement using a black hole merger. Our analysis results in H0 = 75^{+40}_{-32} km s−1 Mpc−1, which is consistent with both SN Ia and cosmic microwave background measurements of the Hubble constant. The quoted 68% credible region comprises 60% of the uniform prior range [20, 140] km s−1 Mpc−1, and it depends on the assumed prior range. If we take a broader prior of [10, 220] km s−1 Mpc−1, we find H0 = 78^{+96}_{-24} km s−1 Mpc−1 (57% of the prior range). Although a weak constraint on the Hubble constant from a single event is expected using the dark siren method, a multifold increase in the LVC event rate is anticipated in the coming years and combinations of many sirens will lead to improved constraints on H0.
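
A toy numerical illustration of the closing claim, that combining many dark sirens narrows the H0 constraint: each event's likelihood is crudely modeled as a broad Gaussian over the quoted flat prior range, and posteriors are multiplied. All widths and scatters below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
H0_grid = np.linspace(20, 140, 1201)           # flat prior range, km/s/Mpc

def event_likelihood(h0_true=70.0, width=30.0):
    """A single dark siren's broad H0 likelihood (toy Gaussian model)."""
    center = rng.normal(h0_true, width / 3)    # event-to-event scatter
    return np.exp(-0.5 * ((H0_grid - center) / width) ** 2)

def credible_width(posterior, level=0.68):
    cdf = np.cumsum(posterior) / posterior.sum()
    lo = H0_grid[np.searchsorted(cdf, (1 - level) / 2)]
    hi = H0_grid[np.searchsorted(cdf, (1 + level) / 2)]
    return hi - lo

for n_events in (1, 10, 100):
    posterior = np.ones_like(H0_grid)          # flat prior over the grid
    for _ in range(n_events):
        posterior *= event_likelihood()        # combine independent events
    print(f"{n_events:3d} events: 68% width ~ {credible_width(posterior):.1f} km/s/Mpc")
```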

Journal ArticleDOI
TL;DR: Testing for human immunodeficiency virus resistance in drug-naive individuals and in patients in whom antiretroviral treatment is failing, and the appreciation of the role of testing, are crucial to the prevention and management of failure of ART.
Abstract: Background Contemporary antiretroviral therapies (ART) and management strategies have diminished both human immunodeficiency virus (HIV) treatment failure and the acquired resistance to drugs in resource-rich regions, but transmission of drug-resistant viruses has not similarly decreased. In low- and middle-income regions, ART roll-out has improved outcomes, but has resulted in increasing acquired and transmitted resistances. Our objective was to review resistance to ART drugs and methods to detect it, and to provide updated recommendations for testing and monitoring for drug resistance in HIV-infected individuals. Methods A volunteer panel of experts appointed by the International Antiviral (formerly AIDS) Society-USA reviewed relevant peer-reviewed data that were published or presented at scientific conferences. Recommendations were rated according to the strength of the recommendation and quality of the evidence, and reached by full panel consensus. Results Resistance testing remains a cornerstone of ART. It is recommended in newly-diagnosed individuals and in patients in whom ART has failed. Testing for transmitted integrase strand-transfer inhibitor resistance is currently not recommended, but this may change as more resistance emerges with widespread use. Sanger-based and next-generation sequencing approaches are each suited for genotypic testing. Testing for minority variants harboring drug resistance may only be considered if treatments depend on a first-generation nonnucleoside analogue reverse transcriptase inhibitor. Different HIV-1 subtypes do not need special considerations regarding resistance testing. Conclusions Testing for HIV drug resistance in drug-naive individuals and in patients in whom antiretroviral drugs are failing, and the appreciation of the role of testing, are crucial to the prevention and management of failure of ART.

Journal ArticleDOI
TL;DR: A majority of models underestimate the extremeness of impacts in important sectors such as agriculture, terrestrial ecosystems, and heat-related human mortality, while impacts on water resources and hydropower are overestimated in some river basins; and the spread across models is often large.
Abstract: Global impact models represent process-level understanding of how natural and human systems may be affected by climate change. Their projections are used in integrated assessments of climate change. Here we test, for the first time, systematically across many important systems, how well such impact models capture the impacts of extreme climate conditions. Using the 2003 European heat wave and drought as a historical analogue for comparable events in the future, we find that a majority of models underestimate the extremeness of impacts in important sectors such as agriculture, terrestrial ecosystems, and heat-related human mortality, while impacts on water resources and hydropower are overestimated in some river basins; and the spread across models is often large. This has important implications for economic assessments of climate change impacts that rely on these models. It also means that societal risks from future extreme events may be greater than previously thought.
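
The evaluation logic described above can be made concrete with a toy comparison of simulated versus observed 2003 anomalies: count how many models produce a weaker anomaly than observed, and measure the ensemble spread. The values below are fabricated for illustration only.

```python
import numpy as np

observed_anomaly = -20.0    # e.g. a percent yield anomaly in 2003 (invented)
model_anomalies = np.array([-8.0, -12.0, -15.0, -22.0, -9.0, -18.0, -25.0])

# A model "underestimates extremeness" if its anomaly is weaker
# (less negative) than the observed one.
underestimates = model_anomalies > observed_anomaly
spread = model_anomalies.max() - model_anomalies.min()

print(f"models underestimating: {underestimates.sum()} of {len(model_anomalies)}")
print(f"ensemble spread:        {spread:.1f} percentage points")
```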

Journal ArticleDOI
TL;DR: A diagnostic algorithm is proposed through which a clinically relevant MCA can be suspected and MCAS can subsequently be documented or excluded and should help guide the investigating care providers to consider the 2 principal diagnoses that may underlie MCAS, namely, severe allergy and systemic mastocytosis accompanied by severe MCA.

Journal ArticleDOI
01 Jan 2019 - Allergy
TL;DR: These guidelines aim to give precise definitions and provide the background needed for doctors to correctly classify cutaneous drug hypersensitivity reactions (CDHR).
Abstract: Drug hypersensitivity reactions (DHRs) are common, and the skin is by far the most frequently involved organ with a broad spectrum of reaction types. The diagnosis of cutaneous DHRs (CDHR) may be difficult because of multiple differential diagnoses. A correct classification is important for the correct diagnosis and management. With these guidelines, we aim to give precise definitions and provide the background needed for doctors to correctly classify CDHR.

Journal ArticleDOI
TL;DR: This work reviews the most significant approaches to quantum verification and compares them in terms of structure, complexity and required resources and comments on the use of cryptographic techniques which, for many of the presented protocols, has proven extremely useful in performing verification.
Abstract: Quantum computers promise to efficiently solve not only problems believed to be intractable for classical computers, but also problems for which verifying the solution is also considered intractable. This raises the question of how one can check whether quantum computers are indeed producing correct results. This task, known as quantum verification, has been highlighted as a significant challenge on the road to scalable quantum computing technology. We review the most significant approaches to quantum verification and compare them in terms of structure, complexity and required resources. We also comment on the use of cryptographic techniques which, for many of the presented protocols, has proven extremely useful in performing verification. Finally, we discuss issues related to fault tolerance, experimental implementations and the outlook for future protocols.


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Sheelu Abraham3 +1222 more (135 institutions)
TL;DR: In this article, the results of an all-sky search for continuous gravitational waves (CWs), which can be produced by fast spinning neutron stars with an asymmetry around their rotation axis, were presented.
Abstract: We present results of an all-sky search for continuous gravitational waves (CWs), which can be produced by fast spinning neutron stars with an asymmetry around their rotation axis, using data from the second observing run of the Advanced LIGO detectors. Three different semicoherent methods are used to search in a gravitational-wave frequency band from 20 to 1922 Hz and a first frequency derivative from −1×10^-8 to 2×10^-9 Hz/s. None of these searches has found clear evidence for a CW signal, so upper limits on the gravitational-wave strain amplitude are calculated, which for this broad range in parameter space are the most sensitive ever achieved.
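
A toy version of the semicoherent idea behind these searches: split the data into segments, sum per-segment power spectra, and let a weak persistent line accumulate across segments. This is a minimal stack-slide sketch with an injected sine wave, not the LIGO pipelines, which also track frequency evolution, spindown, and sky position.

```python
import numpy as np

fs, n_seg, seg_len = 4096, 64, 4096       # sample rate, segments, samples/segment
f_signal, amp = 123.0, 0.03               # weak continuous-wave line (invented)
rng = np.random.default_rng(1)

t = np.arange(n_seg * seg_len) / fs
data = rng.normal(0, 1, t.size) + amp * np.sin(2 * np.pi * f_signal * t)

freqs = np.fft.rfftfreq(seg_len, 1 / fs)
stacked = np.zeros(freqs.size)
for k in range(n_seg):
    seg = data[k * seg_len:(k + 1) * seg_len]
    stacked += np.abs(np.fft.rfft(seg)) ** 2   # sum power, discarding phase

single = np.abs(np.fft.rfft(data[:seg_len])) ** 2
print(f"one segment loudest: {freqs[single.argmax()]:.1f} Hz (likely a noise bin)")
print(f"stacked loudest:     {freqs[stacked.argmax()]:.1f} Hz (injected {f_signal} Hz)")
```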

Journal ArticleDOI
TL;DR: The TFL appears to be a real alternative to the Ho:YAG laser and may become a true game-changer in laser lithotripsy; further studies are needed to broaden the understanding of the TFL and to comprehend the full implications, benefits, and limitations of this new technology.
Abstract: The Holmium:yttrium-aluminum-garnet (Ho:YAG) laser has been the gold-standard for laser lithotripsy over the last 20 years. However, recent reports about a new prototype thulium fiber laser (TFL) lithotripter have revealed impressive levels of performance. We therefore decided to systematically review the reality and expectations for this new TFL technology. This review was registered in the PROSPERO registry (CRD42019128695). A PubMed search was performed for papers including specific terms relevant to this systematic review published between the years 2015 and 2019, including already accepted but not yet published papers. Additionally, the medical sections of ScienceDirect, Wiley, SpringerLink, Mary Ann Liebert publishers, and Google Scholar were also searched for peer-reviewed abstract presentations. All relevant studies and data identified in the bibliographic search were selected, categorized, and summarized. The authors adhered to PRISMA guidelines for this review. The TFL emits laser radiation at a wavelength of 1,940 nm, and has an optical penetration depth in water about four times shorter than the Ho:YAG laser. This results in four times lower stone ablation thresholds, as well as lower tissue ablation thresholds. As the TFL uses electronically-modulated laser diodes, it offers the most comprehensive and flexible range of laser parameters among laser lithotripters, with pulse frequencies up to 2,200 Hz, very low to very high pulse energies (0.005-6 J), short to very long-pulse durations (200 µs up to 12 ms), and a total power level up to 55 W. The stone ablation efficiency is up to four times that of the Ho:YAG laser for similar laser parameters, with associated implications for speed and operating time. When using dusting settings, the TFL outperforms the Ho:YAG laser in dust quantity and quality, producing much finer particles. Retropulsion is also significantly reduced and sometimes even absent with the TFL. The TFL can use small laser fibers (as small as 50 µm core), with resulting advantages in irrigation, scope deflection, retropulsion reduction, and (in)direct effects on accessibility, visibility, efficiency, and surgical time, as well as offering future miniaturization possibilities. Similar to the Ho:YAG laser, the TFL can also be used for soft tissue applications such as prostate enucleation (ThuFLEP). The TFL machine itself is seven times smaller and eight times lighter than a high-power Ho:YAG laser system, and consumes nine times less energy. Maintenance is expected to be very low due to the durability of its components. The safety profile is also better in many aspects, i.e., for patients, instruments, and surgeons. The advantages of the TFL over the Ho:YAG laser are simply too extensive to be ignored. The TFL appears to be a real alternative to the Ho:YAG laser and may become a true game-changer in laser lithotripsy. Due to its novelty, further studies are needed to broaden our understanding of the TFL, and comprehend the full implications and benefits of this new technology, as well as its limitations.
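
The quoted parameter ranges interact through a simple constraint: average power equals pulse energy times pulse frequency, capped here at the stated 55 W. The sketch below checks a few illustrative (assumed) settings against that cap.

```python
# Average power = pulse energy x pulse frequency; the TFL settings quoted
# above are bounded by a 55 W total power cap. Example settings are invented.

MAX_POWER_W = 55.0

for energy_j, freq_hz in [(0.005, 2200), (0.025, 2200), (0.5, 100), (6.0, 10)]:
    avg_w = energy_j * freq_hz
    ok = "ok" if avg_w <= MAX_POWER_W else "exceeds cap"
    print(f"{energy_j:>6} J x {freq_hz:>5} Hz = {avg_w:>6.1f} W  ({ok})")
```

It shows, for instance, that the 2,200 Hz maximum repetition rate is only usable at low pulse energies, while the 6 J maximum pulse energy forces low repetition rates.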

Journal Article
TL;DR: The 2019 edition of the IAS-USA drug resistance mutations list updates the Figure to assist practitioners in identifying key mutations associated with resistance to antiretroviral drugs, and therefore, in making clinical decisions regarding antiretroviral therapy.
Abstract: The 2019 edition of the IAS-USA drug resistance mutations list updates the Figure last published in January 2017. The mutations listed are those that have been identified by specific criteria for evidence and drugs described. The Figure is designed to assist practitioners in identifying key mutations associated with resistance to antiretroviral drugs, and therefore, in making clinical decisions regarding antiretroviral therapy.

Journal ArticleDOI
TL;DR: It is demonstrated that tumor-cell expression of the alarmin IL-33 was necessary and sufficient for eosinophil-mediated anti-tumor responses and that this mechanism contributed to the efficacy of checkpoint-inhibitor therapy.
Abstract: Post-translational modification of chemokines mediated by the dipeptidyl peptidase DPP4 (CD26) has been shown to negatively regulate lymphocyte trafficking, and its inhibition enhances T cell migration and tumor immunity by preserving functional chemokine CXCL10. By extending those initial findings to pre-clinical models of hepatocellular carcinoma and breast cancer, we discovered a distinct mechanism by which inhibition of DPP4 improves anti-tumor responses. Administration of the DPP4 inhibitor sitagliptin resulted in higher concentrations of the chemokine CCL11 and increased migration of eosinophils into solid tumors. Enhanced tumor control was preserved in mice lacking lymphocytes and was ablated after depletion of eosinophils or treatment with degranulation inhibitors. We further demonstrated that tumor-cell expression of the alarmin IL-33 was necessary and sufficient for eosinophil-mediated anti-tumor responses and that this mechanism contributed to the efficacy of checkpoint-inhibitor therapy. These findings provide insight into IL-33- and eosinophil-mediated tumor control, revealed when endogenous mechanisms of DPP4 immunoregulation are inhibited. Eosinophils have been described mainly in allergy settings but are increasingly appreciated as being involved in other aspects of immunity. Albert and colleagues use a clinically approved inhibitor of the dipeptidyl peptidase DPP4 to facilitate the recruitment of eosinophils to mouse tumors, where they are essential in tumor destruction.

Journal ArticleDOI
01 Oct 2019 - Allergy
TL;DR: The aims of this position paper were to provide recommendations for the investigation of immediate‐type perioperative hypersensitivity reactions and to provide practical information that can assist clinicians in planning and carrying out investigations.
Abstract: Perioperative immediate hypersensitivity reactions are rare. Subsequent allergy investigation is complicated by multiple simultaneous drug exposures, the use of drugs with potent effects and the many differential diagnoses to hypersensitivity in the perioperative setting. The approach to the investigation of these complex reactions is not standardized, and it is becoming increasingly apparent that collaboration between experts in the field of allergy/immunology/dermatology and anaesthesiology is needed to provide the best possible care for these patients. The EAACI task force behind this position paper has therefore combined the expertise of allergists, immunologists and anaesthesiologists. The aims of this position paper were to provide recommendations for the investigation of immediate-type perioperative hypersensitivity reactions and to provide practical information that can assist clinicians in planning and carrying out investigations.

Journal ArticleDOI
TL;DR: This work proposes the first methodology to infer dynamic Origin-Destination flows by transport mode using mobile network data, e.g., Call Detail Records, and generates time-variant road and rail passenger flows for the complete region.
Abstract: Fast urbanization generates increasing amounts of travel flows, urging the need for efficient transport planning policies. In parallel, mobile phone data have emerged as the largest mobility data source, but are not yet integrated into transport planning models. Currently, transport authorities lack a global picture of daily passenger flows on multimodal transport networks. In this work, we propose the first methodology to infer dynamic Origin-Destination flows by transport mode using mobile network data, e.g., Call Detail Records. For this study, we pre-process 360 million trajectories for more than 2 million devices from the Greater Paris, our case study region. The model combines mobile network geolocation with transport network geospatial data, travel survey, census and travel card data. The transport modes of mobile network trajectories are identified through a two-step semi-supervised learning algorithm. The latter involves clustering of mobile network areas and Bayesian inference to generate transport-mode probabilities for trajectories. After attributing the mode with the highest probability to each trajectory, we construct Origin-Destination matrices by transport mode. Flows are up-scaled to the total population using state-of-the-art expansion factors. The model generates time-variant road and rail passenger flows for the complete region. From our results, we observe different mobility patterns for road and rail modes and between Paris and its suburbs. The resulting transport flows are extensively validated against the travel survey and the travel card data at different spatial scales.
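
A hedged, minimal sketch of the pipeline's final steps as described above: per-trajectory mode probabilities (invented here; in the paper these come from the clustering-plus-Bayesian step), argmax mode assignment, OD aggregation, and up-scaling. The constant expansion factor is a placeholder for the paper's segment-specific factors.

```python
import pandas as pd

# Fabricated trajectories with invented mode probabilities.
trips = pd.DataFrame({
    "origin":      ["Paris", "Paris", "Suburb", "Suburb", "Paris"],
    "destination": ["Suburb", "Paris", "Paris", "Suburb", "Suburb"],
    "p_road":      [0.2, 0.7, 0.4, 0.9, 0.3],
    "p_rail":      [0.8, 0.3, 0.6, 0.1, 0.7],
})

# Assign each trajectory the mode with the highest probability.
trips["mode"] = trips[["p_road", "p_rail"]].idxmax(axis=1).str[2:]  # strip "p_"

# Up-scale observed devices to total population (assumed constant factor).
EXPANSION = 5.5
od = trips.groupby(["origin", "destination", "mode"]).size() * EXPANSION
print(od.rename("trips").reset_index())
```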


Journal ArticleDOI
TL;DR: A clear association between ICI use and increased diagnosis of Ma2-PNS is shown; physicians need to be aware that ICIs can trigger Ma2-PNS because the clinical presentation can be challenging.
Abstract: Objective To report the induction of anti–Ma2 antibody–associated paraneoplastic neurologic syndrome (Ma2-PNS) in 6 patients after treatment with immune checkpoint inhibitors (ICIs). We also analyzed (1) patient clinical features compared with a cohort of 44 patients who developed Ma2-PNS without receiving ICI treatment and (2) the frequency of neuronal antibody detection before and after ICI implementation. Methods Retrospective nationwide study of all patients with Ma2-PNS developed during ICI treatment between 2017 and 2018. Results Our series of patients included 5 men and 1 woman (median age, 63 years). The patients were receiving nivolumab (n = 3), pembrolizumab (n = 2), or a combination of nivolumab and ipilimumab (n = 1) for treatment of neoplasms that included lung (n = 4) and kidney (n = 1) cancers and pleural mesothelioma (n = 1). Clinical syndromes comprised a combination of limbic encephalitis and diencephalitis (n = 3), isolated limbic encephalitis (n = 2), and a syndrome characterized by ophthalmoplegia and head drop (n = 1). No significant clinical difference was observed between our 6 patients and the overall cohort of Ma2-PNS cases. Post-ICI Ma2-PNS accounted for 35% of the total 17 Ma2-PNS diagnosed in our center over the 2017–2018 biennium. Eight cases had been detected in the preceding biennium 2015–2016, corresponding to a 112% increase of Ma2-PNS frequency since the implementation of ICIs in France. Despite ICI withdrawal and immunotherapy, 4/6 patients died, and the remaining 2 showed a moderate to severe disability. Conclusions We show a clear association between ICI use and increased diagnosis of Ma2-PNS. Physicians need to be aware that ICIs can trigger Ma2-PNS because clinical presentation can be challenging.

Journal ArticleDOI
TL;DR: This paper lays out a future vision of ocean best practices and shows how the Ocean Best Practices System (OBPS) will contribute to improving ocean observing in the decade to come.
Abstract: The oceans play a key role in global issues such as climate change, food security, and human health. Given their vast dimensions and internal complexity, efficient monitoring and predicting of the planet’s ocean must be a collaborative effort of both regional and global scale. A first and foremost requirement for such collaborative ocean observing is the need to follow well-defined and reproducible methods across activities: from strategies for structuring observing systems, sensor deployment and usage, and the generation of data and information products, to ethical and governance aspects when executing ocean observing. To meet the urgent, planet-wide challenges we face, methods across all aspects of ocean observing should be broadly adopted by the ocean community and, where appropriate, should evolve into “Ocean Best Practices.” While many groups have created best practices, they are scattered across the Web or buried in local repositories and many have yet to be digitized. To reduce this fragmentation, we introduce a new open access, permanent, digital repository of best practices documentation (oceanbestpractices.org) that is part of the Ocean Best Practices System (OBPS). The new OBPS provides an opportunity space for the centralized and coordinated improvement of ocean observing methods. The OBPS repository employs user-friendly software to significantly improve discovery and access to methods. The software includes advanced semantic technologies for search capabilities to enhance repository operations. In addition to the repository, the OBPS also includes a peer reviewed journal research topic, a forum for community discussion and a training activity for use of best practices. Together, these components serve to realize a core objective of the OBPS, which is to enable the ocean community to create superior methods for every activity in ocean observing from research to operations to applications that are agreed upon and broadly adopted across communities. Using selected ocean observing examples, we show how the OBPS supports this objective. This paper lays out a future vision of ocean best practices and how OBPS will contribute to improving ocean observing in the decade to come.

Journal ArticleDOI
TL;DR: The paper collects the answers of the authors to the following questions: Is the lack of precision in the definition of many chemical concepts one of the reasons for the coexistence of many partition schemes?
Abstract: The paper collects the answers of the authors to the following questions: Is the lack of precision in the definition of many chemical concepts one of the reasons for the coexistence of many partition schemes? Does the adoption of a given partition scheme imply a set of more precise definitions of the underlying chemical concepts? How can one use the results of a partition scheme to improve the clarity of definitions of concepts? Are partition schemes subject to scientific Darwinism? If so, what is the influence of a community's sociological pressure in the "natural selection" process? To what extent do/can/should the investigated systems influence the choice of a particular partition scheme? Do we need more focused chemical validation of Energy Decomposition Analysis (EDA) methodology and descriptors/terms in general? Is there any interest in developing common benchmarks and test sets for cross-validation of methods? Is it possible to contemplate a unified partition scheme (let us call it the "standard model" of partitioning), that is proper for all applications in chemistry, in the foreseeable future or even in principle? In the end, science is about experiments and the real world. Can any experiment or experimental data, therefore, be used to favor one partition scheme over another?