Showing papers by "Pierre-and-Marie-Curie University" published in 2019
••
TL;DR: The epidemiology, treatment and management of the various immune-related adverse events that can occur in patients receiving immune-checkpoint inhibitors are described.
Abstract: Immune-checkpoint inhibitors (ICIs), including anti-cytotoxic T lymphocyte antigen 4 (CTLA-4), anti-programmed cell death 1 (PD-1) and anti-programmed cell death 1 ligand 1 (PD-L1) antibodies, are arguably the most important development in cancer therapy over the past decade. The indications for these agents continue to expand across malignancies and disease settings, thus reshaping many of the previous standard-of-care approaches and bringing new hope to patients. One of the costs of these advances is the emergence of a new spectrum of immune-related adverse events (irAEs), which are often distinctly different from the classical chemotherapy-related toxicities. Owing to the growing use of ICIs in oncology, clinicians will increasingly be confronted with common but also rare irAEs; hence, awareness needs to be raised regarding the clinical presentation, diagnosis and management of these toxicities. In this Review, we provide an overview of the various types of irAEs that have emerged to date. We discuss the epidemiology of these events and their kinetics, risk factors, subtypes and pathophysiology, as well as new insights regarding screening and surveillance strategies. We also highlight the most important aspects of the management of irAEs.
1,032 citations
••
TL;DR: The Large Synoptic Survey Telescope (LSST) as discussed by the authors is a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachon in northern Chile.
Abstract: We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the solar system, exploring the transient optical sky, and mapping the Milky Way. LSST will be a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachon in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg2 field of view, a 3.2-gigapixel camera, and six filters (ugrizy) covering the wavelength range 320–1050 nm. The project is in the construction phase and will begin regular survey operations by 2022. About 90% of the observing time will be devoted to a deep-wide-fast survey mode that will uniformly observe an 18,000 deg2 region about 800 times (summed over all six bands) during the anticipated 10 yr of operations and will yield a co-added map to r ~ 27.5. These data will result in databases including about 32 trillion observations of 20 billion galaxies and a similar number of stars, and they will serve the majority of the primary science programs. The remaining 10% of the observing time will be allocated to special projects such as Very Deep and Very Fast time domain surveys, whose details are currently under discussion. We illustrate how the LSST science drivers led to these choices of system parameters, and we describe the expected data products and their characteristics.
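As a rough consistency sketch, the survey numbers quoted above (footprint area, visits per field, field of view, survey length) imply a back-of-envelope pointing count. The per-night figure is an order-of-magnitude estimate derived only from these quoted values, not an official LSST cadence.

```python
# Back-of-envelope estimate from the abstract's quoted survey parameters.

area_deg2 = 18_000       # main-survey footprint (deg^2)
visits_per_field = 800   # visits per field, summed over all six bands
fov_deg2 = 9.6           # camera field of view (deg^2)
years = 10               # anticipated survey duration

# Total pointings needed to visit the whole footprint 800 times.
total_pointings = area_deg2 * visits_per_field / fov_deg2

# Naive average per calendar night (ignores weather, downtime, seasons).
per_night = total_pointings / (years * 365)

print(f"{total_pointings:.2e} pointings, ~{per_night:.0f} per night")
```

The result, roughly 1.5 million pointings or about 400 per night, is a crude average that simply illustrates the scale of the deep-wide-fast program implied by the quoted numbers.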
921 citations
••
Charles University in Prague1, Medical University of Vienna2, University of Regensburg3, Pierre-and-Marie-Curie University4, Royal Surrey County Hospital5, Autonomous University of Barcelona6, Netherlands Cancer Institute7, University of Paris8, European Association of Urology9, Medical University of Graz10, First Faculty of Medicine, Charles University in Prague11, Royal Free London NHS Foundation Trust12, University of Rennes13
TL;DR: The European Association of Urology Non-muscle-invasive Bladder Cancer (NMIBC) Panel has released an updated version of their guidelines, which contains information on classification, risk factors, diagnosis, prognostic factors, and treatment of NMIBC.
897 citations
••
University of Saskatchewan1, Centre for Environment, Fisheries and Aquaculture Science2, Natural History Museum3, University of Rhode Island4, Academy of Sciences of the Czech Republic5, Sewanee: The University of the South6, National Institutes of Health7, Saint Petersburg State University8, University of Salzburg9, Centre national de la recherche scientifique10, Mississippi State University11, Science for Life Laboratory12, Uppsala University13, Charles University in Prague14, Spanish National Research Council15, University of Duisburg-Essen16, Kaiserslautern University of Technology17, University of Oslo18, Dalhousie University19, Pierre-and-Marie-Curie University20, American Museum of Natural History21, University of Michigan22, University of Warsaw23, University of São Paulo24, University of Paris25, University of British Columbia26, University of Guelph27, Royal Botanic Garden Edinburgh28, Kyungpook National University29, University of Geneva30, University of Alabama31, Pompeu Fabra University32, Edinburgh Napier University33, University of Arkansas34, Hosei University35, Oklahoma State University–Stillwater36, Chinese Academy of Sciences37
TL;DR: The revision confirms that eukaryotes form at least two domains, reflects the loss of monophyly in the Excavata, provides robust support for the Haptista and Cryptista, and suggests primer sets for DNA sequences from environmental samples that are effective for each clade.
Abstract: This revision of the classification of eukaryotes follows that of Adl et al., 2012 [J. Euk. Microbiol. 59(5)] and retains an emphasis on protists. Changes since have improved the resolution of many ...
750 citations
••
TL;DR: In this paper, the authors improved initial estimates of the binary's properties, including component masses, spins, and tidal parameters, using the known source location, improved modeling, and recalibrated Virgo data.
Abstract: On August 17, 2017, the Advanced LIGO and Advanced Virgo gravitational-wave detectors observed a low-mass compact binary inspiral. The initial sky localization of the source of the gravitational-wave signal, GW170817, allowed electromagnetic observatories to identify NGC 4993 as the host galaxy. In this work, we improve initial estimates of the binary's properties, including component masses, spins, and tidal parameters, using the known source location, improved modeling, and recalibrated Virgo data. We extend the range of gravitational-wave frequencies considered down to 23 Hz, compared to 30 Hz in the initial analysis. We also compare results inferred using several signal models, which are more accurate and incorporate additional physical effects as compared to the initial analysis. We improve the localization of the gravitational-wave source to a 90% credible region of 16 deg2. We find tighter constraints on the masses, spins, and tidal parameters, and continue to find no evidence for nonzero component spins. The component masses are inferred to lie between 1.00 and 1.89 M☉ when allowing for large component spins, and to lie between 1.16 and 1.60 M☉ (with a total mass 2.73 +0.04/−0.01 M☉) when the spins are restricted to be within the range observed in Galactic binary neutron stars. Using a precessing model and allowing for large component spins, we constrain the dimensionless spins of the components to be less than 0.50 for the primary and 0.61 for the secondary. Under minimal assumptions about the nature of the compact objects, our constraints for the tidal deformability parameter Λ are (0, 630) when we allow for large component spins, and 300 +420/−230 (using a 90% highest posterior density interval) when restricting the magnitude of the component spins, ruling out several equation-of-state models at the 90% credible level.
Finally, with LIGO and GEO600 data, we use a Bayesian analysis to place upper limits on the amplitude and spectral energy density of a possible postmerger signal.
715 citations
••
University of Hong Kong1, University of California, Los Angeles2, National Taiwan University3, Kindai University4, Yonsei University5, Memorial Sloan Kettering Cancer Center6, Pierre-and-Marie-Curie University7, University of California, San Francisco8, Bristol-Myers Squibb9, University of Navarra10
TL;DR: Though the primary endpoint of overall survival (OS) did not achieve statistical significance vs sorafenib (SOR), nivolumab (NIVO) showed clinically meaningful improvements in OS, objective response rate (ORR), and complete response (CR) rate as first-line (1L) treatment for advanced hepatocellular carcinoma (aHCC) and demonstrated a favorable safety profile consistent with previous reports.
533 citations
••
TL;DR: The Copernicus Atmosphere Monitoring Service (CAMS) reanalysis is the latest global reanalysis dataset of atmospheric composition produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), consisting of three-dimensional, time-consistent atmospheric composition fields, including aerosols and chemical species, as discussed by the authors.
Abstract: The Copernicus Atmosphere Monitoring Service (CAMS) reanalysis is the latest global reanalysis dataset of atmospheric composition produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), consisting of three-dimensional time-consistent atmospheric composition fields, including aerosols and chemical species. The dataset currently covers the period 2003–2016 and will be extended in the future by adding 1 year each year. A reanalysis for greenhouse gases is being produced separately. The CAMS reanalysis builds on the experience gained during the production of the earlier Monitoring Atmospheric Composition and Climate (MACC) reanalysis and CAMS interim reanalysis. Satellite retrievals of total column CO; tropospheric column NO2; aerosol optical depth (AOD); and total column, partial column and profile ozone retrievals were assimilated for the CAMS reanalysis with ECMWF's Integrated Forecasting System. The new reanalysis has an increased horizontal resolution of about 80 km and provides more chemical species at a better temporal resolution (3-hourly analysis fields, 3-hourly forecast fields and hourly surface forecast fields) than the previously produced CAMS interim reanalysis. The CAMS reanalysis has smaller biases compared with most of the independent ozone, carbon monoxide, nitrogen dioxide and aerosol optical depth observations used for validation in this paper than the previous two reanalyses and is much improved and more consistent in time, especially compared to the MACC reanalysis. The CAMS reanalysis is a dataset that can be used to compute climatologies, study trends, evaluate models, benchmark other reanalyses or serve as boundary conditions for regional models for past periods.
450 citations
••
Columbia University1, Stockholm University2, University of Bologna3, University of Mainz4, University of Münster5, University of Coimbra6, University of Turin7, New York University Abu Dhabi8, University of Zurich9, Rensselaer Polytechnic Institute10, University of Amsterdam11, Max Planck Society12, Weizmann Institute of Science13, University of Freiburg14, University of Nantes15, Purdue University16, University of California, San Diego17, University of Chicago18, Nagoya University19, Pierre-and-Marie-Curie University20, Université Paris-Saclay21, Rice University22, University of California, Los Angeles23
TL;DR: Constraints on light dark matter (DM) models using ionization signals in the XENON1T experiment are reported; no DM or CEνNS detection is claimed because not all backgrounds can be modeled.
Abstract: We report constraints on light dark matter (DM) models using ionization signals in the XENON1T experiment. We mitigate backgrounds with strong event selections, rather than requiring a scintillation signal, leaving an effective exposure of (22±3) tonne day. Above ∼0.4 keVee, we observe a low event rate and set constraints on light DM models for masses down to 30 MeV/c2, as well as on absorption of dark photons and axionlike particles for mχ within 0.186–1 keV/c2.
412 citations
••
Columbia University1, Stockholm University2, University of Bologna3, University of Mainz4, University of Münster5, University of Coimbra6, New York University Abu Dhabi7, University of Zurich8, Rensselaer Polytechnic Institute9, University of Amsterdam10, Max Planck Society11, Weizmann Institute of Science12, University of Freiburg13, University of Nantes14, University of California, San Diego15, University of Chicago16, Nagoya University17, Purdue University18, Pierre-and-Marie-Curie University19, Université Paris-Saclay20, Rice University21, University of California, Los Angeles22
TL;DR: The analysis uses the full ton year exposure of XENON1T to constrain the spin-dependent proton-only and neutron-only cases and sets exclusion limits on the WIMP-nucleon interactions.
Abstract: We report the first experimental results on spin-dependent elastic weakly interacting massive particle (WIMP) nucleon scattering from the XENON1T dark matter search experiment. The analysis uses the full ton year exposure of XENON1T to constrain the spin-dependent proton-only and neutron-only cases. No significant signal excess is observed, and a profile likelihood ratio analysis is used to set exclusion limits on the WIMP-nucleon interactions. This includes the most stringent constraint to date on the WIMP-neutron cross section, with a minimum of 6.3×10-42 cm2 at 30 GeV/c2 and 90% confidence level. The results are compared with those from collider searches and used to exclude new parameter space in an isoscalar theory with an axial-vector mediator.
241 citations
••
Chiang Mai University1, Mae Fah Luang University2, Chinese Academy of Sciences3, Ruhr University Bochum4, DSM5, Landcare Research6, Botanic Garden Meise7, University of Tsukuba8, University of Toronto9, New York Botanical Garden10, University of Agriculture, Faisalabad11, Russian Academy of Sciences12, Pierre-and-Marie-Curie University13, Beijing Forestry University14, Ghent University15, University of Amsterdam16, Federal University of Bahia17, Université catholique de Louvain18, Eötvös Loránd University19, West Bengal State University20, University of Miami21, Iranian Research Organization for Science and Technology22, Universidade Federal de Santa Catarina23, Federal University of Pernambuco24, University of Salamanca25, Sewanee: The University of the South26, Purdue University27, University of Pennsylvania28, Hachinohe Institute of Technology29, Clark University30, Seoul National University31, São Paulo Federal Institute of Education, Science and Technology32, Royal Ontario Museum33, University of Gothenburg34, National Museum of Natural History35, American Museum of Natural History36, Federal University of Rio Grande do Norte37, Universidade Federal de Santa Maria38, Instituto Politécnico Nacional39, University of Turin40, Federal University of Paraíba41, University of Tübingen42, Southwest Forestry University43, Royal Botanic Gardens44
TL;DR: Divergence times, used as an additional criterion in ranking, provide further evidence to resolve taxonomic problems in the Basidiomycota taxonomic system, and also provide a better understanding of their phylogeny and evolution.
Abstract: The Basidiomycota constitutes a major phylum of the kingdom Fungi and is second in species numbers to the Ascomycota. The present work provides an overview of all validly published, currently used basidiomycete genera to date in a single document. An outline of all genera of Basidiomycota is provided, which includes 1928 currently used genera names, with 1263 synonyms, which are distributed in 241 families, 68 orders, 18 classes and four subphyla. We provide brief notes for each accepted genus including information on classification, number of accepted species, type species, life mode, habitat, distribution, and sequence information. Furthermore, three phylogenetic analyses with combined LSU, SSU, 5.8s, rpb1, rpb2, and ef1 datasets for the subphyla Agaricomycotina, Pucciniomycotina and Ustilaginomycotina are conducted, respectively. Divergence time estimates are provided to the family level with 632 species from 62 orders, 168 families and 605 genera. Our study indicates that the divergence times of the subphyla in Basidiomycota are 406–430 Mya, classes are 211–383 Mya, and orders are 99–323 Mya, which are largely consistent with previous studies. In this study, all phylogenetically supported families were dated, with the families of Agaricomycotina diverging from 27–178 Mya, Pucciniomycotina from 85–222 Mya, and Ustilaginomycotina from 79–177 Mya. Divergence times as additional criterion in ranking provide additional evidence to resolve taxonomic problems in the Basidiomycota taxonomic system, and also provide a better understanding of their phylogeny and evolution.
233 citations
••
Scripps Institution of Oceanography1, Monterey Bay Aquarium Research Institute2, National Oceanography Centre3, Oregon State University4, Woods Hole Oceanographic Institution5, IFREMER6, University of California7, Tohoku University8, CSIRO Marine and Atmospheric Research9, University of East Anglia10, Atlantic Oceanographic and Meteorological Laboratory11, Met Office12, Ontario Ministry of Natural Resources13, Marine Institute of Memorial University of Newfoundland14, Halifax15, Bjerknes Centre for Climate Research16, Massachusetts Institute of Technology17, Fisheries and Oceans Canada18, Japan Agency for Marine-Earth Science and Technology19, Bedford Institute of Oceanography20, National Oceanic and Atmospheric Administration21, University of Tokyo22, Korea Meteorological Administration23, Bangor University24, South African Weather Service25, Tokyo University of Marine Science and Technology26, Indian National Centre for Ocean Information Services27, University of Washington28, Pierre-and-Marie-Curie University29, Royal Netherlands Meteorological Institute30, National Institute of Water and Atmospheric Research31, Leibniz Institute for Neurobiology32, Cooperative Research Centre33, Polish Academy of Sciences34, Xiamen University35, University of British Columbia36, McMaster University37
TL;DR: The objective is to create a fully global, top-to-bottom, dynamically complete, and multidisciplinary Argo Program that will integrate seamlessly with satellite and with other in situ elements of the Global Ocean Observing System.
Abstract: The Argo Program has been implemented and sustained for almost two decades, as a global array of about 4000 profiling floats. Argo provides continuous observations of ocean temperature and salinity versus pressure, from the sea surface to 2000 dbar. The successful installation of the Argo array and its innovative data management system arose opportunistically from the combination of great scientific need and technological innovation. Through the data system, Argo provides fundamental physical observations with broad societally-valuable applications, built on the cost-efficient and robust technologies of autonomous profiling floats. Following recent advances in platform and sensor technologies, even greater opportunity exists now than 20 years ago to (i) improve Argo's global coverage and value beyond the original design, (ii) extend Argo to span the full ocean depth, (iii) add biogeochemical sensors for improved understanding of oceanic cycles of carbon, nutrients, and ecosystems, and (iv) consider experimental sensors that might be included in the future, for example to document the spatial and temporal patterns of ocean mixing. For Core Argo and each of these enhancements, the past, present, and future progression along a path from experimental deployments to regional pilot arrays to global implementation is described. The objective is to create a fully global, top-to-bottom, dynamically complete, and multidisciplinary Argo Program that will integrate seamlessly with satellite and with other in situ elements of the Global Ocean Observing System (Legler et al., 2015). The integrated system will deliver operational reanalysis and forecasting capability, and assessment of the state and variability of the climate system with respect to physical, biogeochemical, and ecosystems parameters. It will enable basic research of unprecedented breadth and magnitude, and a wealth of ocean-education and outreach opportunities.
••
TL;DR: A new line of evidence is added that the biogeographic origin (evolutionary history) of a species is a determining factor of its potential to cause disruptive environmental impacts.
Abstract: Native plants and animals can rapidly become superabundant and dominate ecosystems, leading some people to claim that native species are no less likely than alien species to cause environmental damage such as biodiversity loss. We compared how frequently alien species and native species have been implicated as drivers of recent extinctions in a comprehensive global database, the 2017 IUCN Red List. Alien species were considered to be a contributing cause of 25% of plant extinctions and 33% of animal extinctions, whereas native species were implicated in less than 3% and 5% of animal and plant extinctions, respectively. When listed as a putative driver of recent extinctions, native species were more often associated with co-occurring drivers than were alien species. Our results add a new line of evidence that the biogeographic origin (evolutionary history) of a species is a determining factor of its potential to cause disruptive environmental impacts.
••
Columbia University1, Stockholm University2, University of Bologna3, University of Mainz4, University of Münster5, University of Coimbra6, University of Turin7, New York University Abu Dhabi8, University of Zurich9, Rensselaer Polytechnic Institute10, University of Amsterdam11, Max Planck Society12, Weizmann Institute of Science13, University of Freiburg14, University of Nantes15, Purdue University16, University of California, San Diego17, University of Chicago18, Nagoya University19, Pierre-and-Marie-Curie University20, Université Paris-Saclay21, Rice University22, University of California, Los Angeles23
TL;DR: A probe of low-mass dark matter with masses down to about 85 MeV/c2 is reported, looking for electronic recoils induced by the Migdal effect and bremsstrahlung using data from the XENON1T experiment; exploiting an approach that uses ionization signals only allows for a lower detection threshold.
Abstract: Direct dark matter detection experiments based on a liquid xenon target are leading the search for dark matter particles with masses above ∼5 GeV/c2, but have limited sensitivity to lighter masses because of the small momentum transfer in dark matter-nucleus elastic scattering. However, there is an irreducible contribution from inelastic processes accompanying the elastic scattering, which leads to the excitation and ionization of the recoiling atom (the Migdal effect) or the emission of a bremsstrahlung photon. In this Letter, we report on a probe of low-mass dark matter with masses down to about 85 MeV/c2 by looking for electronic recoils induced by the Migdal effect and bremsstrahlung using data from the XENON1T experiment. Besides the approach of detecting both scintillation and ionization signals, we exploit an approach that uses ionization signals only, which allows for a lower detection threshold. This analysis significantly enhances the sensitivity of XENON1T to light dark matter previously beyond its reach.
••
Marcelle Soares-Santos, Antonella Palmese, W. G. Hartley, J. Annis, and 1,285 more authors (156 institutions)
TL;DR: In this article, a multi-messenger measurement of the Hubble constant H 0 using the binary-black-hole merger GW170814 as a standard siren, combined with a photometric redshift catalog from the Dark Energy Survey (DES), is presented.
Abstract: We present a multi-messenger measurement of the Hubble constant H0 using the binary black-hole merger GW170814 as a standard siren, combined with a photometric redshift catalog from the Dark Energy Survey (DES). The luminosity distance is obtained from the gravitational wave signal detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO)/Virgo Collaboration (LVC) on 2017 August 14, and the redshift information is provided by the DES Year 3 data. Black hole mergers such as GW170814 are expected to lack bright electromagnetic emission to uniquely identify their host galaxies and build an object-by-object Hubble diagram. However, they are suitable for a statistical measurement, provided that a galaxy catalog of adequate depth and redshift completeness is available. Here we present the first Hubble parameter measurement using a black hole merger. Our analysis results in H0 = 75 +40/−32 km s−1 Mpc−1, which is consistent with both SN Ia and cosmic microwave background measurements of the Hubble constant. The quoted 68% credible region comprises 60% of the uniform prior range [20, 140] km s−1 Mpc−1, and it depends on the assumed prior range. If we take a broader prior of [10, 220] km s−1 Mpc−1, we find H0 = 78 +96/−24 km s−1 Mpc−1 (57% of the prior range). Although a weak constraint on the Hubble constant from a single event is expected using the dark siren method, a multifold increase in the LVC event rate is anticipated in the coming years and combinations of many sirens will lead to improved constraints on H0.
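The stated prior-range fractions follow from simple interval arithmetic on the quoted values. This small sketch (all numbers taken from the abstract above) verifies that the 68% credible intervals span 60% and roughly 57% of the two uniform prior ranges:

```python
# Check what fraction of a uniform prior range a quoted asymmetric
# credible interval occupies.

def interval_fraction(center, minus, plus, prior_lo, prior_hi):
    """Width of [center - minus, center + plus] as a fraction of the prior range."""
    width = (center + plus) - (center - minus)
    return width / (prior_hi - prior_lo)

# H0 = 75 +40/-32 over a uniform prior [20, 140] km/s/Mpc:
# interval [43, 115], width 72, prior width 120 -> 0.60
f1 = interval_fraction(75, 32, 40, 20, 140)

# H0 = 78 +96/-24 over the broader prior [10, 220] km/s/Mpc:
# interval [54, 174], width 120, prior width 210 -> ~0.571
f2 = interval_fraction(78, 24, 96, 10, 220)

print(round(f1, 2), round(f2, 2))  # 0.6 0.57
```

Both results match the percentages quoted in the abstract, which is why the paper stresses that the single-event constraint is prior-dominated.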
••
TL;DR: Testing for human immunodeficiency virus resistance in drug-naive individuals and in patients in whom antiretroviral treatment is failing, and the appreciation of the role of testing, are crucial to the prevention and management of failure of ART.
Abstract: Background Contemporary antiretroviral therapies (ART) and management strategies have diminished both human immunodeficiency virus (HIV) treatment failure and the acquired resistance to drugs in resource-rich regions, but transmission of drug-resistant viruses has not similarly decreased. In low- and middle-income regions, ART roll-out has improved outcomes, but has resulted in increasing acquired and transmitted resistances. Our objective was to review resistance to ART drugs and methods to detect it, and to provide updated recommendations for testing and monitoring for drug resistance in HIV-infected individuals. Methods A volunteer panel of experts appointed by the International Antiviral (formerly AIDS) Society-USA reviewed relevant peer-reviewed data that were published or presented at scientific conferences. Recommendations were rated according to the strength of the recommendation and quality of the evidence, and reached by full panel consensus. Results Resistance testing remains a cornerstone of ART. It is recommended in newly-diagnosed individuals and in patients in whom ART has failed. Testing for transmitted integrase strand-transfer inhibitor resistance is currently not recommended, but this may change as more resistance emerges with widespread use. Sanger-based and next-generation sequencing approaches are each suited for genotypic testing. Testing for minority variants harboring drug resistance may only be considered if treatments depend on a first-generation nonnucleoside analogue reverse transcriptase inhibitor. Different HIV-1 subtypes do not need special considerations regarding resistance testing. Conclusions Testing for HIV drug resistance in drug-naive individuals and in patients in whom antiretroviral drugs are failing, and the appreciation of the role of testing, are crucial to the prevention and management of failure of ART.
••
Potsdam Institute for Climate Impact Research1, University of Nottingham2, East China Normal University3, Centre national de la recherche scientifique4, University of Chicago5, University of Liège6, Pablo de Olavide University7, Dalhousie University8, ETH Zurich9, Wageningen University and Research Centre10, International Institute for Applied Systems Analysis11, University of Giessen12, Université du Québec à Montréal13, McGill University14, Spanish National Research Council15, Humboldt University of Berlin16, University of British Columbia17, University of South Carolina18, University of Cambridge19, National Institute for Environmental Studies20, University of Tokyo21, National Center for Atmospheric Research22, Imperial College London23, Goethe University Frankfurt24, Max Planck Society25, Stockholm University26, Michigan State University27, University of Birmingham28, National Agriculture and Food Research Organization29, University of Natural Resources and Life Sciences, Vienna30, University of Mainz31, Chinese Academy of Sciences32, Auburn University33, World Conservation Monitoring Centre34, Peking University35, Pierre-and-Marie-Curie University36
TL;DR: A majority of models underestimate the extremeness of impacts in important sectors such as agriculture, terrestrial ecosystems, and heat-related human mortality, while impacts on water resources and hydropower are overestimated in some river basins; and the spread across models is often large.
Abstract: Global impact models represent process-level understanding of how natural and human systems may be affected by climate change. Their projections are used in integrated assessments of climate change. Here we test, for the first time, systematically across many important systems, how well such impact models capture the impacts of extreme climate conditions. Using the 2003 European heat wave and drought as a historical analogue for comparable events in the future, we find that a majority of models underestimate the extremeness of impacts in important sectors such as agriculture, terrestrial ecosystems, and heat-related human mortality, while impacts on water resources and hydropower are overestimated in some river basins; and the spread across models is often large. This has important implications for economic assessments of climate change impacts that rely on these models. It also means that societal risks from future extreme events may be greater than previously thought.
••
Medical University of Vienna1, University of Michigan2, Technische Universität München3, Gdańsk Medical University4, Charité5, University of Groningen6, Mayo Clinic7, Paracelsus Private Medical University of Salzburg8, Heidelberg University9, University Medical Center Groningen10, Paris Descartes University11, Stanford University12, Odense University Hospital13, University of Salamanca14, Ludwig Maximilian University of Munich15, University of Salerno16, Pierre-and-Marie-Curie University17, Virginia Commonwealth University18, National Institutes of Health19
TL;DR: A diagnostic algorithm is proposed through which clinically relevant MCA can be suspected and MCAS can subsequently be documented or excluded; the algorithm should help guide the investigating care providers to consider the 2 principal diagnoses that may underlie MCAS, namely, severe allergy and systemic mastocytosis accompanied by severe MCA.
••
TL;DR: These guidelines aim to give precise definitions and provide the background needed for doctors to correctly classify cutaneous drug hypersensitivity reactions (CDHR).
Abstract: Drug hypersensitivity reactions (DHRs) are common, and the skin is by far the most frequently involved organ with a broad spectrum of reaction types. The diagnosis of cutaneous DHRs (CDHR) may be difficult because of multiple differential diagnoses. A correct classification is important for the correct diagnosis and management. With these guidelines, we aim to give precise definitions and provide the background needed for doctors to correctly classify CDHR.
••
TL;DR: This work reviews the most significant approaches to quantum verification and compares them in terms of structure, complexity and required resources and comments on the use of cryptographic techniques which, for many of the presented protocols, has proven extremely useful in performing verification.
Abstract: Quantum computers promise to efficiently solve not only problems believed to be intractable for classical computers, but also problems for which verifying the solution is also considered intractable. This raises the question of how one can check whether quantum computers are indeed producing correct results. This task, known as quantum verification, has been highlighted as a significant challenge on the road to scalable quantum computing technology. We review the most significant approaches to quantum verification and compare them in terms of structure, complexity and required resources. We also comment on the use of cryptographic techniques which, for many of the presented protocols, has proven extremely useful in performing verification. Finally, we discuss issues related to fault tolerance, experimental implementations and the outlook for future protocols.
••
TL;DR: The R2 regimen showed significant activity in patients with R/R PCNSL and PVRL; these results support assessment of the efficacy of R2 combined with methotrexate-based chemotherapy as a first-line treatment of PCNSL.
••
TL;DR: In this article, the results of an all-sky search for continuous gravitational waves (CWs), which can be produced by fast spinning neutron stars with an asymmetry around their rotation axis, were presented.
Abstract: We present results of an all-sky search for continuous gravitational waves (CWs), which can be produced by fast spinning neutron stars with an asymmetry around their rotation axis, using data from the second observing run of the Advanced LIGO detectors. Three different semicoherent methods are used to search in a gravitational-wave frequency band from 20 to 1922 Hz and a first frequency derivative from −1 × 10^−8 to 2 × 10^−9 Hz/s. None of these searches has found clear evidence for a CW signal, so upper limits on the gravitational-wave strain amplitude are calculated, which for this broad range in parameter space are the most sensitive ever achieved.
••
TL;DR: The TFL appears to be a real alternative to the Ho:YAG laser and may become a true game-changer in laser lithotripsy; further studies are needed to broaden understanding of the TFL and to comprehend the full implications, benefits and limitations of this new technology.
Abstract: The Holmium:yttrium-aluminum-garnet (Ho:YAG) laser has been the gold-standard for laser lithotripsy over the last 20 years. However, recent reports about a new prototype thulium fiber laser (TFL) lithotripter have revealed impressive levels of performance. We therefore decided to systematically review the reality and expectations for this new TFL technology. This review was registered in the PROSPERO registry (CRD42019128695). A PubMed search was performed for papers including specific terms relevant to this systematic review published between the years 2015 and 2019, including already accepted but not yet published papers. Additionally, the medical sections of ScienceDirect, Wiley, SpringerLink, Mary Ann Liebert publishers, and Google Scholar were also searched for peer-reviewed abstract presentations. All relevant studies and data identified in the bibliographic search were selected, categorized, and summarized. The authors adhered to PRISMA guidelines for this review. The TFL emits laser radiation at a wavelength of 1,940 nm, and has an optical penetration depth in water about four-times shorter than the Ho:YAG laser. This results in four-times lower stone ablation thresholds, as well as lower tissue ablation thresholds. As the TFL uses electronically-modulated laser diodes, it offers the most comprehensive and flexible range of laser parameters among laser lithotripters, with pulse frequencies up to 2,200 Hz, very low to very high pulse energies (0.005-6 J), short to very long-pulse durations (200 µs up to 12 ms), and a total power level up to 55 W. The stone ablation efficiency is up to four-times that of the Ho:YAG laser for similar laser parameters, with associated implications for speed and operating time. When using dusting settings, the TFL outperforms the Ho:YAG laser in dust quantity and quality, producing much finer particles. Retropulsion is also significantly reduced and sometimes even absent with the TFL. 
The TFL can use small laser fibers (as small as 50 µm core), with resulting advantages in irrigation, scope deflection, retropulsion reduction, and (in)direct effects on accessibility, visibility, efficiency, and surgical time, as well as offering future miniaturization possibilities. Similar to the Ho:YAG laser, the TFL can also be used for soft tissue applications such as prostate enucleation (ThuFLEP). The TFL machine itself is seven times smaller and eight times lighter than a high-power Ho:YAG laser system, and consumes nine times less energy. Maintenance is expected to be very low due to the durability of its components. The safety profile is also better in many aspects, i.e., for patients, instruments, and surgeons. The advantages of the TFL over the Ho:YAG laser are simply too extensive to be ignored. The TFL appears to be a real alternative to the Ho:YAG laser and may become a true game-changer in laser lithotripsy. Due to its novelty, further studies are needed to broaden our understanding of the TFL, and comprehend the full implications and benefits of this new technology, as well as its limitations.
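The parameter ranges above interact: average power is pulse energy times pulse frequency, and the review caps total TFL power at 55 W, so not every (energy, frequency) pair is attainable. A minimal sketch of that constraint, using the reported ranges (the helper function and the example settings are ours, not from the paper):

```python
# Reported TFL parameter ranges (from the review): pulse energy 0.005-6 J,
# pulse frequency up to 2,200 Hz, total average power up to 55 W.
MIN_ENERGY_J = 0.005
MAX_ENERGY_J = 6.0
MAX_FREQ_HZ = 2200
MAX_POWER_W = 55.0

def tfl_setting_feasible(energy_j: float, freq_hz: float) -> bool:
    """Return True if the (energy, frequency) pair lies within the
    reported ranges AND respects the 55 W average-power cap."""
    if not (MIN_ENERGY_J <= energy_j <= MAX_ENERGY_J):
        return False
    if not (0 < freq_hz <= MAX_FREQ_HZ):
        return False
    # Average power = pulse energy x pulse frequency.
    return energy_j * freq_hz <= MAX_POWER_W

# A high-frequency dusting setting: 0.025 J at 2,000 Hz -> 50 W average.
print(tfl_setting_feasible(0.025, 2000))  # True
# A fragmenting setting: 6 J at 20 Hz -> 120 W average, above the cap.
print(tfl_setting_feasible(6.0, 20))      # False
```

This illustrates why the very high pulse frequencies are paired with very low pulse energies in dusting mode: the product is bounded by the machine's total power.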
•
TL;DR: The 2019 edition of the IAS-USA drug resistance mutations list updates the Figure to assist practitioners in identifying key mutations associated with resistance to antiretroviral drugs, and therefore, in making clinical decisions regarding antiretroviral therapy.
Abstract: The 2019 edition of the IAS-USA drug resistance mutations list updates the Figure last published in January 2017. The mutations listed are those that have been identified according to the specific criteria for evidence for the drugs described. The Figure is designed to assist practitioners in identifying key mutations associated with resistance to antiretroviral drugs, and therefore, in making clinical decisions regarding antiretroviral therapy.
••
TL;DR: It is demonstrated that tumor-cell expression of the alarmin IL-33 was necessary and sufficient for eosinophil-mediated anti-tumor responses and that this mechanism contributed to the efficacy of checkpoint-inhibitor therapy.
Abstract: Post-translational modification of chemokines mediated by the dipeptidyl peptidase DPP4 (CD26) has been shown to negatively regulate lymphocyte trafficking, and its inhibition enhances T cell migration and tumor immunity by preserving functional chemokine CXCL10. By extending those initial findings to pre-clinical models of hepatocellular carcinoma and breast cancer, we discovered a distinct mechanism by which inhibition of DPP4 improves anti-tumor responses. Administration of the DPP4 inhibitor sitagliptin resulted in higher concentrations of the chemokine CCL11 and increased migration of eosinophils into solid tumors. Enhanced tumor control was preserved in mice lacking lymphocytes and was ablated after depletion of eosinophils or treatment with degranulation inhibitors. We further demonstrated that tumor-cell expression of the alarmin IL-33 was necessary and sufficient for eosinophil-mediated anti-tumor responses and that this mechanism contributed to the efficacy of checkpoint-inhibitor therapy. These findings provide insight into IL-33- and eosinophil-mediated tumor control, revealed when endogenous mechanisms of DPP4 immunoregulation are inhibited. Eosinophils have been described mainly in allergy settings but are increasingly appreciated as being involved in other aspects of immunity. Albert and colleagues use a clinically approved inhibitor of the dipeptidyl peptidase DPP4 to facilitate the recruitment of eosinophils to mouse tumors, where they are essential in tumor destruction.
••
TL;DR: The aims of this position paper were to provide recommendations for the investigation of immediate‐type perioperative hypersensitivity reactions and to provide practical information that can assist clinicians in planning and carrying out investigations.
Abstract: Perioperative immediate hypersensitivity reactions are rare. Subsequent allergy investigation is complicated by multiple simultaneous drug exposures, the use of drugs with potent effects and the many differential diagnoses to hypersensitivity in the perioperative setting. The approach to the investigation of these complex reactions is not standardized, and it is becoming increasingly apparent that collaboration between experts in the field of allergy/immunology/dermatology and anaesthesiology is needed to provide the best possible care for these patients. The EAACI task force behind this position paper has therefore combined the expertise of allergists, immunologists and anaesthesiologists. The aims of this position paper were to provide recommendations for the investigation of immediate-type perioperative hypersensitivity reactions and to provide practical information that can assist clinicians in planning and carrying out investigations.
••
TL;DR: This work proposes the first methodology to infer dynamic Origin-Destination flows by transport mode using mobile network data (e.g., Call Detail Records), and generates time-variant road and rail passenger flows for the complete region.
Abstract: Fast urbanization generates increasing amounts of travel flows, urging the need for efficient transport planning policies. In parallel, mobile phone data have emerged as the largest mobility data source, but are not yet integrated into transport planning models. Currently, transport authorities lack a global picture of daily passenger flows on multimodal transport networks. In this work, we propose the first methodology to infer dynamic Origin-Destination flows by transport mode using mobile network data (e.g., Call Detail Records). For this study, we pre-process 360 million trajectories for more than 2 million devices from the Greater Paris region as our case study. The model combines mobile network geolocation with transport network geospatial data, travel survey, census and travel card data. The transport modes of mobile network trajectories are identified through a two-step semi-supervised learning algorithm. The latter involves clustering of mobile network areas and Bayesian inference to generate transport probabilities for trajectories. After attributing the mode with highest probability to each trajectory, we construct Origin-Destination matrices by transport mode. Flows are up-scaled to the total population using state-of-the-art expansion factors. The model generates time-variant road and rail passenger flows for the complete region. From our results, we observe different mobility patterns for road and rail modes and between Paris and its suburbs. The resulting transport flows are extensively validated against the travel survey and the travel card data for different spatial scales.
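The two-step assignment described above (clustering of mobile network areas, then Bayesian inference of a mode probability per trajectory, then accumulation into per-mode Origin-Destination matrices) can be sketched as follows. The priors, likelihoods, cluster labels and place names are illustrative placeholders, not the paper's calibrated model:

```python
from collections import defaultdict

# Illustrative mode priors (placeholder values, not the paper's figures),
# e.g. the overall share of trips made by each mode.
PRIOR = {"road": 0.6, "rail": 0.4}

# Likelihood P(observed area cluster | mode): each cluster of mobile
# network cells (step 1 of the algorithm) is more or less compatible with
# a mode, e.g. clusters along rail corridors favour "rail".
LIKELIHOOD = {
    "rail_corridor": {"road": 0.2, "rail": 0.8},
    "road_only":     {"road": 0.9, "rail": 0.1},
}

def mode_posterior(trajectory_clusters):
    """Naive-Bayes-style posterior over modes for one trajectory,
    given the sequence of area clusters it traversed."""
    post = dict(PRIOR)
    for cluster in trajectory_clusters:
        for mode in post:
            post[mode] *= LIKELIHOOD[cluster][mode]
    total = sum(post.values())
    return {mode: p / total for mode, p in post.items()}

def od_matrix(trajectories):
    """Attribute each trajectory to its most probable mode and
    accumulate Origin-Destination counts per mode."""
    od = defaultdict(lambda: defaultdict(int))
    for origin, dest, clusters in trajectories:
        probs = mode_posterior(clusters)
        mode = max(probs, key=probs.get)
        od[mode][(origin, dest)] += 1
    return od

trips = [("Paris", "Versailles", ["rail_corridor", "rail_corridor"]),
         ("Paris", "Orly", ["road_only"])]
flows = od_matrix(trips)
print(dict(flows["rail"]))  # {('Paris', 'Versailles'): 1}
```

In the paper, the resulting per-mode counts would then be up-scaled to the full population with expansion factors; that step is omitted here.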
••
TL;DR: There are signs of late response, but these are not sufficient to justify the development of crizotinib in patients with ROS1-translocated tumours; the continued targeting of c-MET with innovative therapies, however, appears justified.
••
TL;DR: A clear association between ICI use and increased diagnosis of Ma2-PNS is shown; physicians need to be aware that ICIs can trigger Ma2-PNS because the clinical presentation can be challenging.
Abstract: Objective To report the induction of anti–Ma2 antibody–associated paraneoplastic neurologic syndrome (Ma2-PNS) in 6 patients after treatment with immune checkpoint inhibitors (ICIs). We also analyzed (1) patient clinical features compared with a cohort of 44 patients who developed Ma2-PNS without receiving ICI treatment and (2) the frequency of neuronal antibody detection before and after ICI implementation. Methods Retrospective nationwide study of all patients with Ma2-PNS developed during ICI treatment between 2017 and 2018. Results Our series of patients included 5 men and 1 woman (median age, 63 years). The patients were receiving nivolumab (n = 3), pembrolizumab (n = 2), or a combination of nivolumab and ipilimumab (n = 1) for treatment of neoplasms that included lung (n = 4) and kidney (n = 1) cancers and pleural mesothelioma (n = 1). Clinical syndromes comprised a combination of limbic encephalitis and diencephalitis (n = 3), isolated limbic encephalitis (n = 2), and a syndrome characterized by ophthalmoplegia and head drop (n = 1). No significant clinical difference was observed between our 6 patients and the overall cohort of Ma2-PNS cases. Post-ICI Ma2-PNS accounted for 35% of the total 17 Ma2-PNS diagnosed in our center over the 2017–2018 biennium. Eight cases had been detected in the preceding biennium 2015–2016, corresponding to a 112% increase of Ma2-PNS frequency since the implementation of ICIs in France. Despite ICI withdrawal and immunotherapy, 4/6 patients died, and the remaining 2 showed a moderate to severe disability. Conclusions We show a clear association between ICI use and increased diagnosis of Ma2-PNS. Physicians need to be aware that ICIs can trigger Ma2-PNS because clinical presentation can be challenging.
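The reported proportions can be verified directly from the counts given above: 6 post-ICI cases out of 17 total is roughly 35%, and the rise from 8 cases in 2015-2016 to 17 in 2017-2018 is a 112.5% increase (the variable names below are ours):

```python
post_ici_cases = 6       # Ma2-PNS developed during ICI treatment, 2017-2018
total_2017_2018 = 17     # all Ma2-PNS diagnosed at the center, 2017-2018
total_2015_2016 = 8      # all Ma2-PNS in the preceding biennium

share = post_ici_cases / total_2017_2018
increase = (total_2017_2018 - total_2015_2016) / total_2015_2016

print(f"post-ICI share: {share:.0%}")         # post-ICI share: 35%
print(f"frequency increase: {increase:.1%}")  # frequency increase: 112.5%
```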
••
Centre national de la recherche scientifique, Leibniz Institute of Marine Sciences, Alfred Wegener Institute for Polar and Marine Research, Marine Institute of Memorial University of Newfoundland, University of South Florida, Woods Hole Oceanographic Institution, UNESCO, Ocean Networks Canada, Leibniz Institute for Baltic Sea Research, IFREMER, National Oceanic and Atmospheric Administration, Institut de recherche pour le développement, National Research Council, National Oceanography Centre, Oceanic Platform of the Canary Islands, National Institute of Geophysics and Volcanology, Natural Environment Research Council, University of Tasmania, Autonomous University of Barcelona, University Corporation for Atmospheric Research, University of Bologna, Geoscience Australia, Bjerknes Centre for Climate Research, Chesapeake Biological Laboratory, Polish Academy of Sciences, Pierre-and-Marie-Curie University, University of California, San Diego, University of Bremen, Dalhousie University
TL;DR: A future vision of ocean best practices is laid out, showing how the Ocean Best Practices System (OBPS) will contribute to improving ocean observing in the decade to come.
Abstract: The oceans play a key role in global issues such as climate change, food security, and human health. Given their vast dimensions and internal complexity, efficient monitoring and predicting of the planet’s ocean must be a collaborative effort of both regional and global scale. A first and foremost requirement for such collaborative ocean observing is the need to follow well-defined and reproducible methods across activities: from strategies for structuring observing systems, sensor deployment and usage, and the generation of data and information products, to ethical and governance aspects when executing ocean observing. To meet the urgent, planet-wide challenges we face, methods across all aspects of ocean observing should be broadly adopted by the ocean community and, where appropriate, should evolve into “Ocean Best Practices.” While many groups have created best practices, they are scattered across the Web or buried in local repositories and many have yet to be digitized. To reduce this fragmentation, we introduce a new open access, permanent, digital repository of best practices documentation (oceanbestpractices.org) that is part of the Ocean Best Practices System (OBPS). The new OBPS provides an opportunity space for the centralized and coordinated improvement of ocean observing methods. The OBPS repository employs user-friendly software to significantly improve discovery and access to methods. The software includes advanced semantic technologies for search capabilities to enhance repository operations. In addition to the repository, the OBPS also includes a peer reviewed journal research topic, a forum for community discussion and a training activity for use of best practices. 
Together, these components serve to realize a core objective of the OBPS, which is to enable the ocean community to create superior methods, agreed upon and broadly adopted across communities, for every activity in ocean observing, from research to operations to applications. Using selected ocean observing examples, we show how the OBPS supports this objective. This paper lays out a future vision of ocean best practices and how OBPS will contribute to improving ocean observing in the decade to come.
••
James I University, McMaster University, University of Lyon, University of Szczecin, University of Paris, University of Liverpool, University of Marburg, Queen's University, University of Rouen, University of Oviedo, Donostia International Physics Center, Ikerbasque, Hungarian Academy of Sciences, Queen Mary University of London, Western Michigan University, University of Manchester, Chalmers University of Technology, Tsinghua University, University of Siegen, Shahid Beheshti University, University of Delaware, University of Wisconsin-Madison, Pierre-and-Marie-Curie University
TL;DR: The paper collects the answers of the authors to the following questions: Is the lack of precision in the definition of many chemical concepts one of the reasons for the coexistence of many partition schemes?
Abstract: The paper collects the answers of the authors to the following questions:
Is the lack of precision in the definition of many chemical concepts one of the reasons for the coexistence of many partition schemes?
Does the adoption of a given partition scheme imply a set of more precise definitions of the underlying chemical concepts?
How can one use the results of a partition scheme to improve the clarity of definitions of concepts?
Are partition schemes subject to scientific Darwinism? If so, what is the influence of a community's sociological pressure in the “natural selection” process?
To what extent does/can/should investigated systems influence the choice of a particular partition scheme?
Do we need more focused chemical validation of Energy Decomposition Analysis (EDA) methodology and descriptors/terms in general?
Is there any interest in developing common benchmarks and test sets for cross‐validation of methods?
Is it possible to contemplate a unified partition scheme (let us call it the “standard model” of partitioning), that is proper for all applications in chemistry, in the foreseeable future or even in principle?
In the end, science is about experiments and the real world. Can any experiment or experimental data, therefore, be used to favor one partition scheme over another?