Showing papers by "University of Paris" published in 2018
••
Clotilde Théry1, Kenneth W. Witwer2, Elena Aikawa3, María José Alcaraz4 +414 more•Institutions (209)
TL;DR: The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities, and a checklist is provided with summaries of key points.
Abstract: The last decade has seen a sharp increase in the number of scientific publications describing physiological and pathological functions of extracellular vesicles (EVs), a collective term covering various subtypes of cell-released, membranous structures, called exosomes, microvesicles, microparticles, ectosomes, oncosomes, apoptotic bodies, and many other names. However, specific issues arise when working with these entities, whose size and amount often make them difficult to obtain as relatively pure preparations, and to characterize properly. The International Society for Extracellular Vesicles (ISEV) proposed Minimal Information for Studies of Extracellular Vesicles (“MISEV”) guidelines for the field in 2014. We now update these “MISEV2014” guidelines based on evolution of the collective knowledge in the last four years. An important point to consider is that ascribing a specific function to EVs in general, or to subtypes of EVs, requires reporting of specific information beyond mere description of function in a crude, potentially contaminated, and heterogeneous preparation. For example, claims that exosomes are endowed with exquisite and specific activities remain difficult to support experimentally, given our still limited knowledge of their specific molecular machineries of biogenesis and release, as compared with other biophysically similar EVs. The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities. Finally, a checklist is provided with summaries of key points.
5,988 citations
••
Lorenzo Galluzzi1, Lorenzo Galluzzi2, Ilio Vitale3, Stuart A. Aaronson4 +183 more•Institutions (111)
TL;DR: The Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives.
Abstract: Over the past decade, the Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives. Since the field continues to expand and novel mechanisms that orchestrate multiple cell death pathways are unveiled, we propose an updated classification of cell death subroutines focusing on mechanistic and essential (as opposed to correlative and dispensable) aspects of the process. As we provide molecularly oriented definitions of terms including intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic cell death, NETotic cell death, lysosome-dependent cell death, autophagy-dependent cell death, immunogenic cell death, cellular senescence, and mitotic catastrophe, we discuss the utility of neologisms that refer to highly specialized instances of these processes. The mission of the NCCD is to provide a widely accepted nomenclature on cell death in support of the continued development of the field.
3,301 citations
••
University of Pennsylvania1, University of Texas Southwestern Medical Center2, University of Oslo3, Boston Children's Hospital4, University of Utah5, Université de Montréal6, Goethe University Frankfurt7, University of Minnesota8, Children's Mercy Hospital9, Emory University10, Ghent University11, Kyoto University12, Stanford University13, Duke University14, Oregon Health & Science University15, University of Michigan16, Medical University of Vienna17, University of Paris18, Royal Children's Hospital19, University of Milan20, University of Toronto21, Novartis22, University of Southern California23
TL;DR: In this global study of CAR T‐cell therapy, a single infusion of tisagenlecleucel provided durable remission with long‐term persistence in pediatric and young adult patients with relapsed or refractory B‐cell ALL, with transient high‐grade toxic effects.
Abstract: Background: In a single-center phase 1–2a study, the anti-CD19 chimeric antigen receptor (CAR) T-cell therapy tisagenlecleucel produced high rates of complete remission and was associated with serious but mainly reversible toxic effects in children and young adults with relapsed or refractory B-cell acute lymphoblastic leukemia (ALL). Methods: We conducted a phase 2, single-cohort, 25-center, global study of tisagenlecleucel in pediatric and young adult patients with CD19+ relapsed or refractory B-cell ALL. The primary end point was the overall remission rate (the rate of complete remission or complete remission with incomplete hematologic recovery) within 3 months. Results: For this planned analysis, 75 patients received an infusion of tisagenlecleucel and could be evaluated for efficacy. The overall remission rate within 3 months was 81%, with all patients who had a response to treatment found to be negative for minimal residual disease, as assessed by means of flow cytometry. The rates of event-f
3,237 citations
••
TL;DR: In this paper, the cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies were presented, with good consistency with the standard spatially-flat 6-parameter ΛCDM cosmology having a power-law spectrum of adiabatic scalar perturbations, from polarization, temperature, and lensing, separately and in combination.
Abstract: We present cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies. We find good consistency with the standard spatially-flat 6-parameter $\Lambda$CDM cosmology having a power-law spectrum of adiabatic scalar perturbations (denoted "base $\Lambda$CDM" in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density $\Omega_c h^2 = 0.120\pm 0.001$, baryon density $\Omega_b h^2 = 0.0224\pm 0.0001$, scalar spectral index $n_s = 0.965\pm 0.004$, and optical depth $\tau = 0.054\pm 0.007$ (in this abstract we quote $68\,\%$ confidence regions on measured parameters and $95\,\%$ on upper limits). The angular acoustic scale is measured to $0.03\,\%$ precision, with $100\theta_*=1.0411\pm 0.0003$. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the base-$\Lambda$CDM cosmology, the inferred late-Universe parameters are: Hubble constant $H_0 = (67.4\pm 0.5)$ km/s/Mpc; matter density parameter $\Omega_m = 0.315\pm 0.007$; and matter fluctuation amplitude $\sigma_8 = 0.811\pm 0.006$. We find no compelling evidence for extensions to the base-$\Lambda$CDM model. Combining with BAO we constrain the effective extra relativistic degrees of freedom to be $N_{\rm eff} = 2.99\pm 0.17$, and the neutrino mass is tightly constrained to $\sum m_\nu < 0.12$ eV. The CMB spectra continue to prefer higher lensing amplitudes than predicted in base-$\Lambda$CDM at over $2\,\sigma$, which pulls some parameters that affect the lensing amplitude away from the base-$\Lambda$CDM model; however, this is not supported by the lensing reconstruction or (in models that also change the background geometry) BAO data. (Abridged)
3,077 citations
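As a quick consistency check of the quoted base-ΛCDM numbers, the late-Universe matter density can be recomputed from the physical densities given in the abstract. A sketch: the neutrino contribution assumes Σm_ν = 0.06 eV (the Planck baseline, not stated in the abstract) and the standard Ω_ν h² = Σm_ν / 93.14 eV conversion.

```python
# Consistency check on the quoted base-LambdaCDM parameters:
# Omega_m = (Omega_c h^2 + Omega_b h^2 + Omega_nu h^2) / h^2.
omega_c_h2 = 0.120            # cold dark matter density (quoted)
omega_b_h2 = 0.0224           # baryon density (quoted)
h = 0.674                     # from H0 = 67.4 km/s/Mpc (quoted)
omega_nu_h2 = 0.06 / 93.14    # assumed: sum m_nu = 0.06 eV, standard conversion
omega_m = (omega_c_h2 + omega_b_h2 + omega_nu_h2) / h**2
print(round(omega_m, 3))      # → 0.315, matching the quoted Omega_m
```

The agreement with the quoted Ω_m = 0.315 ± 0.007 is expected, since the abstract's parameters are internally consistent.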
••
Lund University1, European Space Agency2, Dresden University of Technology3, Heidelberg University4, Telespazio5, University of Barcelona6, University of Edinburgh7, University of Cambridge8, University of Paris9, Serco Group10, INAF11, University of Bern12, University of Bordeaux13, University of Turin14, European Space Research and Technology Centre15, University of Padua16, Centre national de la recherche scientifique17, Max Planck Society18, University of Geneva19, Chinese Academy of Sciences20, Las Cumbres Observatory Global Telescope Network21, Liverpool John Moores University22, Altec Lansing23, Leiden University24
TL;DR: In this article, the authors describe the input data, models, and processing used for the astrometric content of Gaia DR2, and the validation of these results performed within the ASTR task.
Abstract: Context. Gaia Data Release 2 (Gaia DR2) contains results for 1693 million sources in the magnitude range 3 to 21 based on observations collected by the European Space Agency Gaia satellite during the first 22 months of its operational phase. Aims. We describe the input data, models, and processing used for the astrometric content of Gaia DR2, and the validation of these results performed within the astrometry task. Methods. Some 320 billion centroid positions from the pre-processed astrometric CCD observations were used to estimate the five astrometric parameters (positions, parallaxes, and proper motions) for 1332 million sources, and approximate positions at the reference epoch J2015.5 for an additional 361 million mostly faint sources. These data were calculated in two steps. First, the satellite attitude and the astrometric calibration parameters of the CCDs were obtained in an astrometric global iterative solution for 16 million selected sources, using about 1% of the input data. This primary solution was tied to the extragalactic International Celestial Reference System (ICRS) by means of quasars. The resulting attitude and calibration were then used to calculate the astrometric parameters of all the sources. Special validation solutions were used to characterise the random and systematic errors in parallax and proper motion. Results. For the sources with five-parameter astrometric solutions, the median uncertainty in parallax and position at the reference epoch J2015.5 is about 0.04 mas for bright (G < 14 mag) sources, 0.1 mas at G = 17 mag, and 0.7 mas at G = 20 mag. In the proper motion components the corresponding uncertainties are 0.05, 0.2, and 1.2 mas yr−1, respectively. The optical reference frame defined by Gaia DR2 is aligned with ICRS and is non-rotating with respect to the quasars to within 0.15 mas yr−1. From the quasars and validation solutions we estimate that systematics in the parallaxes depending on position, magnitude, and colour are generally below 0.1 mas, but the parallaxes are on the whole too small by about 0.03 mas. Significant spatial correlations of up to 0.04 mas in parallax and 0.07 mas yr−1 in proper motion are seen on small and intermediate angular scales. Recommendations for using the DR2 astrometry are given in the appendices.
1,836 citations
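A common use of these results is converting a parallax to a distance while accounting for the global zero-point offset quoted in the abstract. A minimal sketch: the simple inversion is only meaningful when the fractional parallax uncertainty is small, and the ~0.03 mas value is the abstract's global estimate, not a per-source correction.

```python
# Naive parallax-to-distance conversion including the ~0.03 mas global
# zero-point offset quoted above (DR2 parallaxes are on the whole too
# small, so the offset is added before inverting).
def distance_pc(parallax_mas, zero_point_mas=0.03):
    return 1000.0 / (parallax_mas + zero_point_mas)

p = 0.5                       # mas, a hypothetical well-measured source
print(round(1000.0 / p))      # 2000 pc without the correction
print(round(distance_pc(p)))  # 1887 pc with it
```

Even this small offset shifts a 2 kpc distance by over 5%, which is why the paper's appendices discuss how to use the astrometry carefully.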
••
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function p(ρ) of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as R1 = 10.8 (+2.0/−1.7) km for the heavier star and R2 = 10.7 (+2.1/−1.5) km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙ as required from electromagnetic observations and employ the equation-of-state parametrization, we further constrain R1 = 11.9 (+1.4/−1.4) km and R2 = 11.9 (+1.4/−1.4) km at the 90% credible level. Finally, we obtain constraints on p(ρ) at supranuclear densities, with pressure at twice nuclear saturation density measured at 3.5 (+2.7/−1.7) × 10^34 dyn cm−2 at the 90% level.
1,595 citations
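For readers more used to nuclear-physics units, the quoted pressure at twice nuclear saturation density can be converted from CGS in one line. This is a pure unit-conversion sketch, nothing model-dependent.

```python
# Convert the quoted pressure at twice nuclear saturation density from
# CGS (dyn cm^-2) to the MeV fm^-3 units common in nuclear physics.
DYN_CM2_TO_PA = 0.1                    # 1 dyn cm^-2 = 0.1 Pa
MEV_FM3_TO_PA = 1.602176634e32         # 1 MeV fm^-3 = 1.602e32 J m^-3
p_pa = 3.5e34 * DYN_CM2_TO_PA
print(round(p_pa / MEV_FM3_TO_PA, 1))  # → 21.8 MeV fm^-3
```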
••
University of East Anglia1, University of Exeter2, Alfred Wegener Institute for Polar and Marine Research3, Max Planck Society4, Ludwig Maximilian University of Munich5, Commonwealth Scientific and Industrial Research Organisation6, Karlsruhe Institute of Technology7, Atlantic Oceanographic and Meteorological Laboratory8, Cooperative Institute for Marine and Atmospheric Studies9, École Normale Supérieure10, Centre national de la recherche scientifique11, University of Maryland, College Park12, University of Virginia13, Flanders Marine Institute14, Oak Ridge National Laboratory15, Woods Hole Research Center16, University of Illinois at Urbana–Champaign17, Geophysical Institute, University of Bergen18, Met Office19, University of California, San Diego20, Utrecht University21, Netherlands Environmental Assessment Agency22, University of Paris23, Oeschger Centre for Climate Change Research24, Tsinghua University25, National Center for Atmospheric Research26, Institute of Arctic and Alpine Research27, National Institute for Environmental Studies28, Hobart Corporation29, Cooperative Research Centre30, Japan Agency for Marine-Earth Science and Technology31, Wageningen University and Research Centre32, University of Groningen33, Bjerknes Centre for Climate Research34, Goddard Space Flight Center35, Leibniz Institute for Baltic Sea Research36, Princeton University37, Leibniz Institute of Marine Sciences38, National Oceanic and Atmospheric Administration39, Auburn University40, Food and Agriculture Organization41, VU University Amsterdam42
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the "global carbon budget" – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use and land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2008–2017), EFF was 9.4±0.5 GtC yr−1, ELUC 1.5±0.7 GtC yr−1, GATM 4.7±0.02 GtC yr−1, SOCEAN 2.4±0.5 GtC yr−1, and SLAND 3.2±0.8 GtC yr−1, with a budget imbalance BIM of 0.5 GtC yr−1 indicating overestimated emissions and/or underestimated sinks. For the year 2017 alone, the growth in EFF was about 1.6% and emissions increased to 9.9±0.5 GtC yr−1. Also for 2017, ELUC was 1.4±0.7 GtC yr−1, GATM was 4.6±0.2 GtC yr−1, SOCEAN was 2.5±0.5 GtC yr−1, and SLAND was 3.8±0.8 GtC yr−1, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 405.0±0.1 ppm averaged over 2017. For 2018, preliminary data for the first 6–9 months indicate a renewed growth in EFF of +2.7% (range of 1.8% to 3.7%) based on national emission projections for China, the US, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. The analysis presented here shows that the mean and trend in the five components of the global carbon budget are consistently estimated over the period 1959–2017, but discrepancies of up to 1 GtC yr−1 persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations show (1) no consensus in the mean and trend in land-use change emissions, (2) a persistent low agreement among the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models, originating outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018, 2016, 2015a, b, 2014, 2013). All results presented here can be downloaded from https://doi.org/10.18160/GCP-2018.
1,458 citations
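The budget imbalance defined in the abstract, BIM = EFF + ELUC − (GATM + SOCEAN + SLAND), can be recomputed from the quoted decadal means. The small mismatch with the quoted 0.5 GtC yr−1 arises only because the abstract's components are rounded.

```python
# BIM = EFF + ELUC - (GATM + SOCEAN + SLAND), using the rounded
# 2008-2017 means quoted above (all in GtC/yr). The abstract quotes
# BIM = 0.5; the 0.1 difference comes from rounding of the components.
EFF, ELUC = 9.4, 1.5
GATM, SOCEAN, SLAND = 4.7, 2.4, 3.2
BIM = (EFF + ELUC) - (GATM + SOCEAN + SLAND)
print(round(BIM, 1))  # → 0.6
```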
••
TL;DR: Among patients with very severe ARDS, 60‐day mortality was not significantly lower with ECMO than with a strategy of conventional mechanical ventilation that included ECMO as rescue therapy, although ECMO was associated with fewer cases of ischemic stroke.
Abstract: Background The efficacy of venovenous extracorporeal membrane oxygenation (ECMO) in patients with severe acute respiratory distress syndrome (ARDS) remains controversial. Methods In an int...
1,435 citations
••
TL;DR: In the largest evaluation of fatal ICI-associated toxic effects published to date to the authors' knowledge, early onset of death with varied causes and frequencies depending on therapeutic regimen is observed.
Abstract: Importance Immune checkpoint inhibitors (ICIs) are now a mainstay of cancer treatment. Although rare, fulminant and fatal toxic effects may complicate these otherwise transformative therapies; characterizing these events requires integration of global data. Objective To determine the spectrum, timing, and clinical features of fatal ICI-associated toxic effects. Design, Setting, and Participants We retrospectively queried a World Health Organization (WHO) pharmacovigilance database (Vigilyze) comprising more than 16 000 000 adverse drug reactions, and records from 7 academic centers. We performed a meta-analysis of published trials of anti–programmed death-1/ligand-1 (PD-1/PD-L1) and anti–cytotoxic T lymphocyte antigen-4 (CTLA-4) to evaluate their incidence using data from large academic medical centers, global WHO pharmacovigilance data, and all published ICI clinical trials of patients with cancer treated with ICIs internationally. Exposures Anti–CTLA-4 (ipilimumab or tremelimumab), anti–PD-1 (nivolumab, pembrolizumab), or anti–PD-L1 (atezolizumab, avelumab, durvalumab). Main Outcomes and Measures Timing, spectrum, outcomes, and incidence of ICI-associated toxic effects. Results Internationally, 613 fatal ICI toxic events were reported from 2009 through January 2018 in Vigilyze. The spectrum differed widely between regimens: in a total of 193 anti–CTLA-4 deaths, most were from colitis (135 [70%]), whereas anti–PD-1/PD-L1–related fatalities were often from pneumonitis (333 [35%]), hepatitis (115 [22%]), and neurotoxic effects (50 [15%]). Combination PD-1/CTLA-4 deaths were frequently from colitis (32 [37%]) and myocarditis (22 [25%]). Fatal toxic effects typically occurred early after therapy initiation for combination therapy, anti–PD-1, and ipilimumab monotherapy (median 14.5, 40, and 40 days, respectively).
Myocarditis had the highest fatality rate (52 [39.7%] of 131 reported cases), whereas endocrine events and colitis had only 2% to 5% reported fatalities; 10% to 17% of other organ-system toxic effects reported had fatal outcomes. Retrospective review of 3545 patients treated with ICIs from 7 academic centers revealed 0.6% fatality rates; cardiac and neurologic events were especially prominent (43%). Median time from symptom onset to death was 32 days. A meta-analysis of 112 trials involving 19 217 patients showed toxicity-related fatality rates of 0.36% (anti–PD-1), 0.38% (anti–PD-L1), 1.08% (anti–CTLA-4), and 1.23% (PD-1/PD-L1 plus CTLA-4). Conclusions and Relevance In the largest evaluation of fatal ICI-associated toxic effects published to date to our knowledge, we observed early onset of death with varied causes and frequencies depending on therapeutic regimen. Clinicians across disciplines should be aware of these uncommon lethal complications.
1,378 citations
••
18 Jun 2018 • TL;DR: Deep Ordinal Regression Network (DORN), as discussed by the authors, discretizes depth with a spacing-increasing strategy and recasts depth network learning as an ordinal regression problem, achieving much higher accuracy and faster convergence than standard regression training.
Abstract: Monocular depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recent methods have gained significant improvement by exploring image-level information and hierarchical features from deep convolutional neural networks (DCNNs). These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions. Besides, existing depth estimation networks employ repeated spatial pooling operations, resulting in undesirable low-resolution feature maps. To obtain high-resolution depth maps, skip-connections or multilayer deconvolution networks are required, which complicates network training and consumes much more computations. To eliminate or at least largely reduce these problems, we introduce a spacing-increasing discretization (SID) strategy to discretize depth and recast depth network learning as an ordinal regression problem. By training the network using an ordinary regression loss, our method achieves much higher accuracy and faster convergence in synch. Furthermore, we adopt a multi-scale network structure which avoids unnecessary spatial pooling and captures multi-scale information in parallel. The proposed deep ordinal regression network (DORN) achieves state-of-the-art results on three challenging benchmarks, i.e., KITTI [16], Make3D [49], and NYU Depth v2 [41], and outperforms existing methods by a large margin.
1,358 citations
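The spacing-increasing discretization described above can be sketched in a few lines: bin edges are uniform in log-depth, so bins widen with distance. The depth range [1, 80] m and K = 8 below are illustrative values, not the paper's exact settings.

```python
import math

# Spacing-increasing discretization (SID): K depth bins uniform in
# log-depth, so bin width grows with distance. Range and K are
# illustrative, not the paper's exact settings.
def sid_thresholds(alpha, beta, K):
    return [math.exp(math.log(alpha) + i * math.log(beta / alpha) / K)
            for i in range(K + 1)]

ts = sid_thresholds(1.0, 80.0, 8)
print([round(t, 2) for t in ts])  # geometrically growing bin edges from 1 to 80
```

Each pixel's depth is then predicted as an ordinal label over these bins rather than a continuous regression target.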
••
Verneri Anttila1, Verneri Anttila2, Brendan Bulik-Sullivan1, Brendan Bulik-Sullivan2 +717 more•Institutions (270)
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.
••
TL;DR: A comprehensive overview of the modern classification algorithms used in EEG-based BCIs is provided, the principles of these methods and guidelines on when and how to use them are presented, and a number of challenges to further advance EEG classification in BCI are identified.
Abstract: Objective: Most current Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types that are used in this field, as described in our 2007 review paper. Now, approximately 10 years after this review publication, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. Approach: We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs, what were the outcomes, and to identify their pros and cons. Main results: We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful although the benefits of transfer learning remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performances on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training samples settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. 
Significance: This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods and guidelines on when and how to use them, and identifies a number of challenges to further advance EEG classification in BCI.
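Shrinkage LDA, singled out above as useful in small-sample settings, can be sketched without any BCI toolbox: the pooled within-class covariance is shrunk toward a scaled identity before inversion, which stabilises the estimate when trials are few. The data below are toy Gaussians and the shrinkage value 0.1 is illustrative (not the analytically optimal Ledoit-Wolf value).

```python
import numpy as np

# Shrinkage LDA on toy two-class "EEG feature" data.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (20, 8))   # class-0 trials x features
X1 = rng.normal(0.5, 1.0, (20, 8))   # class-1 trials, shifted mean
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)

def shrunk_cov(Z, lam=0.1):
    # Blend the empirical covariance with a scaled identity target.
    S = np.cov(Z, rowvar=False)
    return (1 - lam) * S + lam * (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = shrunk_cov(np.vstack([X0 - m0, X1 - m1]))   # pooled, shrunk covariance
w = np.linalg.solve(Sw, m1 - m0)                 # LDA weight vector
b = -w @ (m0 + m1) / 2                           # threshold at the midpoint
pred = (X @ w + b > 0).astype(int)
print((pred == y).mean())                        # training accuracy on toy data
```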
••
18 Jun 2018 • TL;DR: It is argued that the organization of 3D point clouds can be efficiently captured by a structure called a superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements.
Abstract: We propose a novel deep learning-based framework to tackle the challenge of semantic segmentation of large-scale point clouds of millions of points. We argue that the organization of 3D point clouds can be efficiently captured by a structure called superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements. SPGs offer a compact yet rich representation of contextual relationships between object parts, which is then exploited by a graph convolutional network. Our framework sets a new state of the art for segmenting outdoor LiDAR scans (+11.9 and +8.8 mIoU points for both Semantic3D test sets), as well as indoor scans (+12.4 mIoU points for the S3DIS dataset).
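The superpoint-graph construction can be illustrated on a toy example: given each point's superpoint id and the point-level neighbor edges, SPG nodes are superpoints and SPG edges connect superpoints that contain neighboring points. The paper additionally attaches geometric features to nodes and edges, which is omitted here.

```python
# Toy superpoint-graph construction: SPG nodes are superpoints, and an
# SPG edge exists wherever two superpoints contain neighbouring points.
sp = [0, 0, 1, 1, 2, 2]                    # superpoint id of each point
edges = [(0, 1), (1, 2), (3, 4), (4, 5)]   # point-level adjacency, e.g. k-NN
spg_edges = {tuple(sorted((sp[i], sp[j])))
             for i, j in edges if sp[i] != sp[j]}
print(sorted(spg_edges))  # → [(0, 1), (1, 2)]
```

The graph convolutional network then operates on this much smaller graph instead of the raw millions of points.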
•
15 Feb 2018 • TL;DR: It is shown that a bilingual dictionary can be built between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way.
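The refinement step of such unsupervised alignment is an orthogonal Procrustes problem: given roughly paired embedding matrices X (source) and Y (target), the best orthogonal map W minimising ||WX − Y|| comes from the SVD of YXᵀ. A synthetic sketch; the adversarial step that produces the initial pairing is not shown.

```python
import numpy as np

# Orthogonal Procrustes: W = U @ Vt from the SVD of Y @ X.T maps the
# source embeddings onto the target ones. Synthetic, noiseless data.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 100))              # 5-dim embeddings, 100 pairs
R_true, _ = np.linalg.qr(rng.normal(size=(5, 5)))
Y = R_true @ X                             # target = rotated source
U, _, Vt = np.linalg.svd(Y @ X.T)
W = U @ Vt                                 # orthogonal Procrustes solution
print(np.allclose(W @ X, Y))               # → True (exact recovery, no noise)
```

With real embeddings the recovery is approximate, and nearest-neighbor retrieval in the aligned space yields the bilingual dictionary.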
••
Bela Abolfathi1, D. S. Aguado2, Gabriela Aguilar3, Carlos Allende Prieto2 +361 more•Institutions (94)
TL;DR: SDSS-IV, the fourth generation of the Sloan Digital Sky Survey, has been in operation since July 2014; as discussed by the authors, this paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen, or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014-2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.
••
TL;DR: A major ocean plastic accumulation zone formed in subtropical waters between California and Hawaii: The Great Pacific Garbage Patch is characterised and quantified, suggesting that ocean plastic pollution within the GPGP is increasing exponentially and at a faster rate than in surrounding waters.
Abstract: Ocean plastic can persist in sea surface waters, eventually accumulating in remote areas of the world’s oceans. Here we characterise and quantify a major ocean plastic accumulation zone formed in subtropical waters between California and Hawaii: The Great Pacific Garbage Patch (GPGP). Our model, calibrated with data from multi-vessel and aircraft surveys, predicted at least 79 (45–129) thousand tonnes of ocean plastic are floating inside an area of 1.6 million km2; a figure four to sixteen times higher than previously reported. We explain this difference through the use of more robust methods to quantify larger debris. Over three-quarters of the GPGP mass was carried by debris larger than 5 cm and at least 46% was comprised of fishing nets. Microplastics accounted for 8% of the total mass but 94% of the estimated 1.8 (1.1–3.6) trillion pieces floating in the area. Plastic collected during our study has specific characteristics such as small surface-to-volume ratio, indicating that only certain types of debris have the capacity to persist and accumulate at the surface of the GPGP. Finally, our results suggest that ocean plastic pollution within the GPGP is increasing exponentially and at a faster rate than in surrounding waters.
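A back-of-envelope check of the abstract's numbers gives the mean mass of a floating microplastic piece in the GPGP, using only the central estimates quoted above.

```python
# Mean mass per floating microplastic piece, from the abstract's
# central estimates: 8% of 79,000 tonnes spread over 94% of 1.8
# trillion pieces.
micro_mass_mg = 0.08 * 79_000 * 1e9            # tonnes -> mg
micro_pieces = 0.94 * 1.8e12
print(round(micro_mass_mg / micro_pieces, 1))  # → 3.7 mg per piece
```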
••
TL;DR: CRISPOR tries to provide a comprehensive solution from selection, cloning and expression of guide RNA as well as providing primers needed for testing guide activity and potential off-targets.
Abstract: CRISPOR.org is a web tool for genome editing experiments with the CRISPR-Cas9 system. It finds guide RNAs in an input sequence and ranks them according to different scores that evaluate potential off-targets in the genome of interest and predict on-target activity. The list of genomes is continuously expanded, with more than 150 genomes added in the last two years. CRISPOR tries to provide a comprehensive solution, from selection, cloning, and expression of guide RNAs to providing the primers needed for testing guide activity and potential off-targets. Recent developments include batch design for genome-wide CRISPR and saturation screens, creating custom oligonucleotides for guide cloning, and the design of next generation sequencing primers to test for off-target mutations. CRISPOR is available from http://crispor.org, including the full source code of the website and a stand-alone, command-line version.
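The first step such a tool automates, enumerating candidate guides, reduces to finding 20-mers immediately 5' of an NGG PAM. A minimal forward-strand sketch; this is not CRISPOR's code, and its scoring, reverse-strand search, and off-target analysis are all omitted.

```python
import re

# Enumerate candidate SpCas9 guides: 20-mers followed by an NGG PAM
# (forward strand only; illustrative sketch, not CRISPOR's algorithm).
def find_guides(seq):
    pam = re.compile(r"(?=([ACGT]{20})[ACGT]GG)")  # lookahead allows overlaps
    return [(m.start(1), m.group(1)) for m in pam.finditer(seq)]

seq = "TTTACGTACGTACGTACGTACGTAGGTT"
print(find_guides(seq))  # → [(3, 'ACGTACGTACGTACGTACGT')]
```

CRISPOR then ranks such candidates by predicted on-target activity and genome-wide off-target scores.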
••
Hacettepe University1, Boston Children's Hospital2, Katholieke Universiteit Leuven3, University of Bologna4, Radboud University Nijmegen5, University of Aberdeen6, European Respiratory Society7, Claude Bernard University Lyon 18, Cardiff University9, University Hospital of Lausanne10, University of Queensland11, Ghent University12, University of Paris13, Istituto Giannina Gaslini14, Post Graduate Institute of Medical Education and Research15, Carlos III Health Institute16, National and Kapodistrian University of Athens17, University of Rennes18, University Hospital Heidelberg19, University College London20, Goethe University Frankfurt21, Catholic University of the Sacred Heart22, McGill University23
TL;DR: Treatment duration for aspergillosis should be guided by clinical improvement, degree of immunosuppression, and response on imaging; in refractory disease, a personalised approach considering reversal of predisposing factors, switching of drug class, and surgical intervention is strongly recommended.
••
TL;DR: Tarascon and Assat discuss the underlying science that triggers a reversible and stable anionic redox activity, highlight its practical limitations, and outline possible approaches for improving such materials and designing new ones.
Abstract: Our increasing dependence on lithium-ion batteries for energy storage calls for continual improvements in the performance of their positive electrodes, which have so far relied solely on cationic redox of transition-metal ions for driving the electrochemical reactions. Great hopes have recently been placed on the emergence of anionic redox—a transformational approach for designing positive electrodes as it leads to a near-doubling of capacity. But questions have been raised about the fundamental origins of anionic redox and whether its full potential can be realized in applications. In this Review, we discuss the underlying science that triggers a reversible and stable anionic redox activity. Furthermore, we highlight its practical limitations and outline possible approaches for improving such materials and designing new ones. We also summarize their chances for market implementation in the face of the competing nickel-based layered cathodes that are prevalent today. The discovery of anionic redox chemistry in Li-rich cathode materials provides much hope for enhancing battery performance. Tarascon and Assat analyse the underlying science behind anionic redox and discuss its practical limitations as well as the routes to overcome the application barriers.
••
15 Feb 2018
TL;DR: This work proposes a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space and effectively learns to translate without using any labeled data.
Abstract: Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.
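The denoising objective behind this approach can be made concrete: before the model reconstructs a sentence from the shared latent space, the input is corrupted by dropping words and locally shuffling the rest. A minimal sketch of such a noise function follows (the perturbed-index shuffle is one common way to implement a bounded local shuffle; the parameter values here are illustrative, not the paper's):

```python
import random

def add_noise(tokens, p_drop=0.1, k=3, rng=None):
    """Corrupt a token list for denoising autoencoding: drop each word
    with probability p_drop, then locally shuffle by sorting on the
    original index perturbed by uniform noise in [0, k], so words only
    move a bounded distance from their original position."""
    rng = rng or random.Random(0)
    kept = [t for t in tokens if rng.random() > p_drop]
    keys = [i + rng.uniform(0, k) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda x: x[0])]
```

With p_drop=0 and k=0 the function is the identity, which is a useful sanity check; the reconstruction loss then trains the shared encoder-decoder to undo this corruption in both languages.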
••
TL;DR: In this article, the authors review open literature publications on refractory high entropy alloys (RHEAs) and refractory complex concentrated alloys (RCCAs) in the period from 2010 to the end of January 2018.
Abstract: Open literature publications, in the period from 2010 to the end of January 2018, on refractory high entropy alloys (RHEAs) and refractory complex concentrated alloys (RCCAs) are reviewed. While RHEAs, by original definition, are alloys consisting of five or more principal elements with the concentration of each of these elements between 5 and 35 at.%, RCCAs can contain three or more principal elements and the element concentration can be greater than 35%. The 151 reported RHEAs/RCCAs are analyzed based on their composition, processing methods, microstructures, and phases. Mechanical properties, strengthening and deformation mechanisms, oxidation, and corrosion behavior, as well as tribology, of RHEA/RCCAs are summarized. Unique properties of some of these alloys make them promising candidates for high temperature applications beyond Ni-based superalloys and/or conventional refractory alloys. Methods of development and exploration, future directions of research and development, and potential applications of RHEAs are discussed.
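The compositional definitions quoted above are mechanical enough to express as a small classifier. The sketch below applies only the stated concentration rules; whether the elements are actually refractory is deliberately ignored, and `classify_alloy` is a hypothetical helper, not part of the review:

```python
def classify_alloy(composition):
    """Classify an alloy by the definitions in the text.
    composition: dict mapping element symbol -> atomic percent.
    RHEA: five or more principal elements, each between 5 and 35 at.%.
    RCCA: three or more principal elements; concentrations may exceed 35%."""
    principal = {el: c for el, c in composition.items() if c >= 5.0}
    if len(principal) >= 5 and all(5.0 <= c <= 35.0 for c in principal.values()):
        return "RHEA"
    if len(principal) >= 3:
        return "RCCA"
    return "neither"
```

For example, equiatomic NbMoTaWV satisfies the RHEA definition, while a three-element alloy with one concentration above 35 at.% falls only under the broader RCCA definition.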
••
Cedars-Sinai Medical Center1, Paris Descartes University2, University of Paris3, Imperial College London4, Autonomous University of Barcelona5, Westmead Hospital6, University of Alberta7, Harvard University8, Montefiore Medical Center9, Johns Hopkins University School of Medicine10, Geneva College11, Mayo Clinic12, University of Manitoba13, Johns Hopkins University14, University of Alabama15, Katholieke Universiteit Leuven16, University of North Carolina at Chapel Hill17, University of Pittsburgh18
TL;DR: The Banff ABMR criteria are updated, paving the way for the Banff scheme to be part of an integrative approach for defining surrogate endpoints in next-generation clinical trials.
••
TL;DR: Recent trends in the electrochemical advanced oxidation processes (EAOPs) used to remove dyes from water are reviewed, indicating that EAOPs constitute a promising technology for the treatment of dye-contaminated effluents.
••
Aix-Marseille University1, Spanish National Research Council2, University of Paris3, Sciences Po4, Technical University of Madrid5, The Cyprus Institute6, University of Salento7, Central Maine Community College8, University of Barcelona9, University of Haifa10, City University of Hong Kong11, University of Giessen12
TL;DR: A dedicated effort to synthesize existing scientific knowledge across disciplines is underway, aiming to provide a better understanding of the combined risks posed in the Mediterranean Basin, particularly for the most vulnerable southern Mediterranean societies, where fewer systematic observation schemes and impact models are based.
Abstract: Recent accelerated climate change has exacerbated existing environmental problems in the Mediterranean Basin that are caused by the combination of changes in land use, increasing pollution and declining biodiversity. For five broad and interconnected impact domains (water, ecosystems, food, health and security), current change and future scenarios consistently point to significant and increasing risks during the coming decades. Policies for the sustainable development of Mediterranean countries need to mitigate these risks and consider adaptation options, but currently lack adequate information — particularly for the most vulnerable southern Mediterranean societies, where fewer systematic observation schemes and impact models are based. A dedicated effort to synthesize existing scientific knowledge across disciplines is underway and aims to provide a better understanding of the combined risks posed.
••
TL;DR: A radiomic signature predictive of immunotherapy response was developed and validated by combining contrast-enhanced CT images with RNA-seq genomic data from tumour biopsies to assess CD8 cell tumour infiltration.
Abstract: Background. Because responses of patients with cancer to immunotherapy can vary in success, innovative predictors of response to treatment are urgently needed to improve treatment outcomes. We aimed to develop and independently validate a radiomics-based biomarker of tumour-infiltrating CD8 cells in patients included in phase 1 trials of anti-programmed cell death protein (PD)-1 or anti-programmed cell death ligand 1 (PD-L1) monotherapy. We also aimed to evaluate the association between the biomarker and the tumour immune phenotype and clinical outcomes of these patients.

Methods. In this retrospective multicohort study, we used four independent cohorts of patients with advanced solid tumours to develop and validate a radiomic signature predictive of immunotherapy response, combining contrast-enhanced CT images and RNA-seq genomic data from tumour biopsies to assess CD8 cell tumour infiltration. To develop the radiomic signature of CD8 cells, we used the CT images and RNA sequencing data of 135 patients with advanced solid malignant tumours who had been enrolled into the MOSCATO trial between May 1, 2012, and March 31, 2016, in France (training set). The genomic data, which are based on the CD8B gene, were used to estimate the abundance of CD8 cells in the samples, and the data were then aligned with the images to generate the radiomic signatures. The concordance of the radiomic signature (primary endpoint) was validated in The Cancer Genome Atlas (TCGA) dataset, including 119 patients who had available baseline preoperative imaging data and corresponding transcriptomic data on June 30, 2017. From 84 input variables used for the machine-learning method (78 radiomic features, five location variables, and one technical variable), a radiomics-based predictor of the CD8 cell expression signature was built by use of machine learning (elastic-net regularised regression method). Two other independent cohorts of patients with advanced solid tumours were used to evaluate this predictor. Patients in the immune-phenotype internal cohort (n=100) were randomly selected from the Gustave Roussy Cancer Campus database of patient medical records on the basis of previously described, extreme tumour-immune phenotypes: immune-inflamed (with dense CD8 cell infiltration) or immune-desert (with low CD8 cell infiltration), irrespective of treatment delivered; these data were used to analyse the correlation of the immune phenotype with this biomarker. Finally, the immunotherapy-treated dataset (n=137) of patients recruited from Dec 1, 2011, to Jan 31, 2014, at the Gustave Roussy Cancer Campus, who had been treated with anti-PD-1 or anti-PD-L1 monotherapy in phase 1 trials, was used to assess the predictive value of this biomarker in terms of clinical outcome.

Findings. We developed a radiomic signature for CD8 cells that included eight variables, which was validated with the gene expression signature of CD8 cells in the TCGA dataset (area under the curve [AUC]=0·67; 95% CI 0·57–0·77; p=0·0019). In the cohort with assumed immune phenotypes, the signature was also able to discriminate inflamed tumours from immune-desert tumours (0·76; 0·66–0·86; p

Interpretation. The radiomic signature of CD8 cells was validated in three independent cohorts. This imaging predictor provides a promising way to predict the immune phenotype of tumours and to infer clinical outcomes for patients with cancer treated with anti-PD-1 or anti-PD-L1. Our imaging biomarker could be useful in estimating CD8 cell count and predicting clinical outcomes of patients treated with immunotherapy, once validated by further prospective randomised trials.

Funding. Fondation pour la Recherche Medicale; SIRIC-SOCRATE 2.0; French Society of Radiation Oncology.
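The primary endpoint above is reported as an area under the ROC curve (AUC). The AUC is equivalent to the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counting one half), which a short function makes explicit. This is an illustrative sketch, not the study's code:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    case outranks the negative one, with ties counted as one half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))
```

On this reading, the reported AUC of 0·67 means a randomly chosen CD8-high case would outrank a randomly chosen CD8-low case about two times in three.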
••
TL;DR: In this paper, the authors provide guidelines on how to use parallaxes more efficiently to estimate distances with Bayesian methods, and provide examples that show more generally how to combine proper motions and parallaxes and handle covariances in the uncertainties.
Abstract: Context. The second Gaia data release (Gaia DR2) provides precise five-parameter astrometric data (positions, proper motions, and parallaxes) for an unprecedented number of sources (more than 1.3 billion, mostly stars). This new wealth of data will enable statistical analyses of many astrophysical problems that were previously infeasible for lack of reliable astrometry, and in particular for lack of parallaxes. However, the use of this wealth of astrometric data comes with a specific challenge: how can the astrophysical parameters of interest be properly inferred from these data?

Aims. The main, but not the only, focus of this paper is the estimation of distances from parallaxes, possibly combined with other information. We start with a critical review of the methods traditionally used to obtain distances from parallaxes and their shortcomings. We then provide guidelines on how to use parallaxes more efficiently to estimate distances with Bayesian methods. In particular, we show that negative parallaxes, or parallaxes with relatively large uncertainties, still contain valuable information. Finally, we provide examples that show more generally how to use astrometric data for parameter estimation, including the combination of proper motions and parallaxes and the handling of covariances in the uncertainties.

Methods. The paper contains examples based on simulated Gaia data to illustrate the problems and the solutions proposed. Furthermore, the developments and methods proposed in the paper are linked to a set of tutorials included in the Gaia archive documentation that provide practical examples and a good starting point for applying the recommendations to actual problems. In all cases the source code for the analysis methods is provided.

Results. Our main recommendation is to always treat the derivation of (astro-)physical parameters from astrometric data, in particular when parallaxes are involved, as an inference problem, which should preferably be handled with a full Bayesian approach.

Conclusions. Gaia will provide fundamental data for many fields of astronomy. Further data releases will provide more, and more precise, data. Nevertheless, to fully exploit this potential it will always be necessary to pay careful attention to the statistical treatment of parallaxes and proper motions. The purpose of this paper is to help astronomers find the correct approach.
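The recommended Bayesian treatment can be illustrated with a toy grid posterior over distance. The exponentially decreasing space density prior used below is one of the priors discussed in the literature around these recommendations; the length scale, grid range, and resolution are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def distance_posterior(parallax_mas, sigma_mas, L_pc=1350.0, r_max=1e5, n=200000):
    """Unnormalised posterior over distance r (in pc) for a measured
    parallax (in mas): Gaussian likelihood N(parallax; 1000/r, sigma)
    times an exponentially decreasing space density prior
    p(r) ∝ r^2 exp(-r/L). Well defined even for negative parallaxes.
    Returns the distance grid and the posterior mode."""
    r = np.linspace(1.0, r_max, n)
    log_post = 2 * np.log(r) - r / L_pc - 0.5 * ((parallax_mas - 1000.0 / r) / sigma_mas) ** 2
    post = np.exp(log_post - log_post.max())  # normalise the peak to 1
    return r, r[np.argmax(post)]
```

For a precise parallax (say 10 mas with 1% uncertainty) the posterior mode sits essentially at 1/parallax, while for a negative or very noisy parallax the prior keeps the posterior proper and still yields a finite distance estimate, which is the paper's key point.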
••
TL;DR: VigiBase, WHO's global database of individual case safety reports, was used to compare cardiovascular adverse event reporting in patients who received ICIs with that in the full database, and the association between ICIs and cardiovascular adverse events was evaluated.
Abstract: Background. Immune checkpoint inhibitors (ICIs) have substantially improved clinical outcomes in multiple cancer types and are increasingly being used in early disease settings and in combinations of different immunotherapies. However, ICIs can also cause severe or fatal immune-related adverse events (irAEs). We aimed to identify and characterise cardiovascular irAEs that are significantly associated with ICIs.

Methods. In this observational, retrospective, pharmacovigilance study, we used VigiBase, WHO's global database of individual case safety reports, to compare cardiovascular adverse event reporting in patients who received ICIs (ICI subgroup) with reporting in the full database. This study included all cardiovascular irAEs classified by group queries according to the Medical Dictionary for Regulatory Activities, between database inception on Nov 14, 1967, and Jan 2, 2018. We evaluated the association between ICIs and cardiovascular adverse events using the reporting odds ratio (ROR) and the information component (IC). The IC is an indicator of disproportionate Bayesian reporting that compares observed and expected values to find associations between drugs and adverse events. IC025 is the lower end of the IC 95% credibility interval, and an IC025 value of more than zero is deemed significant. This study is registered with ClinicalTrials.gov, number NCT03387540.

Findings. We identified 31 321 adverse events reported in patients who received ICIs and 16 343 451 adverse events reported in patients treated with any drug (full database) in VigiBase. Compared with the full database, ICI treatment was associated with higher reporting of myocarditis (5515 reports for the full database vs 122 for ICIs, ROR 11·21 [95% CI 9·36–13·43]; IC025 3·20), pericardial diseases (12 800 vs 95, 3·80 [3·08–4·62]; IC025 1·63), and vasculitis (33 289 vs 82, 1·56 [1·25–1·94]; IC025 0·03), including temporal arteritis (696 vs 18, 12·99 [8·12–20·77]; IC025 2·59) and polymyalgia rheumatica (1709 vs 16, 5·13 [3·13–8·40]; IC025 1·33). Pericardial diseases were reported more often in patients with lung cancer (49 [56%] of 87 patients), whereas myocarditis (42 [41%] of 103 patients) and vasculitis (42 [60%] of 70 patients) were more commonly reported in patients with melanoma (χ2 test for overall subgroup comparison, p 80%), with death occurring in 61 (50%) of 122 myocarditis cases, 20 (21%) of 95 pericardial disease cases, and five (6%) of 82 vasculitis cases (χ2 test for overall comparison between pericardial diseases, myocarditis, and vasculitis, p

Interpretation. Treatment with ICIs can lead to severe and disabling inflammatory cardiovascular irAEs soon after commencement of therapy. In addition to life-threatening myocarditis, these toxicities include pericardial diseases and temporal arteritis with a risk of blindness. These events should be considered in patient care and in combination clinical trial designs (ie, combinations of different immunotherapies, as well as immunotherapies and chemotherapy).

Funding. The Cancer Institut Thematique Multi-Organisme of the French National Alliance for Life and Health Sciences (AVIESAN) Plan Cancer 2014–2019; US National Cancer Institute, National Institutes of Health; the James C. Bradford Jr. Melanoma Fund; and the Melanoma Research Foundation.
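The two disproportionality statistics used in this study, the reporting odds ratio (ROR) and the information component (IC), can be sketched from a 2×2 contingency table. The IC025 formula below is a common approximation, not necessarily the exact VigiBase implementation:

```python
import math

def ror_ci(a, b, c, d):
    """Reporting odds ratio with a Wald 95% CI for a 2x2 table:
    a = target event with drug, b = other events with drug,
    c = target event without drug, d = other events without drug."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

def ic025(a, n_drug, n_event, n_total):
    """Simplified information component: log2 of observed over expected
    report counts with +0.5 shrinkage; the lower credibility bound is
    approximated as IC - 3.3/sqrt(a)."""
    expected = n_drug * n_event / n_total
    ic = math.log2((a + 0.5) / (expected + 0.5))
    return ic - 3.3 / math.sqrt(a)
```

With the myocarditis counts from the abstract (122 ICI reports, 5515 in the full database, 31 321 ICI events in total, 16 343 451 events overall), this sketch gives an ROR near 11 and an IC025 near 3, broadly in line with the reported 11·21 and 3·20, up to assumptions about how the full-database counts are partitioned between the ICI and non-ICI groups.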
••
TL;DR: An in-depth analysis of most of the deep neural networks (DNNs) proposed in the state of the art for image recognition, providing a complete view of which solutions have been explored so far and which research directions are worth exploring in the future.
Abstract: This paper presents an in-depth analysis of most of the deep neural networks (DNNs) proposed in the state of the art for image recognition. For each DNN, multiple performance indices are observed, such as recognition accuracy, model complexity, computational complexity, memory usage, and inference time. The behavior of these performance indices, and of some combinations of them, is analyzed and discussed. To measure the indices, we run the DNNs on two different computer architectures: a workstation equipped with an NVIDIA Titan X Pascal, and an embedded system based on an NVIDIA Jetson TX1 board. This allows a direct comparison between DNNs running on machines with very different computational capacities. The paper helps researchers gain a complete view of which solutions have been explored so far and which research directions are worth exploring in the future, and helps practitioners select the DNN architectures that best fit the resource constraints of practical deployments and applications. To complete this work, all the DNNs, as well as the software used for the analysis, are available online.
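The inference-time index measured in such comparisons amounts to repeatedly timing a forward pass on a fixed input batch, discarding warm-up runs so that one-off initialisation costs are not counted. A minimal, framework-agnostic sketch (the `benchmark` helper is illustrative, not the authors' tool):

```python
import statistics
import time

def benchmark(model, inputs, warmup=10, runs=50):
    """Measure mean and standard deviation of the latency (in seconds)
    of calling `model(inputs)`, after discarding warm-up runs."""
    for _ in range(warmup):
        model(inputs)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        model(inputs)
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), statistics.stdev(times)
```

The same harness works whether `model` wraps a GPU network or a tiny function, which is what makes latency comparable across machines with very different computational capacities.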
••
University of North Carolina at Chapel Hill1, University of Pennsylvania2, University of Colorado Denver3, Tel Aviv University4, Baylor University Medical Center5, Durham University6, University of California, San Diego7, Mayo Clinic8, Northwestern University9, Nestlé10, Tufts University11, Boston Children's Hospital12, Icahn School of Medicine at Mount Sinai13, University of Texas Southwestern Medical Center14, Cincinnati Children's Hospital Medical Center15, Baylor College of Medicine16, Nationwide Children's Hospital17, University of Paris18, University of Health Sciences Antigua19, University of Illinois at Urbana–Champaign20, Shimane University21, University Hospitals Coventry and Warwickshire NHS Trust22, Harvard University23, Juntendo University24, University of Ljubljana25, National and Kapodistrian University of Athens26, University of Utah27, University of Adelaide28, University of South Florida29, University of Lausanne30, University College London31, Kaiser Permanente32, University of Newcastle33, Vanderbilt University34, Vrije Universiteit Brussel35, Federal University of Paraná36, Children's Memorial Hospital37, University of Amsterdam38
TL;DR: An updated diagnostic algorithm for EoE was developed, with removal of the PPI trial requirement; the evidence suggests that PPIs are better classified as a treatment for esophageal eosinophilia that may be due to EoE than as a diagnostic criterion.
••
Pusan National University1, University of Hawaii at Manoa2, Yonsei University3, Pohang University of Science and Technology4, Commonwealth Scientific and Industrial Research Organisation5, Hobart Corporation6, Ocean University of China7, University of Colorado Boulder8, Earth System Research Laboratory9, Georgia Institute of Technology10, University of Paris11, Pacific Marine Environmental Laboratory12, University Corporation for Atmospheric Research13, University of Washington14, Geophysical Fluid Dynamics Laboratory15, Leibniz Institute of Marine Sciences16, National Taiwan University17, Utah State University18, Monash University, Clayton campus19, University of Mary Washington20, University of Reading21, Centre national de la recherche scientifique22, Chonnam National University23, Met Office24, Ulsan National Institute of Science and Technology25, Asia-Pacific Economic Cooperation26, Bureau of Meteorology27, China Meteorological Administration28, University of New South Wales29, University of Exeter30, Chinese Academy of Sciences31, Hanyang University32, Gwangju Institute of Science and Technology33
TL;DR: A synopsis of the current understanding of the spatio-temporal complexity of this important climate mode and its influence on the Earth system is provided and a unifying framework that identifies the key factors for this complexity is proposed.
Abstract: El Niño events are characterized by surface warming of the tropical Pacific Ocean and weakening of equatorial trade winds that occur every few years. Such conditions are accompanied by changes in atmospheric and oceanic circulation, affecting global climate, marine and terrestrial ecosystems, fisheries and human activities. The alternation of warm El Niño and cold La Niña conditions, referred to as the El Niño–Southern Oscillation (ENSO), represents the strongest year-to-year fluctuation of the global climate system. Here we provide a synopsis of our current understanding of the spatio-temporal complexity of this important climate mode and its influence on the Earth system.