
Showing papers by "École Normale Supérieure published in 2018"


Journal ArticleDOI
Clotilde Théry1, Kenneth W. Witwer2, Elena Aikawa3, María José Alcaraz4, +414 more (209 institutions)
TL;DR: The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities, and a checklist is provided with summaries of key points.
Abstract: The last decade has seen a sharp increase in the number of scientific publications describing physiological and pathological functions of extracellular vesicles (EVs), a collective term covering various subtypes of cell-released, membranous structures, called exosomes, microvesicles, microparticles, ectosomes, oncosomes, apoptotic bodies, and many other names. However, specific issues arise when working with these entities, whose size and amount often make them difficult to obtain as relatively pure preparations, and to characterize properly. The International Society for Extracellular Vesicles (ISEV) proposed Minimal Information for Studies of Extracellular Vesicles (“MISEV”) guidelines for the field in 2014. We now update these “MISEV2014” guidelines based on evolution of the collective knowledge in the last four years. An important point to consider is that ascribing a specific function to EVs in general, or to subtypes of EVs, requires reporting of specific information beyond mere description of function in a crude, potentially contaminated, and heterogeneous preparation. For example, claims that exosomes are endowed with exquisite and specific activities remain difficult to support experimentally, given our still limited knowledge of their specific molecular machineries of biogenesis and release, as compared with other biophysically similar EVs. The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities. Finally, a checklist is provided with summaries of key points.

5,988 citations


Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4, +229 more (70 institutions)
TL;DR: In this paper, the cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies were presented, with good consistency with the standard spatially-flat 6-parameter ΛCDM cosmology having a power-law spectrum of adiabatic scalar perturbations, from polarization, temperature, and lensing separately and in combination.
Abstract: We present cosmological parameter results from the final full-mission Planck measurements of the CMB anisotropies. We find good consistency with the standard spatially-flat 6-parameter $\Lambda$CDM cosmology having a power-law spectrum of adiabatic scalar perturbations (denoted "base $\Lambda$CDM" in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density $\Omega_c h^2 = 0.120\pm 0.001$, baryon density $\Omega_b h^2 = 0.0224\pm 0.0001$, scalar spectral index $n_s = 0.965\pm 0.004$, and optical depth $\tau = 0.054\pm 0.007$ (in this abstract we quote $68\,\%$ confidence regions on measured parameters and $95\,\%$ on upper limits). The angular acoustic scale is measured to $0.03\,\%$ precision, with $100\theta_*=1.0411\pm 0.0003$. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the base-$\Lambda$CDM cosmology, the inferred late-Universe parameters are: Hubble constant $H_0 = (67.4\pm 0.5)$ km/s/Mpc; matter density parameter $\Omega_m = 0.315\pm 0.007$; and matter fluctuation amplitude $\sigma_8 = 0.811\pm 0.006$. We find no compelling evidence for extensions to the base-$\Lambda$CDM model. Combining with BAO we constrain the effective extra relativistic degrees of freedom to be $N_{\rm eff} = 2.99\pm 0.17$, and the neutrino mass is tightly constrained to $\sum m_\nu < 0.12$ eV. The CMB spectra continue to prefer higher lensing amplitudes than predicted in base-$\Lambda$CDM at over $2\,\sigma$, which pulls some parameters that affect the lensing amplitude away from the base-$\Lambda$CDM model; however, this is not supported by the lensing reconstruction or (in models that also change the background geometry) BAO data. (Abridged)
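As a quick, illustrative sanity check (not part of the Planck analysis), the quoted densities can be combined by hand: with h = H0/100, Ωm ≈ (Ωc h² + Ωb h²)/h², up to a small neutrino contribution. A minimal Python sketch:

```python
# Rough consistency check of the quoted base-LambdaCDM parameters
# (illustrative only; ignores the small neutrino and radiation terms,
# so the result differs from Omega_m = 0.315 at the per-cent level).

omega_c_h2 = 0.120    # cold dark matter density, Omega_c h^2
omega_b_h2 = 0.0224   # baryon density, Omega_b h^2
H0 = 67.4             # Hubble constant in km/s/Mpc
h = H0 / 100.0

omega_m = (omega_c_h2 + omega_b_h2) / h**2
print(f"Omega_m ~= {omega_m:.3f}  (Planck 2018 quotes 0.315 +/- 0.007)")
```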

3,077 citations


Journal ArticleDOI
Corinne Le Quéré1, Robbie M. Andrew, Pierre Friedlingstein2, Stephen Sitch2, Judith Hauck3, Julia Pongratz4, Julia Pongratz5, Penelope A. Pickers1, Jan Ivar Korsbakken, Glen P. Peters, Josep G. Canadell6, Almut Arneth7, Vivek K. Arora, Leticia Barbero8, Leticia Barbero9, Ana Bastos5, Laurent Bopp10, Frédéric Chevallier11, Louise Chini12, Philippe Ciais11, Scott C. Doney13, Thanos Gkritzalis14, Daniel S. Goll11, Ian Harris1, Vanessa Haverd6, Forrest M. Hoffman15, Mario Hoppema3, Richard A. Houghton16, George C. Hurtt12, Tatiana Ilyina4, Atul K. Jain17, Truls Johannessen18, Chris D. Jones19, Etsushi Kato, Ralph F. Keeling20, Kees Klein Goldewijk21, Kees Klein Goldewijk22, Peter Landschützer4, Nathalie Lefèvre23, Sebastian Lienert24, Zhu Liu1, Zhu Liu25, Danica Lombardozzi26, Nicolas Metzl23, David R. Munro27, Julia E. M. S. Nabel4, Shin-Ichiro Nakaoka28, Craig Neill29, Craig Neill30, Are Olsen18, T. Ono, Prabir K. Patra31, Anna Peregon11, Wouter Peters32, Wouter Peters33, Philippe Peylin11, Benjamin Pfeil18, Benjamin Pfeil34, Denis Pierrot8, Denis Pierrot9, Benjamin Poulter35, Gregor Rehder36, Laure Resplandy37, Eddy Robertson19, Matthias Rocher11, Christian Rödenbeck4, Ute Schuster2, Jörg Schwinger34, Roland Séférian11, Ingunn Skjelvan34, Tobias Steinhoff38, Adrienne J. Sutton39, Pieter P. Tans39, Hanqin Tian40, Bronte Tilbrook30, Bronte Tilbrook29, Francesco N. Tubiello41, Ingrid T. van der Laan-Luijkx33, Guido R. van der Werf42, Nicolas Viovy11, Anthony P. Walker15, Andy Wiltshire19, Rebecca Wright1, Sönke Zaehle4, Bo Zheng11 
University of East Anglia1, University of Exeter2, Alfred Wegener Institute for Polar and Marine Research3, Max Planck Society4, Ludwig Maximilian University of Munich5, Commonwealth Scientific and Industrial Research Organisation6, Karlsruhe Institute of Technology7, Atlantic Oceanographic and Meteorological Laboratory8, Cooperative Institute for Marine and Atmospheric Studies9, École Normale Supérieure10, Centre national de la recherche scientifique11, University of Maryland, College Park12, University of Virginia13, Flanders Marine Institute14, Oak Ridge National Laboratory15, Woods Hole Research Center16, University of Illinois at Urbana–Champaign17, Geophysical Institute, University of Bergen18, Met Office19, University of California, San Diego20, Utrecht University21, Netherlands Environmental Assessment Agency22, University of Paris23, Oeschger Centre for Climate Change Research24, Tsinghua University25, National Center for Atmospheric Research26, Institute of Arctic and Alpine Research27, National Institute for Environmental Studies28, Cooperative Research Centre29, Hobart Corporation30, Japan Agency for Marine-Earth Science and Technology31, University of Groningen32, Wageningen University and Research Centre33, Bjerknes Centre for Climate Research34, Goddard Space Flight Center35, Leibniz Institute for Baltic Sea Research36, Princeton University37, Leibniz Institute of Marine Sciences38, National Oceanic and Atmospheric Administration39, Auburn University40, Food and Agriculture Organization41, VU University Amsterdam42
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use and land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2008–2017), EFF was 9.4±0.5 GtC yr−1, ELUC 1.5±0.7 GtC yr−1, GATM 4.7±0.02 GtC yr−1, SOCEAN 2.4±0.5 GtC yr−1, and SLAND 3.2±0.8 GtC yr−1, with a budget imbalance BIM of 0.5 GtC yr−1 indicating overestimated emissions and/or underestimated sinks. For the year 2017 alone, the growth in EFF was about 1.6% and emissions increased to 9.9±0.5 GtC yr−1. Also for 2017, ELUC was 1.4±0.7 GtC yr−1, GATM was 4.6±0.2 GtC yr−1, SOCEAN was 2.5±0.5 GtC yr−1, and SLAND was 3.8±0.8 GtC yr−1, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 405.0±0.1 ppm averaged over 2017. For 2018, preliminary data for the first 6–9 months indicate a renewed growth in EFF of +2.7% (range of 1.8% to 3.7%) based on national emission projections for China, the US, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. The analysis presented here shows that the mean and trend in the five components of the global carbon budget are consistently estimated over the period of 1959–2017, but discrepancies of up to 1 GtC yr−1 persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations show (1) no consensus in the mean and trend in land-use change emissions, (2) a persistent low agreement among the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models, originating outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018, 2016, 2015a, b, 2014, 2013). All results presented here can be downloaded from https://doi.org/10.18160/GCP-2018.
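The five components are tied together by the budget identity used in the paper, BIM = EFF + ELUC − (GATM + SOCEAN + SLAND). A minimal sketch using the 2008–2017 decadal means quoted above (differences from the quoted imbalance reflect rounding of the inputs):

```python
# Global carbon budget identity with the 2008-2017 decadal means quoted above
# (all in GtC/yr). B_IM = E_FF + E_LUC - (G_ATM + S_OCEAN + S_LAND); the ~0.1 GtC/yr
# difference from the quoted 0.5 GtC/yr comes from rounding the inputs.

E_FF, E_LUC = 9.4, 1.5                    # fossil and land-use change emissions
G_ATM, S_OCEAN, S_LAND = 4.7, 2.4, 3.2    # atmospheric growth, ocean and land sinks

B_IM = (E_FF + E_LUC) - (G_ATM + S_OCEAN + S_LAND)
print(f"budget imbalance B_IM = {B_IM:.1f} GtC/yr")   # -> 0.6 with these rounded inputs
```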

1,458 citations


Journal ArticleDOI
TL;DR: This review gathers the most recent advances concerning the central role of Trp metabolism in microbiota-host crosstalk in health and disease and aims to facilitate a better understanding of the pathogenesis of human diseases and open therapeutic opportunities.

1,172 citations


Posted ContentDOI
Spyridon Bakas1, Mauricio Reyes, Andras Jakab2, Stefan Bauer3, +435 more (111 institutions)
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
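Segmentation quality in evaluations of this kind is typically scored with overlap metrics such as the Dice coefficient (an assumption here; the snippet below is a generic illustration, not the challenge's official scoring code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks (e.g. one tumor sub-region)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# toy 3D example: two overlapping cubes in a small volume
pred = np.zeros((32, 32, 32)); pred[8:20, 8:20, 8:20] = 1
truth = np.zeros((32, 32, 32)); truth[10:22, 10:22, 10:22] = 1
print(f"Dice = {dice_coefficient(pred, truth):.3f}")
```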

1,165 citations


Proceedings ArticleDOI
20 May 2018
TL;DR: OmniLedger ensures security and correctness by using a bias-resistant public-randomness protocol for choosing large, statistically representative shards that process transactions, and by introducing an efficient cross-shard commit protocol that atomically handles transactions affecting multiple shards.
Abstract: Designing a secure permissionless distributed ledger (blockchain) that performs on par with centralized payment processors, such as Visa, is a challenging task. Most existing distributed ledgers are unable to scale-out, i.e., to grow their total processing capacity with the number of validators; and those that do, compromise security or decentralization. We present OmniLedger, a novel scale-out distributed ledger that preserves long-term security under permissionless operation. It ensures security and correctness by using a bias-resistant public-randomness protocol for choosing large, statistically representative shards that process transactions, and by introducing an efficient cross-shard commit protocol that atomically handles transactions affecting multiple shards. OmniLedger also optimizes performance via parallel intra-shard transaction processing, ledger pruning via collectively-signed state blocks, and low-latency "trust-but-verify" validation for low-value transactions. An evaluation of our experimental prototype shows that OmniLedger’s throughput scales linearly in the number of active validators, supporting Visa-level workloads and beyond, while confirming typical transactions in under two seconds.
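As a toy illustration only (not OmniLedger's actual protocol, which relies on a bias-resistant distributed randomness beacon and epoch transitions), the following sketch shows the basic idea that a shared public random seed lets every participant derive the same, statistically representative shard assignment:

```python
import hashlib

def assign_shards(validators: list[str], epoch_seed: bytes, n_shards: int) -> dict[int, list[str]]:
    """Toy illustration: derive a pseudorandom rank for each validator from a
    shared public seed, then deal validators round-robin into shards.
    (Not OmniLedger's actual protocol, which uses a bias-resistant
    distributed randomness beacon and runs over epochs.)"""
    ranked = sorted(
        validators,
        key=lambda v: hashlib.sha256(epoch_seed + v.encode()).digest(),
    )
    shards: dict[int, list[str]] = {i: [] for i in range(n_shards)}
    for i, v in enumerate(ranked):
        shards[i % n_shards].append(v)
    return shards

validators = [f"validator-{i}" for i in range(12)]
print(assign_shards(validators, b"public-epoch-seed", n_shards=3))
```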

856 citations


Journal ArticleDOI
TL;DR: In this paper, convolutional neural networks with long-term temporal convolutions (LTC-CNNs) were used to learn action representations from raw pixels and high-quality optical flow vector fields, achieving state-of-the-art results on two challenging benchmarks for action recognition.
Abstract: Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames failing to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw values of video pixels and optical flow vector fields and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition UCF101 (92.7%) and HMDB51 (67.2%).
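A minimal sketch of the long-term temporal convolution idea, assuming PyTorch; the 60-frame clip length follows the description above, while layer widths are illustrative rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

# Sketch of an LTC-style network: 3D convolutions over a 60-frame clip
# instead of a handful of frames. Layer widths are illustrative.
ltc = nn.Sequential(
    nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d((1, 2, 2)),                       # pool space, keep time early on
    nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(256, 101),                           # e.g. UCF101 classes
)

clip = torch.randn(2, 3, 60, 58, 58)   # (batch, RGB, 60 frames, H, W)
print(ltc(clip).shape)                 # -> torch.Size([2, 101])
```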

853 citations


Journal ArticleDOI
TL;DR: A convolutional neural network architecture that is trainable in an end-to-end manner directly for the place recognition task, and significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks.
Abstract: We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.
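A compact sketch of the NetVLAD-style aggregation step (soft-assign local descriptors to learnable centres, accumulate residuals, normalize), assuming PyTorch; sizes and initialization are illustrative, and the weakly supervised ranking loss is not shown:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetVLADSketch(nn.Module):
    """Illustrative NetVLAD-style pooling: soft-assignment of local CNN
    descriptors to K learnable centres, accumulation of residuals, then
    intra- and global L2 normalization. Sizes/init are illustrative."""
    def __init__(self, dim: int = 512, num_clusters: int = 64):
        super().__init__()
        self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1)  # soft-assignment logits
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, d, h, w = feats.shape
        soft = F.softmax(self.assign(feats), dim=1)        # (B, K, H, W)
        x = feats.flatten(2)                               # (B, D, N), N = H*W
        soft = soft.flatten(2)                             # (B, K, N)
        # residual of each descriptor to each centroid, weighted by its assignment
        residuals = x.unsqueeze(1) - self.centroids.unsqueeze(0).unsqueeze(-1)  # (B, K, D, N)
        vlad = (residuals * soft.unsqueeze(2)).sum(-1)     # (B, K, D)
        vlad = F.normalize(vlad, dim=2)                    # intra-normalization
        return F.normalize(vlad.flatten(1), dim=1)         # (B, K*D) global descriptor

desc = NetVLADSketch()(torch.randn(2, 512, 7, 7))
print(desc.shape)   # -> torch.Size([2, 32768])
```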

562 citations


Proceedings ArticleDOI
04 Sep 2018
TL;DR: In this article, a method to automatically and efficiently detect face tampering in videos is presented, focusing particularly on two recent techniques used to generate hyper-realistic forged videos: Deepfake and Face2Face.
Abstract: This paper presents a method to automatically and efficiently detect face tampering in videos, and particularly focuses on two recent techniques used to generate hyper-realistic forged videos: Deepfake and Face2Face. Traditional image forensics techniques are usually not well suited to videos due to the compression that strongly degrades the data. Thus, this paper follows a deep learning approach and presents two networks, both with a low number of layers to focus on the mesoscopic properties of images. We evaluate those fast networks on both an existing dataset and a dataset we have constituted from online videos. The tests demonstrate a very successful detection rate with more than 98% for Deepfake and 95% for Face2Face.
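A minimal sketch of a shallow, mesoscopic-scale classifier in the spirit described (a few convolutional layers and a binary real/forged output), assuming PyTorch; it does not reproduce the paper's exact Meso4/MesoInception-4 architectures:

```python
import torch
import torch.nn as nn

# Shallow binary classifier in the spirit of the "low number of layers /
# mesoscopic properties" idea; layer sizes are illustrative only.
meso_style = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),   nn.BatchNorm2d(8),  nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 8, 5, padding=2),   nn.BatchNorm2d(8),  nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 5, padding=2),  nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 16, 5, padding=2), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 16), nn.LeakyReLU(0.1),
    nn.Linear(16, 1),                 # real vs. forged logit
)

faces = torch.randn(4, 3, 256, 256)   # batch of aligned face crops
print(meso_style(faces).shape)        # -> torch.Size([4, 1])
```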

539 citations


Journal ArticleDOI
TL;DR: A global view is provided of the studies that evaluated nanoparticles of metal-organic frameworks for biomedical applications at the preclinical stage, where in vivo tests are described either for pharmacological applications or for toxicity evaluation.
Abstract: In the past few years, numerous studies have demonstrated the great potential of nanoparticles of metal-organic frameworks (nanoMOFs) at the preclinical level for biomedical applications. Many of them were reported very recently based on their bioactive composition, anticancer application, or from a general drug delivery/theranostic perspective. In this review, the authors aim at providing a global view of the studies that evaluated MOFs' biomedical applications at the preclinical stage, when in vivo tests are described either for pharmacological applications or for toxicity evaluation. The authors first describe the current surface engineering approaches that are crucial to understand the in vivo behavior of the nanoMOFs. Finally, after a detailed and comprehensive analysis of the in vivo studies reported with MOFs so far, and considering the general evolution of the drug delivery science, the authors suggest new directions for future research in the use of nanoMOFs for biomedical applications.

Journal ArticleDOI
TL;DR: The aim of this paper is first to explore the performance of DL architectures for the RS hyperspectral data set classification and second to introduce a new 3-D DL approach that enables a joint spectral and spatial information process.
Abstract: Recently, a variety of approaches have been enriching the field of remote sensing (RS) image processing and analysis. Unfortunately, existing methods remain limited to the rich spatiospectral content of today’s large data sets. It would seem intriguing to resort to deep learning (DL)-based approaches at this stage with regard to their ability to offer accurate semantic interpretation of the data. However, the specificity introduced by the coexistence of spectral and spatial content in the RS data sets widens the scope of the challenges presented to adapt DL methods to these contexts. Therefore, the aim of this paper is first to explore the performance of DL architectures for the RS hyperspectral data set classification and second to introduce a new 3-D DL approach that enables a joint spectral and spatial information process. A set of 3-D schemes is proposed and evaluated. Experimental results based on well-known hyperspectral data sets demonstrate that the proposed method is able to achieve a better classification rate than state-of-the-art methods with lower computational costs.
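A minimal sketch of the joint spectral-spatial idea, assuming PyTorch: a hyperspectral patch is treated as a (bands × height × width) volume and convolved with 3D kernels; band count, class count and layer sizes are illustrative, not the paper's exact 3-D schemes:

```python
import torch
import torch.nn as nn

# Joint spectral-spatial classification sketch: 3D kernels slide over both
# the spectral and spatial axes of a small hyperspectral patch.
n_bands, n_classes = 103, 9          # e.g. a Pavia University-like setting (assumed)
net3d = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
    nn.Conv3d(16, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, n_classes),
)

patch = torch.randn(8, 1, n_bands, 5, 5)   # (batch, 1 channel, bands, 5x5 spatial patch)
print(net3d(patch).shape)                  # -> torch.Size([8, 9])
```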

Proceedings Article
01 Jan 2018
TL;DR: AtlasNet, as mentioned in this paper, represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape.
Abstract: We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) autoencoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.
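A minimal sketch of one AtlasNet-style surface element, assuming PyTorch: an MLP maps a sampled 2D parameter point, concatenated with a shape latent, to a 3D point; the full method trains several such elements with a Chamfer-type loss, which is not shown:

```python
import torch
import torch.nn as nn

class SurfaceElementSketch(nn.Module):
    """One parametric surface element: maps (2D point in the unit square,
    shape latent) -> 3D point. Widths are illustrative; the full model uses
    several such elements and a Chamfer-style training loss."""
    def __init__(self, latent_dim: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3), nn.Tanh(),   # 3D coordinates
        )

    def forward(self, uv: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        # uv: (B, N, 2) sampled parameter points; latent: (B, latent_dim)
        lat = latent.unsqueeze(1).expand(-1, uv.shape[1], -1)
        return self.mlp(torch.cat([uv, lat], dim=-1))       # (B, N, 3)

element = SurfaceElementSketch()
points3d = element(torch.rand(2, 100, 2), torch.randn(2, 1024))
print(points3d.shape)   # -> torch.Size([2, 100, 3])
```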

Journal ArticleDOI
07 Mar 2018
TL;DR: This work introduces here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data.
Abstract: Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30 s of the signal, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also, exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost.
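A minimal sketch of the ingredients described for a single 30-s window of one modality, assuming PyTorch: a learned linear spatial filter across sensors, temporal convolutions, and logits over the five sleep stages; channel counts, sampling rate and filter sizes are illustrative:

```python
import torch
import torch.nn as nn

# Sketch for one 30-s window of one modality (e.g. EEG): a linear spatial
# filter across sensors, temporal convolutions, then logits over 5 stages.
n_channels, n_times, n_stages = 6, 3000, 5   # 6 EEG channels, 30 s at 100 Hz (assumed)
stager = nn.Sequential(
    nn.Conv1d(n_channels, 8, kernel_size=1),           # learned spatial filters (virtual channels)
    nn.Conv1d(8, 16, kernel_size=64, stride=8), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=16, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, n_stages),                            # logits; softmax at prediction time
)

window = torch.randn(4, n_channels, n_times)   # (batch, sensors, samples)
print(stager(window).shape)                    # -> torch.Size([4, 5])
```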

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott, T. D. Abbott, Sheelu Abraham, +1145 more (8 institutions)
TL;DR: In this article, the authors presented the results from three gravitational-wave searches for coalescing compact binaries with component masses above 1 solar mass during the first and second observing runs of the Advanced gravitational-wave detector network.
Abstract: We present the results from three gravitational-wave searches for coalescing compact binaries with component masses above 1$\mathrm{M}_\odot$ during the first and second observing runs of the Advanced gravitational-wave detector network. During the first observing run (O1), from September $12^\mathrm{th}$, 2015 to January $19^\mathrm{th}$, 2016, gravitational waves from three binary black hole mergers were detected. The second observing run (O2), which ran from November $30^\mathrm{th}$, 2016 to August $25^\mathrm{th}$, 2017, saw the first detection of gravitational waves from a binary neutron star inspiral, in addition to the observation of gravitational waves from a total of seven binary black hole mergers, four of which we report here for the first time: GW170729, GW170809, GW170818 and GW170823. For all significant gravitational-wave events, we provide estimates of the source properties. The detected binary black holes have total masses between $18.6_{-0.7}^{+3.2}\mathrm{M}_\odot$, and $84.4_{-11.1}^{+15.8} \mathrm{M}_\odot$, and range in distance between $320_{-110}^{+120}$ Mpc and $2840_{-1360}^{+1400}$ Mpc. No neutron star - black hole mergers were detected. In addition to highly significant gravitational-wave events, we also provide a list of marginal event candidates with an estimated false alarm rate less than 1 per 30 days. From these results over the first two observing runs, which include approximately one gravitational-wave detection per 15 days of data searched, we infer merger rates at the 90% confidence intervals of $110\, -\, 3840$ $\mathrm{Gpc}^{-3}\,\mathrm{y}^{-1}$ for binary neutron stars and $9.7\, -\, 101$ $\mathrm{Gpc}^{-3}\,\mathrm{y}^{-1}$ for binary black holes assuming fixed population distributions, and determine a neutron star - black hole merger rate 90% upper limit of $610$ $\mathrm{Gpc}^{-3}\,\mathrm{y}^{-1}$.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the extent to which diatoms contribute to the export of carbon varies by diatom type, with carbon transfer modulated by the Si/C ratio of diatom cells, the thickness of the shells and their life strategies; for instance, the tendency to form aggregates or resting spores.
Abstract: Diatoms sustain the marine food web and contribute to the export of carbon from the surface ocean to depth. They account for about 40% of marine primary productivity and particulate carbon exported to depth as part of the biological pump. Diatoms have long been known to be abundant in turbulent, nutrient-rich waters, but observations and simulations indicate that they are dominant also in meso- and submesoscale structures such as fronts and filaments, and in the deep chlorophyll maximum. Diatoms vary widely in size, morphology and elemental composition, all of which control the quality, quantity and sinking speed of biogenic matter to depth. In particular, their silica shells provide ballast to marine snow and faecal pellets, and can help transport carbon to both the mesopelagic layer and deep ocean. Herein we show that the extent to which diatoms contribute to the export of carbon varies by diatom type, with carbon transfer modulated by the Si/C ratio of diatom cells, the thickness of the shells and their life strategies; for instance, the tendency to form aggregates or resting spores. Model simulations project a decline in the contribution of diatoms to primary production everywhere outside of the Southern Ocean. We argue that we need to understand changes in diatom diversity, life cycle and plankton interactions in a warmer and more acidic ocean in much more detail to fully assess any changes in their contribution to the biological pump.

Journal ArticleDOI
TL;DR: In this article, the authors provide a critical summary of recent work on turbulent flows from a unified point of view and present a classification of all known transfer mechanisms, including direct and inverse energy cascades.


Journal ArticleDOI
TL;DR: It is concluded that 3D genome reorganization generally precedes gene expression changes and that removal of locus-specific topological barriers explains why pluripotency genes are activated sequentially during reprogramming.
Abstract: Chromosomal architecture is known to influence gene expression, yet its role in controlling cell fate remains poorly understood. Reprogramming of somatic cells into pluripotent stem cells (PSCs) by the transcription factors (TFs) OCT4, SOX2, KLF4 and MYC offers an opportunity to address this question but is severely limited by the low proportion of responding cells. We have recently developed a highly efficient reprogramming protocol that synchronously converts somatic into pluripotent stem cells. Here, we used this system to integrate time-resolved changes in genome topology with gene expression, TF binding and chromatin-state dynamics. The results showed that TFs drive topological genome reorganization at multiple architectural levels, often before changes in gene expression. Removal of locus-specific topological barriers can explain why pluripotency genes are activated sequentially, instead of simultaneously, during reprogramming. Together, our results implicate genome topology as an instructive force for implementing transcriptional programs and cell fate in mammals.

Journal ArticleDOI
TL;DR: The current understanding of the functions of AhR in the mucosal immune system is reviewed with a focus on its role in intestinal barrier function and intestinal immune cells, as well as in intestinal homeostasis.

Journal ArticleDOI
TL;DR: It is shown that B. wadsworthia aggravates high-fat-diet-induced metabolic dysfunctions, and that its suppression, whether pharmacological or mediated by Lactobacillus rhamnosus, limits the severity of metabolic impairment.
Abstract: Dietary lipids favor the growth of the pathobiont Bilophila wadsworthia, but the relevance of this expansion in metabolic syndrome pathogenesis is poorly understood. Here, we showed that B. wadsworthia synergizes with high fat diet (HFD) to promote higher inflammation, intestinal barrier dysfunction and bile acid dysmetabolism, leading to higher glucose dysmetabolism and hepatic steatosis. Host-microbiota transcriptomics analysis reveals pathways, particularly butanoate metabolism, which may underlie the metabolic effects mediated by B. wadsworthia. Pharmacological suppression of B. wadsworthia-associated inflammation demonstrates the bacterium's intrinsic capacity to induce a negative impact on glycemic control and hepatic function. Administration of the probiotic Lactobacillus rhamnosus CNCM I-3690 limits B. wadsworthia-induced immune and metabolic impairment by limiting its expansion, reducing inflammation and reinforcing the intestinal barrier. Our results suggest a new avenue for interventions against western diet-driven inflammatory and metabolic diseases.

Journal ArticleDOI
Hugh McColl1, Fernando Racimo1, Lasse Vinner1, Fabrice Demeter2, Takashi Gakuhari3, Takashi Gakuhari4, J. Víctor Moreno-Mayar1, George van Driem5, George van Driem6, Uffe Gram Wilken1, Andaine Seguin-Orlando1, Andaine Seguin-Orlando7, Constanza de la Fuente Castro1, Sally Wasef8, Rasmi Shoocongdej9, Viengkeo Souksavatdy, Thongsa Sayavongkhamdy, Mokhtar Saidin10, Morten E. Allentoft1, Takehiro Sato4, Anna-Sapfo Malaspinas11, Farhang Aghakhanian12, Thorfinn Sand Korneliussen1, Ana Prohaska13, Ashot Margaryan14, Ashot Margaryan2, Peter de Barros Damgaard1, Supannee Kaewsutthi15, Patcharee Lertrit15, Thi Mai Huong Nguyen, Hsiao-chun Hung16, Thi Minh Tran, Huu Nghia Truong, Giang Hai Nguyen, Shaiful Shahidan10, Ketut Wiradnyana, Hiromi Matsumae3, Nobuo Shigehara17, Minoru Yoneda18, Hajime Ishida19, Tadayuki Masuyama, Yasuhiro Yamada20, Atsushi Tajima4, Hiroki Shibata21, Atsushi Toyoda22, Tsunehiko Hanihara3, Shigeki Nakagome23, Thibaut Devièse24, Anne-Marie Bacon25, Philippe Duringer26, Jean Luc Ponche26, Laura L. Shackelford27, Elise Patole-Edoumba1, Anh Nguyen, Bérénice Bellina-Pryce28, Jean Christophe Galipaud29, Rebecca Kinaston30, Rebecca Kinaston31, Hallie R. Buckley31, Christophe Pottier32, Silas Anselm Rasmussen33, Thomas Higham24, Robert Foley13, Marta Mirazón Lahr13, Ludovic Orlando7, Ludovic Orlando1, Martin Sikora1, Maude E. Phipps12, Hiroki Oota3, Charles Higham13, Charles Higham31, David M. Lambert8, Eske Willerslev13, Eske Willerslev1, Eske Willerslev34 
06 Jul 2018-Science
TL;DR: Neither interpretation fits the complexity of Southeast Asian history: Both Hòabìnhian hunter-gatherers and East Asian farmers contributed to current Southeast Asian diversity, with further migrations affecting island SEA and Vietnam.
Abstract: The human occupation history of Southeast Asia (SEA) remains heavily debated. Current evidence suggests that SEA was occupied by Hoabinhian hunter-gatherers until ~4000 years ago, when farming economies developed and expanded, restricting foraging groups to remote habitats. Some argue that agricultural development was indigenous; others favor the "two-layer" hypothesis that posits a southward expansion of farmers giving rise to present-day Southeast Asian genetic diversity. By sequencing 26 ancient human genomes (25 from SEA, 1 Japanese Jōmon), we show that neither interpretation fits the complexity of Southeast Asian history: Both Hoabinhian hunter-gatherers and East Asian farmers contributed to current Southeast Asian diversity, with further migrations affecting island SEA and Vietnam. Our results help resolve one of the long-standing controversies in Southeast Asian prehistory.

Book ChapterDOI
08 Sep 2018
TL;DR: This work presents a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks which jointly encode 3D shapes and correspondences, and shows that this method is robust to many types of perturbations, and generalizes to non-human shapes.
Abstract: We present a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks which jointly encode 3D shapes and correspondences. This is achieved by factoring the surface representation into (i) a template, that parameterizes the surface, and (ii) a learnt global feature vector that parameterizes the transformation of the template into the input surface. By predicting this feature for a new shape, we implicitly predict correspondences between this shape and the template. We show that these correspondences can be improved by an additional step which improves the shape feature by minimizing the Chamfer distance between the input and transformed template. We demonstrate that our simple approach improves on state-of-the-art results on the difficult FAUST-inter challenge, with an average correspondence error of 2.88 cm. We show, on the TOSCA dataset, that our method is robust to many types of perturbations, and generalizes to non-human shapes. This robustness allows it to perform well on real, unclean meshes from the SCAPE dataset.
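The refinement step mentioned above minimizes a Chamfer distance between the input and the deformed template; a generic sketch of that objective, assuming PyTorch:

```python
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between two point sets a (N, 3) and b (M, 3):
    mean squared distance from each point to its nearest neighbour in the
    other set. Generic formulation, shown only as a sketch of the refinement
    objective described above."""
    d = torch.cdist(a, b) ** 2            # (N, M) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

template_pts = torch.rand(1000, 3, requires_grad=True)   # deformed template (toy)
target_pts = torch.rand(1200, 3)                          # input scan (toy)
loss = chamfer_distance(template_pts, target_pts)
loss.backward()                                           # gradients for the refinement step
print(float(loss))
```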

Journal ArticleDOI
TL;DR: This article used a metatranscriptomics approach to capture expressed genes in open ocean Tara Oceans stations across four organismal size fractions, and the individual sequence reads cluster into 116 million unigenes representing the largest reference collection of eukaryotic transcripts from any single biome.
Abstract: While our knowledge about the roles of microbes and viruses in the ocean has increased tremendously due to recent advances in genomics and metagenomics, research on marine microbial eukaryotes and zooplankton has benefited much less from these new technologies because of their larger genomes, their enormous diversity, and largely unexplored physiologies. Here, we use a metatranscriptomics approach to capture expressed genes in open ocean Tara Oceans stations across four organismal size fractions. The individual sequence reads cluster into 116 million unigenes representing the largest reference collection of eukaryotic transcripts from any single biome. The catalog is used to unveil functions expressed by eukaryotic marine plankton, and to assess their functional biogeography. Almost half of the sequences have no similarity with known proteins, and a great number belong to new gene families with a restricted distribution in the ocean. Overall, the resource provides the foundations for exploring the roles of marine eukaryotes in ocean ecology and biogeochemistry.

Posted Content
TL;DR: In this article, the Sinkhorn divergences, a family of geometric divergences that interpolates between Maximum Mean Discrepancies (MMD) and Optimal Transport distances (OT), are studied.
Abstract: Comparing probability distributions is a fundamental problem in data sciences. Simple norms and divergences such as the total variation and the relative entropy only compare densities in a point-wise manner and fail to capture the geometric nature of the problem. In sharp contrast, Maximum Mean Discrepancies (MMD) and Optimal Transport distances (OT) are two classes of distances between measures that take into account the geometry of the underlying space and metrize the convergence in law. This paper studies the Sinkhorn divergences, a family of geometric divergences that interpolates between MMD and OT. Relying on a new notion of geometric entropy, we provide theoretical guarantees for these divergences: positivity, convexity and metrization of the convergence in law. On the practical side, we detail a numerical scheme that enables the large scale application of these divergences for machine learning: on the GPU, gradients of the Sinkhorn loss can be computed for batches of a million samples.
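For reference, the family is commonly built from entropy-regularized optimal transport with a debiasing correction; the display below is the standard formulation (notation assumed here, not quoted from the paper), with ε the regularization strength and C the ground cost:

```latex
% Entropy-regularized OT and the debiased Sinkhorn divergence (standard notation,
% quoted here for reference; \varepsilon is the regularization strength).
\begin{align*}
\mathrm{OT}_\varepsilon(\alpha,\beta)
  &= \min_{\pi \in \Pi(\alpha,\beta)} \int C(x,y)\,\mathrm{d}\pi(x,y)
     + \varepsilon\, \mathrm{KL}\left(\pi \,\middle\|\, \alpha \otimes \beta\right), \\
S_\varepsilon(\alpha,\beta)
  &= \mathrm{OT}_\varepsilon(\alpha,\beta)
     - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\alpha,\alpha)
     - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\beta,\beta).
\end{align*}
% As \varepsilon \to 0 this recovers OT; as \varepsilon \to \infty it approaches an MMD-type distance.
```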

Journal ArticleDOI
TL;DR: The GEOTRACES Intermediate Data Product 2017 (IDP2017) as discussed by the authors is the second publicly available data product of the international GEOTRACES programme, and contains data measured and quality controlled before the end of 2016.

Journal ArticleDOI
08 Aug 2018-Neuron
TL;DR: This work studies a class of recurrent network models in which the connectivity is a sum of a random part and a minimal, low-dimensional structure and shows that the dynamics are low dimensional and can be directly inferred from connectivity using a geometrical approach.
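A minimal numpy sketch of the model class described: connectivity that is the sum of a scaled random matrix and a rank-one term, with rate dynamics integrated by Euler steps; all parameter values, and the chosen overlap between the structure vectors, are illustrative:

```python
import numpy as np

# Recurrent network whose connectivity is a random part plus a rank-one
# structure, J = g*chi/sqrt(N) + m n^T / N, with dynamics dx/dt = -x + J*tanh(x).
rng = np.random.default_rng(0)
N, g, dt, T = 500, 0.8, 0.1, 100.0
chi = rng.standard_normal((N, N))
m = rng.standard_normal(N)
n = 2.0 * m + rng.standard_normal(N)      # overlap with m gives the structure an eigenvalue > 1
J = g * chi / np.sqrt(N) + np.outer(m, n) / N

x = 0.1 * rng.standard_normal(N)
for _ in range(int(T / dt)):              # Euler integration of the rate dynamics
    x += dt * (-x + J @ np.tanh(x))

kappa = m @ np.tanh(x) / N                # projection of the rates onto the structured direction
print(f"low-dimensional overlap kappa = {kappa:.3f}")
```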

Journal ArticleDOI
TL;DR: In this article, the entanglement spectra of the infinite tower of states of the spin-S AKLT models were analyzed in the zero and finite energy density limits, and in the zero energy density limit they were shown to be multiple shifted copies of the ground-state entanglement spectrum in the thermodynamic limit.
Abstract: We obtain multiple exact results on the entanglement of the exact excited states of nonintegrable models we introduced in Phys. Rev. B 98, 235155 (2018), doi:10.1103/PhysRevB.98.235155. We first discuss a general formalism to analytically compute the entanglement spectra of exact excited states using matrix product states and matrix product operators and illustrate the method by reproducing a general result on single-mode excitations. We then apply this technique to analytically obtain the entanglement spectra of the infinite tower of states of the spin-S AKLT models in the zero and finite energy density limits. We show that in the zero energy density limit, the entanglement spectra of the tower of states are multiple shifted copies of the ground-state entanglement spectrum in the thermodynamic limit. We show that such a resemblance is destroyed at any nonzero energy density. Furthermore, the entanglement entropy S of the states of the tower that are in the bulk of the spectrum is subthermal, S ∝ log L, as opposed to a volume law S ∝ L, thus indicating a violation of the strong eigenstate thermalization hypothesis (ETH). These states are examples of what are now called many-body scars. Finally, we analytically study the finite-size effects and symmetry-protected degeneracies in the entanglement spectra of the excited states, extending the existing theory.

Proceedings Article
31 Mar 2018
TL;DR: In this paper, the authors propose a method to train large scale generative models using an optimal transport loss, which is based on two key ideas: (a) entropic smoothing, which turns the original OT loss into one that can be computed using Sinkhorn fixed point iterations; (b) algorithmic (automatic) differentiation of these iterations.
Abstract: The ability to compare two degenerate probability distributions (i.e. two probability distributions supported on two distinct low-dimensional manifolds living in a much higher-dimensional space) is a crucial problem arising in the estimation of generative models for high-dimensional observations such as those arising in computer vision or natural language. It is known that optimal transport metrics can represent a cure for this problem, since they were specifically designed as an alternative to information divergences to handle such problematic scenarios. Unfortunately, training generative machines using OT raises formidable computational and statistical challenges, because of (i) the computational burden of evaluating OT losses, (ii) the instability and lack of smoothness of these losses, (iii) the difficulty to estimate robustly these losses and their gradients in high dimension. This paper presents the first tractable computational method to train large scale generative models using an optimal transport loss, and tackles these three issues by relying on two key ideas: (a) entropic smoothing, which turns the original OT loss into one that can be computed using Sinkhorn fixed point iterations; (b) algorithmic (automatic) differentiation of these iterations. These two approximations result in a robust and differentiable approximation of the OT loss with streamlined GPU execution. Entropic smoothing generates a family of losses interpolating between Wasserstein (OT) and Maximum Mean Discrepancy (MMD), thus allowing to find a sweet spot leveraging the geometry of OT and the favorable high-dimensional sample complexity of MMD which comes with unbiased gradient estimates. The resulting computational architecture complements nicely standard deep network generative models by a stack of extra layers implementing the loss function.
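A compact sketch of the two ideas named above, assuming PyTorch: entropic smoothing via Sinkhorn fixed-point iterations, made differentiable by letting autograd backpropagate through the iterations; ε, iteration counts and the toy generator are illustrative, and the paper's full procedure is not reproduced:

```python
import torch

def sinkhorn_loss(x: torch.Tensor, y: torch.Tensor, eps: float = 0.5, n_iter: int = 50) -> torch.Tensor:
    """Entropy-regularized OT cost between point clouds x (n, d) and y (m, d),
    computed with Sinkhorn fixed-point iterations; autograd simply
    backpropagates through the iterations. Illustrative sketch only."""
    n, m = x.shape[0], y.shape[0]
    a = torch.full((n,), 1.0 / n)
    b = torch.full((m,), 1.0 / m)
    C = torch.cdist(x, y) ** 2               # squared-Euclidean ground cost
    K = torch.exp(-C / eps)
    u = torch.ones_like(a)
    for _ in range(n_iter):                  # Sinkhorn fixed-point iterations
        v = b / (K.t() @ u)
        u = a / (K @ v)
    P = u.unsqueeze(1) * K * v.unsqueeze(0)  # transport plan
    return (P * C).sum()

# Toy "generator": fit a linear map of Gaussian noise to a shifted target cloud.
gen = torch.nn.Linear(2, 2)
target = 0.5 * torch.randn(256, 2) + 1.0
opt = torch.optim.Adam(gen.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = sinkhorn_loss(gen(torch.randn(256, 2)), target)
    loss.backward()
    opt.step()
print(f"final Sinkhorn loss: {loss.item():.4f}")
```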

Proceedings Article
02 Dec 2018
TL;DR: An end-to-end trainable convolutional neural network architecture that identifies sets of spatially consistent matches by analyzing neighbourhood consensus patterns in the 4D space of all possible correspondences between a pair of images without the need for a global geometric model is developed.
Abstract: We address the problem of finding reliable dense correspondences between a pair of images. This is a challenging task due to strong appearance differences between the corresponding scene elements and ambiguities generated by repetitive patterns. The contributions of this work are threefold. First, inspired by the classic idea of disambiguating feature matches using semi-local constraints, we develop an end-to-end trainable convolutional neural network architecture that identifies sets of spatially consistent matches by analyzing neighbourhood consensus patterns in the 4D space of all possible correspondences between a pair of images without the need for a global geometric model. Second, we demonstrate that the model can be trained effectively from weak supervision in the form of matching and non-matching image pairs without the need for costly manual annotation of point to point correspondences. Third, we show the proposed neighbourhood consensus network can be applied to a range of matching tasks including both category- and instance-level matching, obtaining the state-of-the-art results on the PF Pascal dataset and the InLoc indoor visual localization benchmark.
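A minimal sketch of the first stage described, assuming PyTorch: normalize dense features from the two images and build the 4D tensor of all pairwise match scores, which the method then filters with learned 4D neighbourhood-consensus convolutions (not shown):

```python
import torch
import torch.nn.functional as F

def correlation_4d(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """All-pairs match scores between dense features of two images.
    feat_a, feat_b: (B, D, H, W) -> returns (B, H, W, H, W), where
    out[b, i, j, k, l] is the cosine similarity between location (i, j)
    in image A and (k, l) in image B."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    return torch.einsum('bdij,bdkl->bijkl', a, b)

fa, fb = torch.randn(1, 256, 25, 25), torch.randn(1, 256, 25, 25)
corr = correlation_4d(fa, fb)
print(corr.shape)   # -> torch.Size([1, 25, 25, 25, 25])
```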