
Showing papers by "École Normale Supérieure" published in 2019


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott, T. D. Abbott, Sheelu Abraham +1145 more · Institutions (8)
TL;DR: In this paper, the authors presented the results from three gravitational-wave searches for coalescing compact binaries with component masses above 1 M☉ during the first and second observing runs of the advanced GW detector network.
Abstract: We present the results from three gravitational-wave searches for coalescing compact binaries with component masses above 1 M☉ during the first and second observing runs of the advanced gravitational-wave detector network. During the first observing run (O1), from September 12, 2015 to January 19, 2016, gravitational waves from three binary black hole mergers were detected. The second observing run (O2), which ran from November 30, 2016 to August 25, 2017, saw the first detection of gravitational waves from a binary neutron star inspiral, in addition to the observation of gravitational waves from a total of seven binary black hole mergers, four of which we report here for the first time: GW170729, GW170809, GW170818, and GW170823. For all significant gravitational-wave events, we provide estimates of the source properties. The detected binary black holes have total masses between 18.6^{+3.2}_{−0.7} M☉ and 84.4^{+15.8}_{−11.1} M☉ and range in distance between 320^{+120}_{−110} and 2840^{+1400}_{−1360} Mpc. No neutron star–black hole mergers were detected. In addition to highly significant gravitational-wave events, we also provide a list of marginal event candidates with an estimated false-alarm rate less than 1 per 30 days. From these results over the first two observing runs, which include approximately one gravitational-wave detection per 15 days of data searched, we infer merger rates at the 90% confidence intervals of 110–3840 Gpc⁻³ yr⁻¹ for binary neutron stars and 9.7–101 Gpc⁻³ yr⁻¹ for binary black holes assuming fixed population distributions, and determine a neutron star–black hole merger rate 90% upper limit of 610 Gpc⁻³ yr⁻¹.

2,336 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a detailed overview and historical perspective on state-of-the-art solutions, and elaborate on the fundamental differences with other technologies, the most important open research issues to tackle, and the reasons why the use of reconfigurable intelligent surfaces necessitates rethinking the communication-theoretic models currently employed in wireless networks.
Abstract: The future of mobile communications looks exciting with the potential new use cases and challenging requirements of future 6th generation (6G) and beyond wireless networks. Since the beginning of the modern era of wireless communications, the propagation medium has been perceived as a randomly behaving entity between the transmitter and the receiver, which degrades the quality of the received signal due to the uncontrollable interactions of the transmitted radio waves with the surrounding objects. The recent advent of reconfigurable intelligent surfaces in wireless communications enables, on the other hand, network operators to control the scattering, reflection, and refraction characteristics of the radio waves, by overcoming the negative effects of natural wireless propagation. Recent results have revealed that reconfigurable intelligent surfaces can effectively control the wavefront, e.g., the phase, amplitude, frequency, and even polarization, of the impinging signals without the need of complex decoding, encoding, and radio frequency processing operations. Motivated by the potential of this emerging technology, the present article is aimed to provide the readers with a detailed overview and historical perspective on state-of-the-art solutions, and to elaborate on the fundamental differences with other technologies, the most important open research issues to tackle, and the reasons why the use of reconfigurable intelligent surfaces necessitates to rethink the communication-theoretic models currently employed in wireless networks. This article also explores theoretical performance limits of reconfigurable intelligent surface-assisted communication systems using mathematical techniques and elaborates on the potential use cases of intelligent surfaces in 6G and beyond wireless networks.
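The passage above is qualitative; the following toy calculation (an illustrative sketch, not taken from the article, with randomly drawn channel coefficients h and g as assumptions) shows the basic mechanism of wavefront control: when each reflecting element applies a phase that co-aligns its path, the reflected contributions add coherently at the receiver, and the received power grows roughly with the square of the number of elements.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                 # number of reflecting elements (illustrative)
h = rng.normal(size=N) + 1j * rng.normal(size=N)       # transmitter-to-surface channels
g = rng.normal(size=N) + 1j * rng.normal(size=N)       # surface-to-receiver channels

# Received signal (unit transmit power) is sum_n g_n * exp(j*theta_n) * h_n.
random_phases = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
aligned_phases = np.exp(-1j * np.angle(h * g))         # co-phase every reflected path

p_random = abs(np.sum(h * g * random_phases)) ** 2
p_aligned = abs(np.sum(h * g * aligned_phases)) ** 2
print(f"random phases:  {p_random:10.1f}")
print(f"aligned phases: {p_aligned:10.1f}   (grows roughly as N^2)")
```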

2,021 citations


Journal ArticleDOI
TL;DR: In this 8th release of JASPAR, the CORE collection has been expanded with 245 new PFMs and 156 PFMs have been updated; the genomic tracks, inference tool, and TF-binding profile similarity clusters have also been updated.
Abstract: JASPAR (http://jaspar.genereg.net) is an open-access database of curated, non-redundant transcription factor (TF)-binding profiles stored as position frequency matrices (PFMs) for TFs across multiple species in six taxonomic groups. In this 8th release of JASPAR, the CORE collection has been expanded with 245 new PFMs (169 for vertebrates, 42 for plants, 17 for nematodes, 10 for insects, and 7 for fungi), and 156 PFMs were updated (125 for vertebrates, 28 for plants and 3 for insects). These new profiles represent an 18% expansion compared to the previous release. JASPAR 2020 comes with a novel collection of unvalidated TF-binding profiles for which our curators did not find orthogonal supporting evidence in the literature. This collection has a dedicated web form to engage the community in the curation of unvalidated TF-binding profiles. Moreover, we created a Q&A forum to ease the communication between the user community and JASPAR curators. Finally, we updated the genomic tracks, inference tool, and TF-binding profile similarity clusters. All the data is available through the JASPAR website, its associated RESTful API, and through the JASPAR2020 R/Bioconductor package.
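To illustrate what the stored PFMs represent (a generic, self-contained sketch; the toy matrix, pseudocount, and scoring function below are illustrative and not part of the JASPAR database or its API), a position frequency matrix can be converted into a log-odds position weight matrix and used to score a candidate binding site:

```python
import numpy as np

# Toy 4 x 5 position frequency matrix (rows: A, C, G, T); values are observed counts.
pfm = np.array([
    [ 3, 21,  0,  0,  1],
    [ 8,  0, 23,  0,  0],
    [10,  2,  0, 23,  0],
    [ 2,  0,  0,  0, 22],
], dtype=float)

pseudocount = 0.8
background = 0.25                        # uniform background frequency per nucleotide

counts = pfm + pseudocount / 4           # spread the pseudocount over the 4 bases
probs = counts / counts.sum(axis=0)      # position probability matrix
pwm = np.log2(probs / background)        # log-odds position weight matrix

def score(seq, pwm, alphabet="ACGT"):
    """Sum the log-odds of each base at each motif position."""
    return sum(pwm[alphabet.index(b), i] for i, b in enumerate(seq))

print(np.round(pwm, 2))
print("score('GACGT') =", round(score("GACGT", pwm), 2))   # consensus sequence of the toy motif
```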

1,219 citations


Journal ArticleDOI
Pierre Friedlingstein1, Pierre Friedlingstein2, Matthew W. Jones3, Michael O'Sullivan2, Robbie M. Andrew, Judith Hauck4, Glen P. Peters, Wouter Peters5, Wouter Peters6, Julia Pongratz7, Julia Pongratz8, Stephen Sitch2, Corinne Le Quéré3, Dorothee C. E. Bakker3, Josep G. Canadell9, Philippe Ciais10, Robert B. Jackson11, Peter Anthoni12, Leticia Barbero13, Leticia Barbero14, Ana Bastos7, Vladislav Bastrikov10, Meike Becker15, Meike Becker16, Laurent Bopp1, Erik T. Buitenhuis3, Naveen Chandra17, Frédéric Chevallier10, Louise Chini18, Kim I. Currie19, Richard A. Feely20, Marion Gehlen10, Dennis Gilfillan21, Thanos Gkritzalis22, Daniel S. Goll23, Nicolas Gruber24, Sören B. Gutekunst25, Ian Harris26, Vanessa Haverd9, Richard A. Houghton27, George C. Hurtt18, Tatiana Ilyina8, Atul K. Jain28, Emilie Joetzjer10, Jed O. Kaplan29, Etsushi Kato, Kees Klein Goldewijk30, Kees Klein Goldewijk31, Jan Ivar Korsbakken, Peter Landschützer8, Siv K. Lauvset16, Nathalie Lefèvre32, Andrew Lenton33, Andrew Lenton34, Sebastian Lienert35, Danica Lombardozzi36, Gregg Marland21, Patrick C. McGuire37, Joe R. Melton, Nicolas Metzl32, David R. Munro38, Julia E. M. S. Nabel8, Shin-Ichiro Nakaoka39, Craig Neill34, Abdirahman M Omar34, Abdirahman M Omar16, Tsuneo Ono, Anna Peregon40, Anna Peregon10, Denis Pierrot13, Denis Pierrot14, Benjamin Poulter41, Gregor Rehder42, Laure Resplandy43, Eddy Robertson44, Christian Rödenbeck8, Roland Séférian10, Jörg Schwinger16, Jörg Schwinger31, Naomi E. Smith5, Naomi E. Smith45, Pieter P. Tans20, Hanqin Tian46, Bronte Tilbrook33, Bronte Tilbrook34, Francesco N. Tubiello47, Guido R. van der Werf48, Andy Wiltshire44, Sönke Zaehle8 
École Normale Supérieure1, University of Exeter2, Norwich Research Park3, Alfred Wegener Institute for Polar and Marine Research4, Wageningen University and Research Centre5, University of Groningen6, Ludwig Maximilian University of Munich7, Max Planck Society8, Commonwealth Scientific and Industrial Research Organisation9, Centre national de la recherche scientifique10, Stanford University11, Karlsruhe Institute of Technology12, Atlantic Oceanographic and Meteorological Laboratory13, Cooperative Institute for Marine and Atmospheric Studies14, Geophysical Institute, University of Bergen15, Bjerknes Centre for Climate Research16, Japan Agency for Marine-Earth Science and Technology17, University of Maryland, College Park18, National Institute of Water and Atmospheric Research19, National Oceanic and Atmospheric Administration20, Appalachian State University21, Flanders Marine Institute22, Augsburg College23, ETH Zurich24, Leibniz Institute of Marine Sciences25, University of East Anglia26, Woods Hole Research Center27, University of Illinois at Urbana–Champaign28, University of Hong Kong29, Utrecht University30, Netherlands Environmental Assessment Agency31, University of Paris32, University of Tasmania33, Hobart Corporation34, University of Bern35, National Center for Atmospheric Research36, University of Reading37, Cooperative Institute for Research in Environmental Sciences38, National Institute for Environmental Studies39, Russian Academy of Sciences40, Goddard Space Flight Center41, Leibniz Institute for Baltic Sea Research42, Princeton University43, Met Office44, Lund University45, Auburn University46, Food and Agriculture Organization47, VU University Amsterdam48
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land use change, and show that the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere is a measure of imperfect data and understanding of the contemporary carbon cycle.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the "global carbon budget" – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (E_FF) are based on energy statistics and cement production data, while emissions from land use change (E_LUC), mainly deforestation, are based on land use and land use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (G_ATM) is computed from the annual changes in concentration. The ocean CO2 sink (S_OCEAN) and terrestrial CO2 sink (S_LAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (B_IM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2009–2018), E_FF was 9.5±0.5 GtC yr⁻¹, E_LUC 1.5±0.7 GtC yr⁻¹, G_ATM 4.9±0.02 GtC yr⁻¹ (2.3±0.01 ppm yr⁻¹), S_OCEAN 2.5±0.6 GtC yr⁻¹, and S_LAND 3.2±0.6 GtC yr⁻¹, with a budget imbalance B_IM of 0.4 GtC yr⁻¹ indicating overestimated emissions and/or underestimated sinks. For the year 2018 alone, the growth in E_FF was about 2.1% and fossil emissions increased to 10.0±0.5 GtC yr⁻¹, reaching 10 GtC yr⁻¹ for the first time in history, E_LUC was 1.5±0.7 GtC yr⁻¹, for total anthropogenic CO2 emissions of 11.5±0.9 GtC yr⁻¹ (42.5±3.3 GtCO2). Also for 2018, G_ATM was 5.1±0.2 GtC yr⁻¹ (2.4±0.1 ppm yr⁻¹), S_OCEAN was 2.6±0.6 GtC yr⁻¹, and S_LAND was 3.5±0.7 GtC yr⁻¹, with a B_IM of 0.3 GtC. The global atmospheric CO2 concentration reached 407.38±0.1 ppm averaged over 2018. For 2019, preliminary data for the first 6–10 months indicate a reduced growth in E_FF of +0.6% (range of −0.2% to 1.5%) based on national emissions projections for China, the USA, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. Overall, the mean and trend in the five components of the global carbon budget are consistently estimated over the period 1959–2018, but discrepancies of up to 1 GtC yr⁻¹ persist for the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations shows (1) no consensus in the mean and trend in land use change emissions over the last decade, (2) a persistent low agreement between the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding of the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018a, b, 2016, 2015a, b, 2014, 2013). The data generated by this work are available at https://doi.org/10.18160/gcp-2019 (Friedlingstein et al., 2019).
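The budget imbalance B_IM described above is simply the mismatch between estimated total emissions and the sum of the atmospheric growth and the estimated sinks; the following worked check (illustrative only, using the 2009–2018 decadal means quoted in the abstract) reproduces the 0.4 GtC yr⁻¹ figure:

```python
# Decadal means (GtC / yr) for 2009-2018, as quoted in the abstract.
E_FF    = 9.5   # fossil CO2 emissions
E_LUC   = 1.5   # land use change emissions
G_ATM   = 4.9   # atmospheric CO2 growth
S_OCEAN = 2.5   # ocean CO2 sink
S_LAND  = 3.2   # terrestrial CO2 sink

# Budget imbalance: total emissions minus the sum of the estimated changes
# in the atmosphere, ocean, and terrestrial biosphere.
B_IM = (E_FF + E_LUC) - (G_ATM + S_OCEAN + S_LAND)
print(f"B_IM = {B_IM:+.1f} GtC/yr")   # +0.4, matching the imbalance quoted above
```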

981 citations


Journal ArticleDOI
27 Feb 2019-Nature
TL;DR: Using a recently developed formalism called topological quantum chemistry, a high-throughput search of ‘high-quality’ materials in the Inorganic Crystal Structure Database is performed and it is found that more than 27 per cent of all materials in nature are topological.
Abstract: Using a recently developed formalism called topological quantum chemistry, we perform a high-throughput search of 'high-quality' materials (for which the atomic positions and structure have been measured very accurately) in the Inorganic Crystal Structure Database in order to identify new topological phases. We develop codes to compute all characters of all symmetries of 26,938 stoichiometric materials, and find 3,307 topological insulators, 4,078 topological semimetals and no fragile phases. For these 7,385 materials we provide the electronic band structure, including some electronic properties (bandgap and number of electrons), symmetry indicators, and other topological information. Our results show that more than 27 per cent of all materials in nature are topological. We provide an open-source code that checks the topology of any material and allows other researchers to reproduce our results.
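As a quick check of the quoted fraction (simple arithmetic on the numbers given in the abstract, not part of the paper's published code):

```python
insulators = 3307
semimetals = 4078
screened   = 26938

topological = insulators + semimetals          # 7385 materials flagged as topological
fraction = topological / screened
print(topological, f"{fraction:.1%}")          # 7385, ~27.4% -> "more than 27 per cent"
```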

782 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott2, T. D. Abbott, Fausto Acernese3 +1157 more · Institutions (70)
TL;DR: In this paper, the authors improved initial estimates of the binary's properties, including component masses, spins, and tidal parameters, using the known source location, improved modeling, and recalibrated Virgo data.
Abstract: On August 17, 2017, the Advanced LIGO and Advanced Virgo gravitational-wave detectors observed a low-mass compact binary inspiral. The initial sky localization of the source of the gravitational-wave signal, GW170817, allowed electromagnetic observatories to identify NGC 4993 as the host galaxy. In this work, we improve initial estimates of the binary's properties, including component masses, spins, and tidal parameters, using the known source location, improved modeling, and recalibrated Virgo data. We extend the range of gravitational-wave frequencies considered down to 23 Hz, compared to 30 Hz in the initial analysis. We also compare results inferred using several signal models, which are more accurate and incorporate additional physical effects as compared to the initial analysis. We improve the localization of the gravitational-wave source to a 90% credible region of 16 deg². We find tighter constraints on the masses, spins, and tidal parameters, and continue to find no evidence for nonzero component spins. The component masses are inferred to lie between 1.00 and 1.89 M☉ when allowing for large component spins, and to lie between 1.16 and 1.60 M☉ (with a total mass 2.73^{+0.04}_{−0.01} M☉) when the spins are restricted to be within the range observed in Galactic binary neutron stars. Using a precessing model and allowing for large component spins, we constrain the dimensionless spins of the components to be less than 0.50 for the primary and 0.61 for the secondary. Under minimal assumptions about the nature of the compact objects, our constraints for the tidal deformability parameter Λ are (0, 630) when we allow for large component spins, and 300^{+420}_{−230} (using a 90% highest posterior density interval) when restricting the magnitude of the component spins, ruling out several equation-of-state models at the 90% credible level. Finally, with LIGO and GEO600 data, we use a Bayesian analysis to place upper limits on the amplitude and spectral energy density of a possible postmerger signal.

715 citations


Journal ArticleDOI
TL;DR: This paper contributes an overview of these new P2P electricity markets, starting with the motivation, challenges, and market designs and moving to potential future developments in this field, and provides recommendations while considering a test case.
Abstract: The advent of more proactive consumers, the so-called “prosumers”, with production and storage capabilities, is empowering the consumers and bringing new opportunities and challenges to the operation of power systems in a market environment. Recently, a novel proposal for the design and operation of electricity markets has emerged: these so-called peer-to-peer (P2P) electricity markets conceptually allow the prosumers to directly share their electrical energy and investment. Such P2P markets rely on a consumer-centric and bottom-up perspective by giving the opportunity to consumers to freely choose the way they buy their electric energy. A community can also be formed by prosumers who want to collaborate, or in terms of operational energy management. This paper contributes with an overview of these new P2P markets that starts with the motivation, challenges, market designs moving to the potential future developments in this field, providing recommendations while considering a test-case.

592 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott2, T. D. Abbott, Sheelu Abraham +1138 more · Institutions (6)
TL;DR: In this paper, the authors present four tests of the consistency of the data with binary black hole gravitational waveforms predicted by general relativity, including subtracting the best-fit waveform from the data and checking the consistency of the residual with detector noise.
Abstract: The detection of gravitational waves by Advanced LIGO and Advanced Virgo provides an opportunity to test general relativity in a regime that is inaccessible to traditional astronomical observations and laboratory tests. We present four tests of the consistency of the data with binary black hole gravitational waveforms predicted by general relativity. One test subtracts the best-fit waveform from the data and checks the consistency of the residual with detector noise. The second test checks the consistency of the low- and high-frequency parts of the observed signals. The third test checks that phenomenological deviations introduced in the waveform model (including in the post-Newtonian coefficients) are consistent with 0. The fourth test constrains modifications to the propagation of gravitational waves due to a modified dispersion relation, including that from a massive graviton. We present results both for individual events and also results obtained by combining together particularly strong events from the first and second observing runs of Advanced LIGO and Advanced Virgo, as collected in the catalog GWTC-1. We do not find any inconsistency of the data with the predictions of general relativity and improve our previously presented combined constraints by factors of 1.1 to 2.5. In particular, we bound the mass of the graviton to be mg≤4.7×10-23 eV/c2 (90% credible level), an improvement of a factor of 1.6 over our previously presented results. Additionally, we check that the four gravitational-wave events published for the first time in GWTC-1 do not lead to stronger constraints on alternative polarizations than those published previously.
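For a sense of scale, the quoted graviton-mass bound can be translated into a lower bound on the graviton Compton wavelength via the standard relation λ_g = h / (m_g c); this back-of-envelope conversion is illustrative and not a result reported in the paper:

```python
h_eV_s = 4.135667696e-15       # Planck constant in eV·s
c      = 2.99792458e8          # speed of light in m/s
m_g    = 4.7e-23               # graviton mass bound in eV/c^2 (90% credible level, from the abstract)

lambda_g = h_eV_s * c / m_g    # Compton wavelength lower bound in metres
print(f"lambda_g >= {lambda_g:.2e} m  (~{lambda_g / 3.086e16:.1f} pc)")   # ~2.6e16 m, roughly 0.9 pc
```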

482 citations


Posted Content
TL;DR: The fundamental differences with other technologies, the most important open research issues to tackle, and the reasons why the use of reconfigurable intelligent surfaces necessitates rethinking the communication-theoretic models currently employed in wireless networks are elaborated.
Abstract: The future of mobile communications looks exciting with the potential new use cases and challenging requirements of future 6th generation (6G) and beyond wireless networks. Since the beginning of the modern era of wireless communications, the propagation medium has been perceived as a randomly behaving entity between the transmitter and the receiver, which degrades the quality of the received signal due to the uncontrollable interactions of the transmitted radio waves with the surrounding objects. The recent advent of reconfigurable intelligent surfaces in wireless communications enables, on the other hand, network operators to control the scattering, reflection, and refraction characteristics of the radio waves, by overcoming the negative effects of natural wireless propagation. Recent results have revealed that reconfigurable intelligent surfaces can effectively control the wavefront, e.g., the phase, amplitude, frequency, and even polarization, of the impinging signals without the need of complex decoding, encoding, and radio frequency processing operations. Motivated by the potential of this emerging technology, the present article is aimed to provide the readers with a detailed overview and historical perspective on state-of-the-art solutions, and to elaborate on the fundamental differences with other technologies, the most important open research issues to tackle, and the reasons why the use of reconfigurable intelligent surfaces necessitates to rethink the communication-theoretic models currently employed in wireless networks. This article also explores theoretical performance limits of reconfigurable intelligent surface-assisted communication systems using mathematical techniques and elaborates on the potential use cases of intelligent surfaces in 6G and beyond wireless networks.

463 citations


Proceedings Article
08 Dec 2019
TL;DR: This work shows that this "lazy training" phenomenon is not specific to over-parameterized neural networks, and is due to a choice of scaling that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels.
Abstract: In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this "lazy training" phenomenon is not specific to over-parameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that "lazy training" is behind the many successes of neural networks in difficult high dimensional tasks.
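The scaling mechanism described above can be seen in a one-parameter toy problem (a minimal sketch under simplifying assumptions, not the authors' experiments): rescale the centered model output by a factor alpha and fit a fixed target; as alpha grows, the target is still fitted but the parameter barely moves from its initialization, which is the "lazy" regime in which the model is effectively linearized.

```python
import numpy as np

def lazy_train(alpha, w0=0.3, y=0.5, lr=0.5, steps=2000):
    """Fit the rescaled model alpha * (tanh(w) - tanh(w0)) to target y by gradient descent."""
    w = w0
    for _ in range(steps):
        resid = alpha * (np.tanh(w) - np.tanh(w0)) - y
        grad = resid * alpha * (1.0 - np.tanh(w) ** 2)   # d loss / d w, with loss = 0.5 * resid^2
        w -= (lr / alpha ** 2) * grad                    # step size scaled so the dynamics stay O(1)
    resid = alpha * (np.tanh(w) - np.tanh(w0)) - y
    return w, resid

for alpha in (1.0, 10.0, 100.0):
    w, resid = lazy_train(alpha)
    print(f"alpha={alpha:6.1f}  |w - w0| = {abs(w - 0.3):.4f}   residual = {resid:+.2e}")
# Larger alpha: the target is still fitted, but the parameter barely moves,
# so the model stays close to its linearization around w0 (the "lazy" regime).
```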

441 citations


Journal ArticleDOI
16 May 2019-Cell
TL;DR: An ∼12-fold expanded global ocean DNA virome dataset of 195,728 viral populations, now including the Arctic Ocean, is established, and it is validated that these populations form discrete genotypic clusters.

Journal ArticleDOI
TL;DR: In this paper, the main advancements of the Beijing Climate Center (BCC) climate system model from phase 5 of the Coupled Model Intercomparison Project (CMIP5) to phase 6 (CMP6) are presented, in terms of physical parameterizations and model performance.
Abstract: The main advancements of the Beijing Climate Center (BCC) climate system model from phase 5 of the Coupled Model Intercomparison Project (CMIP5) to phase 6 (CMIP6) are presented, in terms of physical parameterizations and model performance. BCC-CSM1.1 and BCC-CSM1.1m are the two models involved in CMIP5, whereas BCC-CSM2-MR, BCC-CSM2-HR, and BCC-ESM1.0 are the three models configured for CMIP6. Historical simulations from 1851 to 2014 from BCC-CSM2-MR (CMIP6) and from 1851 to 2005 from BCC-CSM1.1m (CMIP5) are used for model assessment. The evaluation metrics include the following: (a) the energy budget at top-of-atmosphere; (b) surface air temperature, precipitation, and atmospheric circulation for the global and East Asia regions; (c) the sea surface temperature (SST) in the tropical Pacific; (d) sea-ice extent and thickness and Atlantic Meridional Overturning Circulation (AMOC); and (e) climate variations at different timescales, such as the global warming trend in the 20th century, the stratospheric quasi-biennial oscillation (QBO), the Madden–Julian Oscillation (MJO), and the diurnal cycle of precipitation. Compared with BCC-CSM1.1m, BCC-CSM2-MR shows significant improvements in many aspects including the tropospheric air temperature and circulation at global and regional scales in East Asia and climate variability at different timescales, such as the QBO, the MJO, the diurnal cycle of precipitation, interannual variations of SST in the equatorial Pacific, and the long-term trend of surface air temperature.

Journal ArticleDOI
Nabila Aghanim1, Yashar Akrami2, Yashar Akrami3, Yashar Akrami4 +213 more · Institutions (66)
TL;DR: The 2018 Planck CMB likelihoods were presented in this paper, following a hybrid approach similar to the 2015 one, with different approximations at low and high multipoles, and implementing several methodological and analysis refinements.
Abstract: This paper describes the 2018 Planck CMB likelihoods, following a hybrid approach similar to the 2015 one, with different approximations at low and high multipoles, and implementing several methodological and analysis refinements. With more realistic simulations, and better correction and modelling of systematics, we can now make full use of the High Frequency Instrument polarization data. The low-multipole 100x143 GHz EE cross-spectrum constrains the reionization optical-depth parameter $\tau$ to better than 15% (in combination with the other low- and high-$\ell$ likelihoods). We also update the 2015 baseline low-$\ell$ joint TEB likelihood based on the Low Frequency Instrument data, which provides a weaker $\tau$ constraint. At high multipoles, a better model of the temperature-to-polarization leakage and corrections for the effective calibrations of the polarization channels (polarization efficiency or PE) allow us to fully use the polarization spectra, improving the constraints on the $\Lambda$CDM parameters by 20 to 30% compared to TT-only constraints. Tests on the modelling of the polarization demonstrate good consistency, with some residual modelling uncertainties, the accuracy of the PE modelling being the main limitation. Using our various tests, simulations, and comparison between different high-$\ell$ implementations, we estimate the consistency of the results to be better than the 0.5$\sigma$ level. Minor curiosities already present before (differences between $\ell<800$ and $\ell>800$ parameters or the preference for more smoothing of the $C_\ell$ peaks) are shown to be driven by the TT power spectrum and are not significantly modified by the inclusion of polarization. Overall, the legacy Planck CMB likelihoods provide a robust tool for constraining the cosmological model and represent a reference for future CMB observations. (Abridged)

Journal ArticleDOI
07 Jun 2019-Science
TL;DR: It is found that up to early migration, neural crest cells progress through a sequence of common transcriptional states, followed by fate bifurcations during migration that can be formalized as a series of sequential binary decisions.
Abstract: Neural crest cells are embryonic progenitors that generate numerous cell types in vertebrates. With single-cell analysis, we show that mouse trunk neural crest cells become biased toward neuronal lineages when they delaminate from the neural tube, whereas cranial neural crest cells acquire ectomesenchyme potential dependent on activation of the transcription factor Twist1. The choices that neural crest cells make to become sensory, glial, autonomic, or mesenchymal cells can be formalized as a series of sequential binary decisions. Each branch of the decision tree involves initial coactivation of bipotential properties followed by gradual shifts toward commitment. Competing fate programs are coactivated before cells acquire fate-specific phenotypic traits. Determination of a specific fate is achieved by increased synchronization of relevant programs and concurrent repression of competing fate programs.

Journal ArticleDOI
TL;DR: The review indicates that disadvantaged social and socioeconomic conditions contribute to low HL levels, whereby low SES, and particularly educational attainment, is the most important determinant of HL, and that HL mediates the relationship between SES and health status, quality of life, specific health-related outcomes, health behaviors and use of preventive services.
Abstract: While socioeconomic disparities are among the most fundamental causes of health disparities, socioeconomic status (SES) does not impact health directly. One of the potential mediating factors that link SES and health is health literacy (HL). Yet although HL can be considered a modifiable risk factor of socioeconomic disparities in health, the relationship between SES, HL and health disparities is not well understood. This study reviewed the evidence regarding the mediating role of HL in the relationship between socioeconomic and health disparities. Medline, Cinahl, Embase, PsychInfo, Eric, Web of Science, Google, Google Scholar, Mednar, Doaj and Worldcat were used to retrieve studies that specifically addressed socioeconomic and socio-demographic factors related to low HL levels, as well as the mediating role of HL in the relationship between SES and disparities in health outcomes. Selected studies were assessed for methodological quality. Sixteen published studies were retained for inclusion and content analyzed using the constant comparison method. The review indicates that disadvantaged social and socioeconomic conditions contribute to low HL levels, whereby low SES, and particularly educational attainment, is the most important determinant of HL, and that HL mediates the relationship between SES and health status, quality of life, specific health-related outcomes, health behaviors and use of preventive services. HL can be considered as a modifiable risk factor of socioeconomic disparities in health. Enhancing the level of HL in the population or making health services more accessible to people with low HL may be a means to reach a greater equity in health.

Journal ArticleDOI
TL;DR: In this article, CNRM-ESM2-1, the second-generation Earth system (ES) model developed by CNRM-CERFACS for the sixth phase of the Coupled Model Intercomparison Project (CMIP6), is introduced; it extends the Atmosphere-Ocean General Circulation Model CNRM-CM6-1 by adding interactive ES components such as the carbon cycle, aerosols, and atmospheric chemistry.
Abstract: This study introduces CNRM‐ESM2‐1, the Earth system (ES) model of second generation developed by CNRM‐CERFACS for the sixth phase of the Coupled Model Intercomparison Project (CMIP6). CNRM‐ESM2‐1 offers a higher model complexity than the Atmosphere‐Ocean General Circulation Model CNRM‐CM6‐1 by adding interactive ES components such as carbon cycle, aerosols, and atmospheric chemistry. As both models share the same code, physical parameterizations, and grid resolution, they offer a fully traceable framework to investigate how far the represented ES processes impact the model performance over present‐day, response to external forcing and future climate projections. Using a large variety of CMIP6 experiments, we show that represented ES processes impact more prominently the model response to external forcing than the model performance over present‐day. Both models display comparable performance at replicating modern observations although the mean climate of CNRM‐ESM2‐1 is slightly warmer than that of CNRM‐CM6‐1. This difference arises from land cover‐aerosol interactions where the use of different soil vegetation distributions between both models impacts the rate of dust emissions. This interaction results in a smaller aerosol burden in CNRM‐ESM2‐1 than in CNRM‐CM6‐1, leading to a different surface radiative budget and climate. Greater differences are found when comparing the model response to external forcing and future climate projections. Represented ES processes damp future warming by up to 10% in CNRM‐ESM2‐1 with respect to CNRM‐CM6‐1. The representation of land vegetation and the CO2‐water‐stomatal feedback between both models explain about 60% of this difference. The remainder is driven by other ES feedbacks such as the natural aerosol feedback.

Journal ArticleDOI
Fausto Acernese1, M. Agathos2, Lloyd Paul Aiello, A. Allocca +354 more · Institutions (24)
TL;DR: The squeezing injection was fully automated, and over the first 5 months of the third joint LIGO–Virgo observing run (O3), squeezing was applied for more than 99% of the science time, during which several gravitational-wave candidates were recorded.
Abstract: Current interferometric gravitational-wave detectors are limited by quantum noise over a wide range of their measurement bandwidth. One method to overcome the quantum limit is the injection of squeezed vacuum states of light into the interferometer’s dark port. Here, we report on the successful application of this quantum technology to improve the shot noise limited sensitivity of the Advanced Virgo gravitational-wave detector. A sensitivity enhancement of up to 3.2±0.1 dB beyond the shot noise limit is achieved. This nonclassical improvement corresponds to a 5%–8% increase of the binary neutron star horizon. The squeezing injection was fully automated and over the first 5 months of the third joint LIGO-Virgo observation run O3 squeezing was applied for more than 99% of the science time. During this period several gravitational-wave candidates have been recorded.
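For readers less used to the decibel scale, the quoted 3.2 dB enhancement corresponds to roughly a factor-of-two reduction in shot-noise power spectral density (a simple unit conversion, not a result from the paper); the binary neutron star horizon gain is smaller (5%–8%) because squeezing improves only the shot-noise-limited part of the detection band.

```python
import math

squeezing_db = 3.2
power_factor = 10 ** (squeezing_db / 10)        # reduction factor in shot-noise power spectral density
amplitude_factor = math.sqrt(power_factor)      # corresponding reduction in amplitude spectral density

print(f"noise power reduced by     x{power_factor:.2f}")      # ~2.1
print(f"noise amplitude reduced by x{amplitude_factor:.2f}")  # ~1.4
```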

Journal ArticleDOI
TL;DR: In this paper, a systematic review of 118 peer-reviewed journal articles published between 1961 and 2017 provides an integrative picture of the state of the art of the family firm innovation literature.
Abstract: Through a systematic review of 118 peer-reviewed journal articles published between 1961 and 2017, this article provides an integrative picture of the state of the art of the family firm innovation literature. Our aim is to widen existing understanding of innovation in family firms by building a theoretical bridge with studies in the mainstream innovation literature. Specifically, in identifying the main gaps in the literature and providing future research directions, our critical and dynamic picture of family-specific determinants of innovation is intended to advance the debate on innovation in general, and family firms in particular.

Proceedings Article
16 Jun 2019
TL;DR: In this paper, a single CNN is simultaneously a dense feature descriptor and a feature detector, and the obtained keypoints are more stable than their traditional counterparts based on early detection of low-level structures.
Abstract: In this work we address the problem of finding reliable pixel-level correspondences under difficult imaging conditions. We propose an approach where a single convolutional neural network plays a dual role: It is simultaneously a dense feature descriptor and a feature detector. By postponing the detection to a later stage, the obtained keypoints are more stable than their traditional counterparts based on early detection of low-level structures. We show that this model can be trained using pixel correspondences extracted from readily available large-scale SfM reconstructions, without any further annotations. The proposed method obtains state-of-the-art performance on both the difficult Aachen Day-Night localization dataset and the InLoc indoor localization benchmark, as well as competitive performance on other benchmarks for image matching and 3D reconstruction.
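A minimal sketch of the describe-then-detect idea discussed above (an illustration under simplified assumptions, not the authors' implementation; the random array stands in for a real CNN feature map): every pixel of a dense feature map carries a descriptor, and keypoints are selected afterwards as local maxima of a per-pixel detection score.

```python
import numpy as np

def detect_and_describe(fmap, threshold=0.5):
    """fmap: (C, H, W) dense CNN feature map. Returns keypoints and their descriptors.

    Simplified stand-in for a describe-then-detect pipeline: a pixel is a keypoint if its
    strongest channel response is a 3x3 spatial local maximum above `threshold`; the
    descriptor is the L2-normalised feature vector at that pixel.
    """
    C, H, W = fmap.shape
    score = fmap.max(axis=0)                       # per-pixel detection score
    keypoints, descriptors = [], []
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = score[i - 1:i + 2, j - 1:j + 2]
            if score[i, j] >= threshold and score[i, j] == patch.max():
                d = fmap[:, i, j]
                descriptors.append(d / (np.linalg.norm(d) + 1e-8))
                keypoints.append((i, j))
    return np.array(keypoints), np.array(descriptors)

rng = np.random.default_rng(0)
fmap = rng.random((16, 32, 32))                    # stand-in for a real CNN feature map
kps, descs = detect_and_describe(fmap, threshold=0.9)
print(len(kps), "keypoints;", descs.shape, "descriptor matrix")
```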

Journal ArticleDOI
Abstract: Zinger, Lucie; Bonin, Aurélie; Alsos, Inger G; Bálint, Miklós; Bik, Holly; Boyer, Frédéric; Chariton, Anthony A; Creer, Simon; Coissac, Eric; Deagle, Bruce E; De Barba, Marta; Dickie, Ian A; Dumbrell, Alex J; Ficetola, Gentile Francesco; Fierer, Noah; Fumagalli, Luca; Gilbert, M Thomas P; Jarman, Simon; Jumpponen, Ari; Kauserud, Håvard; Orlando, Ludovic; Pansu, Johan; Pawlowski, Jan; Tedersoo, Leho; Thomsen, Philip Francis; Willerslev, Eske; Taberlet, Pierre

Journal ArticleDOI
TL;DR: This young research area requires standardization of techniques and bioinformatic analysis, as well as complete, curated databases, to reach a level of insight similar to that of the bacterial microbiota.
Abstract: The gut microbiota is a dense and diverse ecosystem that is involved in many physiological functions as well as in disease pathogenesis. It is dominated by bacteria, which have been extensively studied in the past 15 years; however, other microorganisms, such as fungi, phages, archaea and protists, are also present in the gut microbiota. Exploration of the fungal component, namely, the mycobiota, is at an early stage, and several specific technical challenges are associated with mycobiota analysis. The number of fungi in the lower gastrointestinal tract is far lower than that of bacteria, but fungal cells are much larger and much more complex than bacterial cells. In addition, a role of the mycobiota in disease, notably in IBD, is indicated by both descriptive data in humans and mechanistic data in mice. Interactions between bacteria and fungi within the gut, their functional roles and their interplay with the host and its immune system are fascinating areas that researchers are just beginning to investigate. In this Review, we discuss the newest data on the gut mycobiota and explore both the technical aspects of its study and its role in health and gastrointestinal diseases. The authors review the newest data on the gut fungal microbiota. They explore technical aspects of its analysis, how the mycobiome is influenced by immune and environmental factors, including fungi–bacteria interactions, and links between the mycobiota and gut diseases.


Proceedings Article
16 Apr 2019
TL;DR: In this paper, the authors studied the sample complexity of the Sinkhorn divergences (SDs), a regularized variant of OT distances which can interpolate, depending on the regularization strength, between OT and maximum mean discrepancies (MMD), and showed that the optimizers of regularized SDs are bounded in a Sobolev ball independent of the two measures.
Abstract: Optimal transport (OT) and maximum mean discrepancies (MMD) are now routinely used in machine learning to compare probability measures. We focus in this paper on \emph{Sinkhorn divergences} (SDs), a regularized variant of OT distances which can interpolate, depending on the regularization strength $\varepsilon$, between OT ($\varepsilon=0$) and MMD ($\varepsilon=\infty$). Although the tradeoff induced by that regularization is now well understood computationally (OT, SDs and MMD require respectively $O(n^3\log n)$, $O(n^2)$ and $n^2$ operations given a sample size $n$), much less is known in terms of their \emph{sample complexity}, namely the gap between these quantities, when evaluated using finite samples \emph{vs.} their respective densities. Indeed, while the sample complexity of OT and MMD stand at two extremes, $1/n^{1/d}$ for OT in dimension $d$ and $1/\sqrt{n}$ for MMD, that for SDs has only been studied empirically. In this paper, we \emph{(i)} derive a bound on the approximation error made with SDs when approximating OT as a function of the regularizer $\varepsilon$, \emph{(ii)} prove that the optimizers of regularized OT are bounded in a Sobolev (RKHS) ball independent of the two measures and \emph{(iii)} provide the first sample complexity bound for SDs, obtained by reformulating SDs as a maximization problem in a RKHS. We thus obtain a scaling in $1/\sqrt{n}$ (as in MMD), with a constant that depends however on $\varepsilon$, making the bridge between OT and MMD complete.
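For concreteness, here is a compact numerical sketch of entropy-regularized OT and of one common debiased form of the Sinkhorn divergence, S_ε(a, b) = OT_ε(a, b) − ½ OT_ε(a, a) − ½ OT_ε(b, b) (a generic implementation of the standard Sinkhorn iterations for illustration, not the paper's code; the point clouds and ε values are arbitrary):

```python
import numpy as np

def sinkhorn_cost(x, y, eps, iters=500):
    """Entropy-regularised OT cost between uniform point clouds x (n, d) and y (m, d)."""
    n, m = len(x), len(y)
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)      # squared Euclidean cost matrix
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):                                   # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                          # (approximately) optimal transport plan
    return (P * C).sum()

def sinkhorn_divergence(x, y, eps):
    """Debiased divergence; approaches OT as eps -> 0 and an MMD-like discrepancy as eps -> inf."""
    return sinkhorn_cost(x, y, eps) - 0.5 * sinkhorn_cost(x, x, eps) - 0.5 * sinkhorn_cost(y, y, eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))
y = rng.normal(size=(100, 2)) + np.array([1.0, 0.0])
for eps in (0.1, 1.0, 10.0):
    print(f"eps={eps:<5}  S_eps = {sinkhorn_divergence(x, y, eps):.3f}")
```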

Journal ArticleDOI
14 Nov 2019-Cell
TL;DR: The relative contribution of gene expression changes is found to be significantly lower in polar than in non-polar waters, and it is hypothesized that in polar regions, alterations in community activity in response to ocean warming will be driven more strongly by changes in organismal composition than by gene regulatory mechanisms.

Journal ArticleDOI
14 Nov 2019-Cell
TL;DR: This work investigates the latitudinal gradients and global predictors of plankton diversity across archaea, bacteria, eukaryotes, and major virus clades using both molecular and imaging data from Tara Oceans to show a decline of diversity for most planktonic groups toward the poles.


Journal ArticleDOI
TL;DR: In this article, the authors propose to attribute such costs through exogenous network charges in several alternative ways, i.e., uniformly, based on the electrical distance between agents and by zones.
Abstract: The deployment of distributed energy resources, combined with a more proactive demand side management, is inducing a new paradigm in power system operation and electricity markets. Within a consumer-centric market framework, peer-to-peer (P2P) approaches have gained substantial interest. P2P markets rely on multibilateral negotiation among all agents to match supply and demand. These markets can yield a complete mapping of exchanges onto the grid, hence allowing the sharing of costs related to the use of common infrastructure and services to be rethought. We propose here to attribute such costs through exogenous network charges in several alternative ways, i.e., uniformly, based on the electrical distance between agents, and by zones. This variety covers the main grid physical and regulatory configurations. Since attribution mechanisms are defined in an exogenous manner and affect each P2P trade, they eventually steer the market outcome towards covering the grid exploitation costs. They can even be used to relieve stress on the grid when necessary. The interest of our approach is illustrated on a test case using the IEEE 39-bus test system, underlining the impact of attribution mechanisms on trades and grid usage.
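To make the attribution idea concrete, the schematic sketch below (made-up peers, quantities, tariff, and distances; not the paper's IEEE 39-bus test case) attaches an exogenous charge to each bilateral trade as tariff × electrical distance × energy; setting the distance to 1 for every pair recovers the uniform policy, and replacing it with a zone lookup recovers the zonal one.

```python
# Each trade is (seller, buyer, energy in kWh); distances are illustrative "electrical distances".
trades = [("pv_1", "house_a", 3.0), ("pv_1", "house_b", 1.5), ("wind_1", "house_a", 4.0)]
distance = {("pv_1", "house_a"): 2.0, ("pv_1", "house_b"): 5.0, ("wind_1", "house_a"): 8.0}

def network_charge(seller, buyer, energy, tariff=0.01, policy="distance"):
    """Exogenous per-trade grid charge in currency units.

    policy = "uniform"  -> same charge per kWh for every trade
    policy = "distance" -> charge grows with the electrical distance between the two peers
    """
    d = 1.0 if policy == "uniform" else distance[(seller, buyer)]
    return tariff * d * energy

for s, b, e in trades:
    print(f"{s} -> {b}: {e} kWh, charge = {network_charge(s, b, e):.3f}")
```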

Journal ArticleDOI
01 Dec 2019-Appetite
TL;DR: A systematic literature review of different motives and barriers and their association with purchase decisions in the context of organic food shows implications for scholars, managers, and policymakers interested in better understanding issues related to organic food consumption.

Posted Content
TL;DR: Demucs is proposed, a new waveform-to-waveform model, which has an architecture closer to models for audio generation with more capacity on the decoder, and human evaluations show that Demucs has significantly higher quality than Conv-Tasnet, but slightly more contamination from other sources, which explains the difference in SDR.
Abstract: Source separation for music is the task of isolating contributions, or stems, from different instruments recorded individually and arranged together to form a song. Such components include voice, bass, drums and any other accompaniments. Contrary to many audio synthesis tasks where the best performances are achieved by models that directly generate the waveform, the state-of-the-art in source separation for music is to compute masks on the magnitude spectrum. In this paper, we compare two waveform domain architectures. We first adapt Conv-Tasnet, initially developed for speech source separation, to the task of music source separation. While Conv-Tasnet beats many existing spectrogram-domain methods, it suffers from significant artifacts, as shown by human evaluations. We propose instead Demucs, a novel waveform-to-waveform model, with a U-Net structure and bidirectional LSTM. Experiments on the MusDB dataset show that, with proper data augmentation, Demucs beats all existing state-of-the-art architectures, including Conv-Tasnet, with 6.3 SDR on average (and up to 6.8 with 150 extra training songs, even surpassing the IRM oracle for the bass source). Using recent developments in model quantization, Demucs can be compressed down to 120MB without any loss of accuracy. We also provide human evaluations, showing that Demucs benefits from a large advantage in terms of the naturalness of the audio. However, it suffers from some bleeding, especially between the vocals and other sources.
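Below is a heavily simplified PyTorch sketch of a waveform-to-waveform encoder/decoder with a bidirectional LSTM bottleneck and U-Net-style skip connections, in the spirit of the architecture described above; the layer counts, kernel sizes, and all other hyperparameters here are illustrative and do not reproduce the published Demucs configuration.

```python
import torch
from torch import nn

class TinyWaveUNet(nn.Module):
    """Mixture waveform in -> one waveform per source out, with a BiLSTM bottleneck and skips."""

    def __init__(self, sources=4, channels=32, depth=3):
        super().__init__()
        self.sources = sources
        self.encoder, self.decoder = nn.ModuleList(), nn.ModuleList()
        ch_in = 1
        for i in range(depth):
            ch_out = channels * 2 ** i
            self.encoder.append(nn.Sequential(
                nn.Conv1d(ch_in, ch_out, kernel_size=8, stride=4, padding=2), nn.ReLU()))
            ch_in = ch_out
        self.lstm = nn.LSTM(ch_in, ch_in, num_layers=2, batch_first=True, bidirectional=True)
        self.lstm_proj = nn.Linear(2 * ch_in, ch_in)
        for i in reversed(range(depth)):
            ch_out = sources if i == 0 else channels * 2 ** (i - 1)
            self.decoder.append(nn.Sequential(
                nn.ConvTranspose1d(channels * 2 ** i, ch_out, kernel_size=8, stride=4, padding=2),
                nn.ReLU() if i != 0 else nn.Identity()))

    def forward(self, mix):                      # mix: (batch, 1, time)
        skips, x = [], mix
        for enc in self.encoder:
            x = enc(x)
            skips.append(x)
        y, _ = self.lstm(x.transpose(1, 2))      # run the BiLSTM over the time axis
        x = self.lstm_proj(y).transpose(1, 2)
        for dec in self.decoder:
            x = dec(x + skips.pop())             # U-Net-style skip connection
        return x.reshape(mix.shape[0], self.sources, 1, -1)

model = TinyWaveUNet()
mix = torch.randn(2, 1, 4096)                    # two mono excerpts of 4096 samples
out = model(mix)
print(out.shape)                                 # (2, 4, 1, 4096): one waveform per source
```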

Journal ArticleDOI
TL;DR: In this paper, the authors summarized the current progress in the field of porous carbons, especially N-enriched carbons obtained from the carbonization of MOFs with or without additional N-containing compounds.